Sample records for empirical modelling techniques

  1. Trend extraction using empirical mode decomposition and statistical empirical mode decomposition: Case study: Kuala Lumpur stock market

    NASA Astrophysics Data System (ADS)

    Jaber, Abobaker M.

    2014-12-01

    Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behaviour and to extract meaningful signals for reliable prediction. Using the Fourier Transform (FT), the methods select the significant decomposed signals that will be employed for signal prediction. The proposed techniques are developed by coupling the Holt-Winters method with Empirical Mode Decomposition (EMD) and with its smoothed extension, Statistical Empirical Mode Decomposition (SEMD). To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.
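    A minimal sketch of the decompose-then-forecast idea described above, assuming the PyEMD (EMD-signal) and statsmodels packages and a synthetic price series in place of the Kuala Lumpur index data; the component-selection step and forecasting settings are illustrative, not the authors' exact procedure.

    ```python
    # Illustrative EMD + Holt-Winters forecasting pipeline (not the authors' exact procedure).
    # Assumes the PyEMD (EMD-signal) and statsmodels packages; a synthetic series stands in
    # for the daily closing prices.
    import numpy as np
    from PyEMD import EMD
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(0)
    t = np.arange(500)
    prices = 1500 + 0.2 * t + 30 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 5, t.size)

    # 1) Decompose the series into intrinsic mode functions (IMFs) plus a residual trend.
    imfs = EMD().emd(prices)

    # 2) Keep only the "significant" components (here: drop the highest-frequency IMF,
    #    a stand-in for the paper's frequency-based selection step).
    selected = imfs[1:]

    # 3) Forecast each retained component with Holt-Winters exponential smoothing
    #    and sum the component forecasts to obtain the price forecast.
    horizon = 20
    forecast = np.zeros(horizon)
    for component in selected:
        fit = ExponentialSmoothing(component, trend="add").fit()
        forecast += fit.forecast(horizon)

    print("20-day-ahead forecast (first 5 values):", np.round(forecast[:5], 2))
    ```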

  2. Tracer kinetics of forearm endothelial function: comparison of an empirical method and a quantitative modeling technique.

    PubMed

    Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L

    2007-01-01

    Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), EWUR (Elbow-to-Wrist Uptake Ratio), and EWRUR (Elbow-to-Wrist Relative Uptake Ratio). However, modeling FEF requires more robust models. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Correlational analyses suggested a good correlation between the methods for RUR (r=.90) and EWUR (r=.79), but not for EWRUR (r=.34); however, Altman-Bland plots found poor agreement between the methods for all 3 parameters. These results indicate that there is a large discrepancy between the empirical and the computational method for FEF. Further work is needed to establish the physiological and mathematical validity of the 2 modeling methods.
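    The distinction the abstract draws between correlation and agreement can be made concrete with a Bland-Altman (limits-of-agreement) calculation; the sketch below uses simulated paired RUR-like values rather than the study's data.

    ```python
    # Correlation vs. agreement: a Bland-Altman style comparison of two methods on
    # simulated paired measurements (stand-ins for the empirical and two-compartment
    # RUR estimates; not the study's data).
    import numpy as np

    rng = np.random.default_rng(1)
    true_rur = rng.uniform(1.0, 4.0, 40)
    method_a = true_rur + rng.normal(0.0, 0.15, 40)          # empirical method
    method_b = 1.3 * true_rur + rng.normal(0.0, 0.15, 40)    # model-based method with a scale bias

    # High correlation despite the systematic bias...
    r = np.corrcoef(method_a, method_b)[0, 1]

    # ...while the Bland-Altman statistics expose the disagreement.
    diff = method_b - method_a
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)            # 95% limits of agreement around the bias

    print(f"r = {r:.2f}, bias = {bias:.2f}, "
          f"limits of agreement = ({bias - loa:.2f}, {bias + loa:.2f})")
    ```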

  3. Determination of a Limited Scope Network's Lightning Detection Efficiency

    NASA Technical Reports Server (NTRS)

    Rompala, John T.; Blakeslee, R.

    2008-01-01

    This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information regarding: site signal detection thresholds, type of solution algorithm used, and range attenuation; to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
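    A toy version of the detection-efficiency calculation for a single grid point, with an assumed power-law range attenuation, made-up site thresholds, and a lognormal stand-in for the peak current distribution; it only illustrates how a PCD, site thresholds, and a minimum-site solution rule combine into a detection probability.

    ```python
    # Toy lightning detection-efficiency estimate for one grid point.
    # Assumptions (not from the paper): lognormal peak-current distribution,
    # 1/r range attenuation, fixed site thresholds, and a solution requiring
    # detections at >= 4 sites.
    import numpy as np

    rng = np.random.default_rng(2)
    peak_currents = rng.lognormal(mean=np.log(15.0), sigma=0.7, size=100_000)  # kA, PCD stand-in

    site_ranges = np.array([80.0, 120.0, 150.0, 200.0, 250.0])   # km, flash-to-site distances
    site_thresholds = np.array([0.08, 0.08, 0.10, 0.10, 0.12])   # kA-equivalent signal thresholds
    min_sites = 4                                                 # solution algorithm requirement

    # Signal seen at each site for each flash, with the assumed 1/r attenuation.
    signal = peak_currents[:, None] / site_ranges[None, :]
    detected = (signal >= site_thresholds[None, :]).sum(axis=1) >= min_sites

    print(f"Estimated detection efficiency at this location: {detected.mean():.1%}")
    ```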

  4. An Empirical Study of a Solo Performance Assessment Model

    ERIC Educational Resources Information Center

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  5. Symposium review: Uncertainties in enteric methane inventories, measurement techniques, and prediction models.

    PubMed

    Hristov, A N; Kebreab, E; Niu, M; Oh, J; Bannink, A; Bayat, A R; Boland, T B; Brito, A F; Casper, D P; Crompton, L A; Dijkstra, J; Eugène, M; Garnsworthy, P C; Haque, N; Hellwing, A L F; Huhtanen, P; Kreuzer, M; Kuhla, B; Lund, P; Madsen, J; Martin, C; Moate, P J; Muetzel, S; Muñoz, C; Peiren, N; Powell, J M; Reynolds, C K; Schwarm, A; Shingfield, K J; Storlien, T M; Weisbjerg, M R; Yáñez-Ruiz, D R; Yu, Z

    2018-04-18

    Ruminant production systems are important contributors to anthropogenic methane (CH4) emissions, but there are large uncertainties in national and global livestock CH4 inventories. Sources of uncertainty in enteric CH4 emissions include animal inventories, feed dry matter intake (DMI), ingredient and chemical composition of the diets, and CH4 emission factors. There is also significant uncertainty associated with enteric CH4 measurements. The most widely used techniques are respiration chambers, the sulfur hexafluoride (SF6) tracer technique, and the automated head-chamber system (GreenFeed; C-Lock Inc., Rapid City, SD). All 3 methods have been successfully used in a large number of experiments with dairy or beef cattle in various environmental conditions, although studies that compare techniques have reported inconsistent results. Although different types of models have been developed to predict enteric CH4 emissions, relatively simple empirical (statistical) models have been commonly used for inventory purposes because of their broad applicability and ease of use compared with more detailed empirical and process-based mechanistic models. However, extant empirical models used to predict enteric CH4 emissions suffer from narrow spatial focus, limited observations, and limitations of the statistical technique used. Therefore, prediction models must be developed from robust data sets that can only be generated through collaboration of scientists across the world. To achieve high prediction accuracy, these data sets should encompass a wide range of diets and production systems within regions and globally. Overall, enteric CH4 prediction models are based on various animal or feed characteristic inputs but are dominated by DMI in one form or another. As a result, accurate prediction of DMI is essential for accurate prediction of livestock CH4 emissions. Analysis of a large data set of individual dairy cattle data showed that simplified enteric CH4 prediction models based on DMI alone or DMI and limited feed- or animal-related inputs can predict average CH4 emission with a similar accuracy to more complex empirical models. These simplified models can be reliably used for emission inventory purposes. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
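    A minimal example of fitting and applying a DMI-only empirical prediction equation of the kind described above; the data and the fitted coefficients are synthetic, not values from the paper.

    ```python
    # Fit a simplified DMI-only enteric CH4 prediction equation, CH4 (g/d) = a + b * DMI (kg/d).
    # The "observations" below are synthetic; the coefficients are illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    dmi = rng.uniform(14.0, 28.0, 200)                       # dry matter intake, kg/d
    ch4 = 14.0 * dmi + 60.0 + rng.normal(0.0, 25.0, 200)     # synthetic emissions, g/d

    # Ordinary least squares for the two-parameter empirical model.
    X = np.column_stack([np.ones_like(dmi), dmi])
    (intercept, slope), *_ = np.linalg.lstsq(X, ch4, rcond=None)

    new_dmi = 22.0
    print(f"CH4 ≈ {intercept:.1f} + {slope:.2f} * DMI "
          f"-> {intercept + slope * new_dmi:.0f} g/d at DMI = {new_dmi} kg/d")
    ```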

  6. Comparison of modelled and empirical atmospheric propagation data

    NASA Technical Reports Server (NTRS)

    Schott, J. R.; Biegel, J. D.

    1983-01-01

    The radiometric integrity of TM thermal infrared channel data was evaluated and monitored to develop improved radiometric preprocessing calibration techniques for removal of atmospheric effects. Modelled atmospheric transmittance and path radiance were compared with empirical values derived from aircraft underflight data. Aircraft thermal infrared imagery and calibration data were available on two dates, as were corresponding atmospheric radiosonde data. The radiosonde data were used as input to the LOWTRAN 5A code which was modified to output atmospheric path radiance in addition to transmittance. The aircraft data were calibrated and used to generate analogous measurements. These data indicate that there is a tendency for the LOWTRAN model to underestimate atmospheric path radiance and transmittance as compared to empirical data. A plot of transmittance versus altitude for both LOWTRAN and empirical data is presented.

  7. Modeling, simulation, and estimation of optical turbulence

    NASA Astrophysics Data System (ADS)

    Formwalt, Byron Paul

    This dissertation documents three new contributions to simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters, and demonstrate the method using experimental data. The technique was demonstrated experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated Cn^2 ≈ 6.01 × 10^-9 m^(-2/3), l0 ≈ 17.9 mm, and L0 ≈ 15.5 m.

  8. Prediction of Partition Coefficients of Organic Compounds between SPME/PDMS and Aqueous Solution

    PubMed Central

    Chao, Keh-Ping; Lu, Yu-Ting; Yang, Hsiu-Wen

    2014-01-01

    Polydimethylsiloxane (PDMS) is commonly used as the coated polymer in the solid phase microextraction (SPME) technique. In this study, the partition coefficients of organic compounds between SPME/PDMS and the aqueous solution were compiled from the literature sources. The correlation analysis for partition coefficients was conducted to interpret the effect of their physicochemical properties and descriptors on the partitioning process. The PDMS-water partition coefficients were significantly correlated to the polarizability of organic compounds (r = 0.977, p < 0.05). An empirical model, consisting of the polarizability, the molecular connectivity index, and an indicator variable, was developed to appropriately predict the partition coefficients of 61 organic compounds for the training set. The predictive ability of the empirical model was demonstrated by using it on a test set of 26 chemicals not included in the training set. The empirical model, which applies straightforwardly calculated molecular descriptors to estimate the PDMS-water partition coefficient, will contribute to the practical applications of the SPME technique. PMID:24534804
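    A sketch of a three-descriptor regression of the same form (polarizability, molecular connectivity index, indicator variable) with a train/test split; the descriptor values and fitted coefficients are synthetic, not the compiled literature data.

    ```python
    # Three-descriptor empirical model for log K(PDMS-water), matching the form described
    # in the abstract. Descriptor values and coefficients are synthetic, for illustration only.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 87                                   # 61 training + 26 test compounds, as in the study
    polarizability = rng.uniform(5.0, 20.0, n)
    connectivity = rng.uniform(1.0, 6.0, n)
    indicator = rng.integers(0, 2, n).astype(float)
    log_k = 0.20 * polarizability + 0.35 * connectivity - 0.5 * indicator + rng.normal(0, 0.15, n)

    train, test = slice(0, 61), slice(61, None)
    X = np.column_stack([np.ones(n), polarizability, connectivity, indicator])

    coef, *_ = np.linalg.lstsq(X[train], log_k[train], rcond=None)
    pred = X[test] @ coef

    rmse = np.sqrt(np.mean((pred - log_k[test]) ** 2))
    print("fitted coefficients:", np.round(coef, 3), "| test RMSE:", round(rmse, 3))
    ```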

  9. [Mobbing: a meta-analysis and integrative model of its antecedents and consequences].

    PubMed

    Topa Cantisano, Gabriela; Depolo, Marco; Morales Domínguez, J Francisco

    2007-02-01

    Although mobbing has been extensively studied, empirical research has not led to firm conclusions regarding its antecedents and consequences, both at personal and organizational levels. An extensive literature search yielded 86 empirical studies with 93 samples. The matrix correlation obtained through meta-analytic techniques was used to test a structural equation model. Results supported hypotheses regarding organizational environmental factors as main predictors of mobbing.

  10. Analytical techniques for the study of some parameters of multispectral scanner systems for remote sensing

    NASA Technical Reports Server (NTRS)

    Wiswell, E. R.; Cooper, G. R. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The concept of average mutual information in the received spectral random process about the spectral scene was developed. Techniques amenable to implementation on a digital computer were also developed to make the required average mutual information calculations. These techniques required identification of models for the spectral response process of scenes. Stochastic modeling techniques were adapted for use. These techniques were demonstrated on empirical data from wheat and vegetation scenes.

  11. An Application of Structural Equation Modeling for Developing Good Teaching Characteristics Ontology

    ERIC Educational Resources Information Center

    Phiakoksong, Somjin; Niwattanakul, Suphakit; Angskun, Thara

    2013-01-01

    Ontology is a knowledge representation technique which aims to make knowledge explicit by defining the core concepts and their relationships. The Structural Equation Modeling (SEM) is a statistical technique which aims to explore the core factors from empirical data and estimates the relationship between these factors. This article presents an…

  12. A simple semi-empirical technique for apportioning the impact of roadways on air quality in an urban neighbourhood

    NASA Astrophysics Data System (ADS)

    Elangasinghe, M. A.; Dirks, K. N.; Singhal, N.; Costello, S. B.; Longley, I.; Salmond, J. A.

    2014-02-01

    Air pollution from the transport sector has a marked effect on human health, so isolating the pollutant contribution from a roadway is important in understanding its impact on the local neighbourhood. This paper proposes a novel technique based on a semi-empirical air pollution model to quantify the impact from a roadway on the air quality of a local neighbourhood using ambient records of a single air pollution monitor. We demonstrate the proposed technique using a case study, in which we quantify the contribution from a major highway with respect to the local background concentration in Auckland, New Zealand. Comparing the diurnal variation of the model-separated background contribution with real measurements from a site upwind of the highway shows that the model estimates are reliable. Amongst all of the pollutants considered, the best estimations of the background were achieved for nitrogen oxides. Although the multi-pronged approach worked well for predominantly vehicle-related pollutants, it could not be used effectively to isolate emissions of PM10 due to the complex and less predictable influence of natural sources (such as marine aerosols). The proposed approach is useful in situations where ambient records from an upwind background station are not available (as required by other techniques) and is potentially transferable to situations such as intersections and arterial roads. Applying this technique to longer time series could help to understand the changes in pollutant concentrations from the road and background sources for different emission scenarios, for different years or seasons. Modelling results also show the potential of such hybrid semi-empirical models to contribute to our understanding of the physical parameters determining air quality and to validate emissions inventory data.

  13. An Empirical Bayes Approach to Spatial Analysis

    NASA Technical Reports Server (NTRS)

    Morris, C. N.; Kostal, H.

    1983-01-01

    Multi-channel LANDSAT data are collected in several passes over agricultural areas during the growing season. How empirical Bayes modeling can be used to develop crop identification and discrimination techniques that account for spatial correlation in such data is considered. The approach models the unobservable parameters and the data separately, hoping to take advantage of the fact that the bulk of spatial correlation lies in the parameter process. The problem is then framed in terms of estimating posterior probabilities of crop types for each spatial area. Some empirical Bayes spatial estimation methods are used to estimate the logits of these probabilities.

  14. Perceived sexual harassment at work: meta-analysis and structural model of antecedents and consequences.

    PubMed

    Topa Cantisano, Gabriela; Morales Domínguez, J F; Depolo, Marco

    2008-05-01

    Although sexual harassment has been extensively studied, empirical research has not led to firm conclusions about its antecedents and consequences, both at the personal and organizational level. An extensive literature search yielded 42 empirical studies with 60 samples. The matrix correlation obtained through meta-analytic techniques was used to test a structural equation model. Results supported the hypotheses regarding organizational environmental factors as main predictors of harassment.

  15. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    PubMed

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper mainly forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with the nonparametric method of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.

  16. Input Consistency in the Acquisition of Questions in Bulgarian and English: A Hypothesis Testing Model

    ERIC Educational Resources Information Center

    Tornyova, Lidiya

    2011-01-01

    The goal of this dissertation is to address several major empirical and theoretical issues related to English-speaking children's difficulties with auxiliary use and inversion in questions. The empirical data on English question acquisition are inconsistent due to differences in methods and techniques used. A range of proposals about the source of…

  17. Identifying Complex Dynamics in Social Systems: A New Methodological Approach Applied to Study School Segregation

    ERIC Educational Resources Information Center

    Spaiser, Viktoria; Hedström, Peter; Ranganathan, Shyam; Jansson, Kim; Nordvik, Monica K.; Sumpter, David J. T.

    2018-01-01

    It is widely recognized that segregation processes are often the result of complex nonlinear dynamics. Empirical analyses of complex dynamics are however rare, because there is a lack of appropriate empirical modeling techniques that are capable of capturing complex patterns and nonlinearities. At the same time, we know that many social phenomena…

  18. AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*

    PubMed Central

    Bruch, Elizabeth; Atwell, Jon

    2014-01-01

    Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351
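    As a concrete, minimal instance of the kind of model being discussed, the sketch below implements a bare-bones Schelling-style segregation agent-based model; the grid size, tolerance threshold, and relocation rule are arbitrary illustrations, not recommendations from the article.

    ```python
    # Bare-bones Schelling-style agent-based model: agents of two types relocate to random
    # empty cells when fewer than `tolerance` of their occupied neighbors share their type.
    # All parameters are arbitrary illustrations.
    import numpy as np

    rng = np.random.default_rng(5)
    size, empty_frac, tolerance, steps = 40, 0.1, 0.5, 30

    grid = rng.choice([0, 1, 2], p=[empty_frac, (1 - empty_frac) / 2, (1 - empty_frac) / 2],
                      size=(size, size))            # 0 = empty, 1/2 = agent types

    def unhappy_mask(g):
        same = np.zeros(g.shape, dtype=float)
        occupied_nbrs = np.zeros(g.shape, dtype=float)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                shifted = np.roll(np.roll(g, dx, axis=0), dy, axis=1)
                same += (shifted == g) & (shifted != 0)
                occupied_nbrs += shifted != 0
        frac_same = np.divide(same, occupied_nbrs,
                              out=np.ones_like(same), where=occupied_nbrs > 0)
        return (g != 0) & (frac_same < tolerance)

    for _ in range(steps):
        movers = np.argwhere(unhappy_mask(grid))
        empties = np.argwhere(grid == 0)
        rng.shuffle(movers)
        for (x, y), (ex, ey) in zip(movers, rng.permutation(empties)):
            grid[ex, ey], grid[x, y] = grid[x, y], 0   # relocate the unhappy agent

    print("unhappy agents remaining:", unhappy_mask(grid).sum())
    ```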

  19. Verification of GCM-generated regional seasonal precipitation for current climate and of statistical downscaling estimates under changing climate conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busuioc, A.; Storch, H. von; Schnur, R.

    Empirical downscaling procedures relate large-scale atmospheric features with local features such as station rainfall in order to facilitate local scenarios of climate change. The purpose of the present paper is twofold: first, a downscaling technique is used as a diagnostic tool to verify the performance of climate models on the regional scale; second, a technique is proposed for verifying the validity of empirical downscaling procedures in climate change applications. The case considered is regional seasonal precipitation in Romania. The downscaling model is a regression based on canonical correlation analysis between observed station precipitation and European-scale sea level pressure (SLP). The climate models considered here are the T21 and T42 versions of the Hamburg ECHAM3 atmospheric GCM run in time-slice mode. The climate change scenario refers to the expected time of doubled carbon dioxide concentrations around the year 2050. Generally, applications of statistical downscaling to climate change scenarios have been based on the assumption that the empirical link between the large-scale and regional parameters remains valid under a changed climate. In this study, a rationale is proposed for this assumption by showing the consistency of the 2 × CO2 GCM scenarios in winter, derived directly from the gridpoint data, with the regional scenarios obtained through empirical downscaling. Since the skill of the GCMs in regional terms is already established, it is concluded that the downscaling technique is adequate for describing climatically changing regional and local conditions, at least for precipitation in Romania during winter.

  20. Ultrasonic nondestructive evaluation, microstructure, and mechanical property interrelations

    NASA Technical Reports Server (NTRS)

    Vary, A.

    1984-01-01

    Ultrasonic techniques for mechanical property characterizations are reviewed and conceptual models are advanced for explaining and interpreting the empirically based results. At present, the technology is generally empirically based and is emerging from the research laboratory. Advancement of the technology will require establishment of theoretical foundations for the experimentally observed interrelations among ultrasonic measurements, mechanical properties, and microstructure. Conceptual models are applied to ultrasonic assessment of fracture toughness to illustrate an approach for predicting correlations found among ultrasonic measurements, microstructure, and mechanical properties.

  1. SEASONAL AND REGIONAL VARIATIONS OF PRIMARY AND SECONDARY ORGANIC AEROSOLS OVER THE CONTINENTAL UNITED STATES: SEMI-EMPIRICAL ESTIMATES AND MODEL EVALUATION

    EPA Science Inventory

    Seasonal and regional variations of primary (OCpri) and secondary (OCsec) organic carbon aerosols across the continental U.S. for the year 2001 were examined by a semi-empirical technique using observed OC and elemental carbon (EC) data from 142 routine moni...

  2. Forecasting stochastic neural network based on financial empirical mode decomposition.

    PubMed

    Wang, Jie; Wang, Jun

    2017-06-01

    In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to account for the weight of the occurrence time of the historical data. Linear regression is used to assess the predictive ability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing the predicted results with those of traditional models. Moreover, a new evaluation method (q-order multiscale complexity invariant distance) is applied to measure the predicted results of real stock index series, and the empirical results show that the proposed model indeed displays a good performance in forecasting stock market fluctuations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Development of a nonlinear switching function and its application to static lift characteristics of straight wings

    NASA Technical Reports Server (NTRS)

    Hewes, D. E.

    1978-01-01

    A mathematical modeling technique was developed for the lift characteristics of straight wings throughout a very wide angle of attack range. The technique employs a mathematical switching function that facilitates the representation of the nonlinear aerodynamic characteristics in the partially and fully stalled regions and permits matching empirical data within + or - 4 percent of maximum values. Although specifically developed for use in modeling the lift characteristics, the technique appears to have other applications in both aerodynamic and nonaerodynamic fields.
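    The switching-function idea can be illustrated with a smooth blend between a linear (attached-flow) lift curve and a flat-plate (fully stalled) model; the sigmoid form and the constants below are assumptions for demonstration, not the function developed in the report.

    ```python
    # Illustrative blend of an attached-flow lift model and a fully stalled flat-plate model
    # via a smooth switching function. The sigmoid form and constants are assumptions,
    # not the switching function developed in the report.
    import numpy as np

    def lift_coefficient(alpha_deg, alpha_stall=15.0, blend_width=3.0, cl_alpha=0.1):
        alpha = np.radians(alpha_deg)
        cl_linear = cl_alpha * alpha_deg                      # attached flow: CL ~ linear in alpha
        cl_stalled = 2.0 * np.sin(alpha) * np.cos(alpha)      # flat-plate model for deep stall
        # Switching function: ~1 below stall, ~0 well above it, smooth in between.
        sigma = 1.0 / (1.0 + np.exp((alpha_deg - alpha_stall) / blend_width))
        return sigma * cl_linear + (1.0 - sigma) * cl_stalled

    alphas = np.arange(0, 91, 10)
    print(np.round(lift_coefficient(alphas), 3))
    ```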

  4. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon to horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
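    A rough sketch of the core idea for a linear weighted least squares fit: scale the theoretical covariance by the average weighted residual variance so that unmodeled errors show up in the covariance. This is a simplified reading of the approach described above, not the author's exact formulation or the orbit-determination setup.

    ```python
    # Rough sketch: theoretical vs. "empirical" state error covariance in a linear weighted
    # least squares fit. The empirical matrix scales the theoretical one by the average
    # weighted residual variance (a simplified reading of the approach, not the author's
    # exact formulation).
    import numpy as np

    rng = np.random.default_rng(6)
    m, n = 200, 4                       # measurements, state parameters
    H = rng.normal(size=(m, n))         # measurement partials
    x_true = np.array([1.0, -2.0, 0.5, 3.0])

    sigma_assumed = 0.1                 # what the estimator believes the noise is
    sigma_actual = 0.3                  # what the (poorly modeled) noise really is
    y = H @ x_true + rng.normal(0.0, sigma_actual, m)

    W = np.eye(m) / sigma_assumed**2    # weights from the *assumed* error model
    normal = H.T @ W @ H
    x_hat = np.linalg.solve(normal, H.T @ W @ y)

    P_theoretical = np.linalg.inv(normal)                   # reflects only assumed errors
    residuals = y - H @ x_hat
    avg_weighted_resid_var = (residuals @ W @ residuals) / m
    P_empirical = avg_weighted_resid_var * P_theoretical    # inflated by actual residuals

    print("assumed-vs-actual sigma ratio squared:", (sigma_actual / sigma_assumed) ** 2)
    print("empirical/theoretical variance ratio :",
          round(P_empirical[0, 0] / P_theoretical[0, 0], 2))
    ```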

  5. Empirical Equation Based Chirality (n, m) Assignment of Semiconducting Single Wall Carbon Nanotubes from Resonant Raman Scattering Data

    PubMed Central

    Arefin, Md Shamsul

    2012-01-01

    This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n − m) with the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower and higher diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike the existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plots. PMID:28348319

  6. The Application of Various Nonlinear Models to Describe Academic Growth Trajectories: An Empirical Analysis Using Four-Wave Longitudinal Achievement Data from a Large Urban School District

    ERIC Educational Resources Information Center

    Shin, Tacksoo

    2012-01-01

    This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…

  7. The Use of Empirical Methods for Testing Granular Materials in Analogue Modelling

    PubMed Central

    Montanari, Domenico; Agostini, Andrea; Bonini, Marco; Corti, Giacomo; Del Ventisette, Chiara

    2017-01-01

    The behaviour of a granular material is mainly dependent on its frictional properties, angle of internal friction, and cohesion, which, together with material density, are the key factors to be considered during the scaling procedure of analogue models. The frictional properties of a granular material are usually investigated by means of technical instruments such as a Hubbert-type apparatus and ring shear testers, which allow for investigating the response of the tested material to a wide range of applied stresses. Here we explore the possibility of determining material properties by means of different empirical methods applied to mixtures of quartz and K-feldspar sand. Empirical methods exhibit the great advantage of measuring the properties of a certain analogue material under the experimental conditions, which are strongly sensitive to the handling techniques. Finally, the results obtained from the empirical methods have been compared with ring shear tests carried out on the same materials, and the two show satisfactory agreement. PMID:28772993

  8. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
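    The contrast between theoretical and empirical (resampling-based) standard errors for positivity-constrained regression parameters can be sketched as follows; it uses scipy's nonnegative least squares on synthetic data rather than an actual feature network model, and the bootstrap stands in for the Monte Carlo evaluation mentioned above.

    ```python
    # Theoretical vs. empirical standard errors for a positivity-constrained regression,
    # sketched with scipy's nonnegative least squares on synthetic data (not an actual
    # feature network model).
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(7)
    n_obs, n_par = 120, 3
    X = np.abs(rng.normal(size=(n_obs, n_par)))
    beta_true = np.array([1.5, 0.0, 0.8])              # one parameter on the boundary
    y = X @ beta_true + rng.normal(0.0, 0.5, n_obs)

    beta_hat, _ = nnls(X, y)

    # "Theoretical" SEs from the usual unconstrained OLS formula (ignores the constraints).
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n_obs - n_par)
    se_theoretical = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))

    # "Empirical" SEs from a simple bootstrap of the constrained fit.
    boot = np.empty((500, n_par))
    for b in range(boot.shape[0]):
        idx = rng.integers(0, n_obs, n_obs)
        boot[b], _ = nnls(X[idx], y[idx])
    se_empirical = boot.std(axis=0, ddof=1)

    print("theoretical SEs:", np.round(se_theoretical, 3))
    print("empirical  SEs :", np.round(se_empirical, 3))
    ```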

  9. Are stock market returns related to the weather effects? Empirical evidence from Taiwan

    NASA Astrophysics Data System (ADS)

    Chang, Tsangyao; Nieh, Chien-Chung; Yang, Ming Jing; Yang, Tse-Yu

    2006-05-01

    In this study, we employ a recently developed econometric technique of the threshold model with the GJR-GARCH process on error terms to investigate the relationships between weather factors and stock market returns in Taiwan using daily data for the period of 1 July 1997-22 October 2003. The major weather factors studied include temperature, humidity, and cloud cover. Our empirical evidence shows that temperature and cloud cover are two important weather factors that affect the stock returns in Taiwan. Our empirical findings further support the previous arguments that advocate the inclusion of economically neutral behavioral variables in asset pricing models. These results also have significant implications for individual investors and financial institutions planning to invest in the Taiwan stock market.

  10. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  11. Terahertz Radiation: A Non-contact Tool for the Selective Stimulation of Biological Responses in Human Cells

    DTIC Science & Technology

    2014-01-01

    computational and empirical dosimetric tools [31]. For the computational dosimetry, we employed finite-difference time-domain (FDTD) modeling techniques to...temperature-time data collected for a well exposed to THz radiation using finite-difference time-domain (FDTD) modeling techniques and thermocouples... Alteration in the expression of such genes underscores the…

  12. Modeling and Simulation of Upset-Inducing Disturbances for Digital Systems in an Electromagnetic Reverberation Chamber

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
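    In their basic single-variable form, the two sampling techniques named above look as follows; the target distribution here is an arbitrary empirical histogram, and the multi-variate extensions developed in the report are not reproduced.

    ```python
    # Basic single-variable versions of rejection sampling and inverse transform sampling
    # from an empirical distribution (an arbitrary histogram stands in for the measured
    # disturbance data; the report's multi-variate extensions are not reproduced here).
    import numpy as np

    rng = np.random.default_rng(8)
    observed = rng.gamma(shape=2.0, scale=1.5, size=5_000)       # stand-in empirical data
    counts, edges = np.histogram(observed, bins=50, density=True)

    def rejection_sample(n):
        samples = []
        p_max = counts.max()
        while len(samples) < n:
            x = rng.uniform(edges[0], edges[-1])                 # uniform proposal
            bin_idx = int(np.clip(np.searchsorted(edges, x) - 1, 0, len(counts) - 1))
            if rng.uniform(0.0, p_max) < counts[bin_idx]:        # accept with prob p(x)/p_max
                samples.append(x)
        return np.array(samples)

    def inverse_transform_sample(n):
        cdf = np.cumsum(counts * np.diff(edges))
        cdf /= cdf[-1]
        u = rng.uniform(size=n)
        return np.interp(u, np.concatenate(([0.0], cdf)), edges)  # invert the empirical CDF

    print("rejection sampling mean  :", rejection_sample(2_000).mean().round(3))
    print("inverse transform mean   :", inverse_transform_sample(2_000).mean().round(3))
    ```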

  13. Predicting field weed emergence with empirical models and soft computing techniques

    USDA-ARS?s Scientific Manuscript database

    Seedling emergence is the most important phenological process that influences the success of weed species; therefore, predicting weed emergence timing plays a critical role in scheduling weed management measures. Important efforts have been made in the attempt to develop models to predict seedling e...

  14. Measuring soil moisture with imaging radars

    NASA Technical Reports Server (NTRS)

    Dubois, Pascale C.; Vanzyl, Jakob; Engman, Ted

    1995-01-01

    An empirical model was developed to infer soil moisture and surface roughness from radar data. The accuracy of the inversion technique is assessed by comparing soil moisture obtained with the inversion technique to in situ measurements. The effect of vegetation on the inversion is studied and a method to eliminate the areas where vegetation impairs the algorithm is described.

  15. Empirical Guidelines for Use of Irregular Wave Model to Estimate Nearshore Wave Height.

    DTIC Science & Technology

    1982-07-01

    height, the easier to use technique presented by McClenan (1975) was employed. The McClenan technique utilizes a nomogram which was constructed from...the SPM equations and gives the same results. The inputs to the nomogram technique are the period, the deepwater wave height, the deepwater wave

  16. Modeling dynamic beta-gamma polymorphic transition in Tin

    NASA Astrophysics Data System (ADS)

    Chauvin, Camille; Montheillet, Frank; Petit, Jacques; CEA Gramat Collaboration; EMSE Collaboration

    2015-06-01

    Solid-solid phase transitions in metals have been studied by shock-wave techniques for many decades. Recent experiments have investigated the transition during isentropic compression and shock-wave compression and have highlighted the strong influence of the loading rate on the transition. Complementary velocity and temperature measurements around the beta-gamma polymorphic transition of tin in gas gun experiments have demonstrated the importance of the kinetics of the transition. But even though this phenomenon is known, modeling the kinetics remains complex and relies on empirical formulations. A multiphase EOS is available in our 1D Lagrangian code Unidim. We propose to present the influence of various kinetic laws (either empirical or involving nucleation and growth mechanisms) and their parameters (Gibbs free energy, temperature, pressure) on the transformation rate. We compare experimental and calculated velocity and temperature profiles, and we underline the effects of the empirical parameters of these models.

  17. Ray Tracing Methods in Seismic Emission Tomography

    NASA Astrophysics Data System (ADS)

    Chebotareva, I. Ya.

    2018-03-01

    Highly efficient approximate ray tracing techniques which can be used in seismic emission tomography and in other methods requiring a large number of raypaths are described. The techniques are applicable for the gradient and plane-layered velocity sections of the medium and for the models with a complicated geometry of contrasting boundaries. The empirical results obtained with the use of the discussed ray tracing technologies and seismic emission tomography results, as well as the results of numerical modeling, are presented.

  18. Diagnostic Procedures for Detecting Nonlinear Relationships between Latent Variables

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Baldasaro, Ruth E.; Gottfredson, Nisha C.

    2012-01-01

    Structural equation models are commonly used to estimate relationships between latent variables. Almost universally, the fitted models specify that these relationships are linear in form. This assumption is rarely checked empirically, largely for lack of appropriate diagnostic techniques. This article presents and evaluates two procedures that can…

  19. Traveltime budgets and mobility in urban areas

    DOT National Transportation Integrated Search

    1974-05-01

    The study tests by empirical comparative analysis the concept that tripmakers have a stable daily traveltime budget and discusses the implication of such a budget to transportation modeling techniques and the evaluation of alternative transportation ...

  20. An empirical approach to estimate near-infra-red photon propagation and optically induced drug release in brain tissues

    NASA Astrophysics Data System (ADS)

    Prabhu Verleker, Akshay; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M.

    2015-03-01

    The purpose of this study is to develop an alternate empirical approach to estimate near-infra-red (NIR) photon propagation and quantify optically induced drug release in brain metastasis, without relying on computationally expensive Monte Carlo techniques (the gold standard). Targeted drug delivery with optically induced drug release is a noninvasive means to treat cancers and metastasis. This study is part of a larger project to treat brain metastasis by delivering lapatinib-drug-nanocomplexes and activating NIR-induced drug release. The empirical model was developed using a weighted approach to estimate photon scattering in tissues and calibrated using a GPU-based 3D Monte Carlo. The empirical model was developed and tested against Monte Carlo in optical brain phantoms for pencil beams (width 1 mm) and broad beams (width 10 mm). The empirical algorithm was tested against the Monte Carlo for different albedos along with the diffusion equation, and in simulated brain phantoms resembling white matter (μs'=8.25 mm^-1, μa=0.005 mm^-1) and gray matter (μs'=2.45 mm^-1, μa=0.035 mm^-1) at a wavelength of 800 nm. The goodness of fit between the two models was determined using the coefficient of determination (R-squared analysis). Preliminary results show the empirical algorithm matches Monte Carlo simulated fluence over a wide range of albedo (0.7 to 0.99), while the diffusion equation fails for lower albedo. The photon fluence generated by the empirical code matched the Monte Carlo in homogeneous phantoms (R2=0.99). While the GPU-based Monte Carlo achieved 300X acceleration compared to earlier CPU-based models, the empirical code is 700X faster than the Monte Carlo for a typical super-Gaussian laser beam.

  1. Energy risk in the arbitrage pricing model: an empirical and theoretical study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, M.A.

    1986-01-01

    This dissertation empirically explores the Arbitrage Pricing Theory in the context of energy risk for securities over the 1960s, 1970s, and early 1980s. Starting from a general multifactor pricing model, the paper develops a two factor model based on a market-like factor and an energy factor. This model is then tested on portfolios of securities grouped according to industrial classification using several econometric techniques designed to overcome some of the more serious estimation problems common to these models. The paper concludes that energy risk is priced in the 1970s and possibly even in the 1960s. Energy risk is found to be priced in the sense that investors who hold assets subjected to energy risk are paid for this risk. The classic version of the Capital Asset Pricing Model which posits the market as the single priced factor is rejected in favor of the Arbitrage Pricing Theory or multi-beta versions of the Capital Asset Pricing Model. The study introduces some original econometric methodology to carry out empirical tests.

  2. Machine learning strategies for systems with invariance properties

    NASA Astrophysics Data System (ADS)

    Ling, Julia; Jones, Reese; Templeton, Jeremy

    2016-08-01

    In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
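    The two strategies compared in the paper can be illustrated on a toy rotation-invariant target; the model below is plain least squares on quadratic features rather than the random forests or neural networks used in the case studies, and all data and parameters are invented for illustration.

    ```python
    # Toy comparison of the two invariance strategies described above, using a
    # rotation-invariant target f(x) = ||x||^2 in 2-D and plain least squares on
    # quadratic features (a stand-in for the paper's random forests / neural networks).
    import numpy as np

    rng = np.random.default_rng(9)

    def features_raw(X):            # generic quadratic features of the raw coordinates
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

    def features_invariant(X):      # basis built from a rotation invariant: r^2 = x1^2 + x2^2
        r2 = (X ** 2).sum(axis=1)
        return np.column_stack([np.ones(len(X)), r2])

    def rotation(theta):
        return np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])

    X_train = rng.normal(size=(200, 2))
    y_train = (X_train ** 2).sum(axis=1) + rng.normal(0.0, 0.3, 200)   # noisy invariant target

    # Strategy 1: embed the invariance by training on invariant inputs.
    w_inv, *_ = np.linalg.lstsq(features_invariant(X_train), y_train, rcond=None)

    # Strategy 2: train on raw inputs augmented with randomly rotated copies.
    X_aug = np.vstack([X_train @ rotation(a).T for a in rng.uniform(0, 2 * np.pi, 6)])
    y_aug = np.tile(y_train, 6)
    w_aug, *_ = np.linalg.lstsq(features_raw(X_aug), y_aug, rcond=None)

    # Invariance check on held-out points: compare predictions before and after a new rotation.
    X_test = rng.normal(size=(100, 2))
    R = rotation(0.7)
    gap_inv = np.abs(features_invariant(X_test @ R.T) @ w_inv
                     - features_invariant(X_test) @ w_inv).max()
    gap_aug = np.abs(features_raw(X_test @ R.T) @ w_aug
                     - features_raw(X_test) @ w_aug).max()
    print(f"prediction change under rotation -- invariant features: {gap_inv:.2e}, "
          f"augmentation: {gap_aug:.2e}")
    ```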

  3. Markov modeling and discrete event simulation in health care: a systematic comparison.

    PubMed

    Standfield, Lachlan; Comans, Tracy; Scuffham, Paul

    2014-04-01

    The aim of this study was to assess if the use of Markov modeling (MM) or discrete event simulation (DES) for cost-effectiveness analysis (CEA) may alter healthcare resource allocation decisions. A systematic literature search and review of empirical and non-empirical studies comparing MM and DES techniques used in the CEA of healthcare technologies was conducted. Twenty-two pertinent publications were identified. Two publications compared MM and DES models empirically, one presented a conceptual DES and MM, two described a DES consensus guideline, and seventeen drew comparisons between MM and DES through the authors' experience. The primary advantages described for DES over MM were the ability to model queuing for limited resources, capture individual patient histories, accommodate complexity and uncertainty, represent time flexibly, model competing risks, and accommodate multiple events simultaneously. The disadvantages of DES over MM were the potential for model overspecification, increased data requirements, specialized expensive software, and increased model development, validation, and computational time. Where individual patient history is an important driver of future events an individual patient simulation technique like DES may be preferred over MM. Where supply shortages, subsequent queuing, and diversion of patients through other pathways in the healthcare system are likely to be drivers of cost-effectiveness, DES modeling methods may provide decision makers with more accurate information on which to base resource allocation decisions. Where these are not major features of the cost-effectiveness question, MM remains an efficient, easily validated, parsimonious, and accurate method of determining the cost-effectiveness of new healthcare interventions.
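    For readers unfamiliar with the first of the two techniques, a minimal Markov cohort model for a cost-effectiveness comparison looks like the sketch below; the states, transition probabilities, costs, and utilities are invented for illustration and are not drawn from the review.

    ```python
    # Minimal Markov cohort model for a cost-effectiveness comparison. States, transition
    # probabilities, costs, and utilities are invented for illustration; a DES would instead
    # simulate individual patients, queues, and resource constraints.
    import numpy as np

    states = ["Well", "Sick", "Dead"]
    # Annual transition matrices: rows = current state, columns = next state.
    P_standard = np.array([[0.85, 0.10, 0.05],
                           [0.00, 0.80, 0.20],
                           [0.00, 0.00, 1.00]])
    P_new = np.array([[0.90, 0.07, 0.03],
                      [0.00, 0.85, 0.15],
                      [0.00, 0.00, 1.00]])

    annual_cost = np.array([500.0, 4_000.0, 0.0])       # cost per state-year
    utility = np.array([0.95, 0.60, 0.0])               # QALY weight per state-year
    extra_drug_cost = 2_000.0                           # yearly add-on for the new therapy

    def run_cohort(P, drug_cost, years=20, discount=0.035):
        occupancy = np.array([1.0, 0.0, 0.0])           # whole cohort starts Well
        cost = qaly = 0.0
        for t in range(years):
            d = 1.0 / (1.0 + discount) ** t
            cost += d * (occupancy @ annual_cost + drug_cost * occupancy[:2].sum())
            qaly += d * (occupancy @ utility)
            occupancy = occupancy @ P                    # advance the cohort one cycle
        return cost, qaly

    c0, q0 = run_cohort(P_standard, 0.0)
    c1, q1 = run_cohort(P_new, extra_drug_cost)
    icer = (c1 - c0) / (q1 - q0)
    print(f"Incremental cost-effectiveness ratio: {icer:,.0f} per QALY gained")
    ```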

  4. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  5. Modeling and managing risk early in software development

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Thomas, William M.; Hetmanski, Christopher J.

    1993-01-01

    In order to improve the quality of the software development process, we need to be able to build empirical multivariate models based on data collectable early in the software process. These models need to be both useful for prediction and easy to interpret, so that remedial actions may be taken in order to control and optimize the development process. We present an automated modeling technique which can be used as an alternative to regression techniques. We show how it can be used to facilitate the identification and aid the interpretation of the significant trends which characterize 'high risk' components in several Ada systems. Finally, we evaluate the effectiveness of our technique based on a comparison with logistic regression based models.

  6. Achilles tendons from decorin- and biglycan-null mouse models have inferior mechanical and structural properties predicted by an image-based empirical damage model

    PubMed Central

    Gordon, J.A.; Freedman, B.R.; Zuskov, A.; Iozzo, R.V.; Birk, D.E.; Soslowsky, L.J.

    2015-01-01

    Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs; either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn−/−) and biglycan-null (Bgn−/−) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent; and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. PMID:25888014

  7. Achilles tendons from decorin- and biglycan-null mouse models have inferior mechanical and structural properties predicted by an image-based empirical damage model.

    PubMed

    Gordon, J A; Freedman, B R; Zuskov, A; Iozzo, R V; Birk, D E; Soslowsky, L J

    2015-07-16

    Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs; either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn(-/-)) and biglycan-null (Bgn(-/-)) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent; and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Complex Dynamics in Nonequilibrium Economics and Chemistry

    NASA Astrophysics Data System (ADS)

    Wen, Kehong

    Complex dynamics provides a new approach to dealing with economic complexity. We study interactively the empirical and theoretical aspects of business cycles. The way of exploring complexity is similar to that in the study of an oscillatory chemical system (BZ system)--a model for modeling complex behavior. We contribute by qualitatively simulating the complex periodic patterns observed in the controlled BZ experiments, narrowing the gap between modeling and experiment. The gap between theory and reality is much wider in economics, which involves studies of human expectations and decisions, the essential difference from the natural sciences. Our empirical and theoretical studies make substantial progress in closing this gap. With the help of new developments in nonequilibrium physics, i.e., the complex spectral theory, we advance our technique in detecting characteristic time scales from empirical economic data. We obtain correlation resonances, which give oscillating modes with decays for correlation decomposition, from different time series including S&P 500, M2, crude oil spot prices, and GNP. The time scales found are strikingly compatible with business experiences and other studies in business cycles. They reveal the non-Markovian nature of coherent markets. The resonances enhance the evidence of economic chaos obtained by using other tests. The evolving multi-humped distributions produced by the moving-time-window technique reveal the nonequilibrium nature of economic behavior. They reproduce the American economic history of booms and busts. The studies seem to provide a way out of the debate on chaos versus noise and unify the cyclical and stochastic approaches in explaining business fluctuations. Based on these findings and a new expectation formulation, we construct a business cycle model which gives patterns qualitatively compatible with those found empirically. The soft-bouncing oscillator model provides a better alternative than the harmonic oscillator or the random walk model as the building block in business cycle theory. The mathematical structure of the model (delay differential equation) is studied analytically and numerically. The research paves the way toward sensible economic forecasting.

  9. Development of an empirically based dynamic biomechanical strength model

    NASA Technical Reports Server (NTRS)

    Pandya, A.; Maida, J.; Aldridge, A.; Hasson, S.; Woolford, B.

    1992-01-01

    The focus here is on the development of a dynamic strength model for humans. Our model is based on empirical data. The shoulder, elbow, and wrist joints are characterized in terms of maximum isolated torque, position, and velocity in all rotational planes. This information is reduced by a least squares regression technique into a table of single variable second degree polynomial equations determining the torque as a function of position and velocity. The isolated joint torque equations are then used to compute forces resulting from a composite motion, which in this case is a ratchet wrench push and pull operation. What is presented here is a comparison of the computed or predicted results of the model with the actual measured values for the composite motion.
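    A reduced sketch of the regression step described above: fitting a second-degree polynomial for joint torque as a function of angular velocity at one fixed position, using made-up elbow data rather than the measured values.

    ```python
    # Least-squares fit of a second-degree polynomial giving joint torque as a function of
    # angular velocity at one fixed joint position, mirroring the table-of-polynomials idea
    # described above. The "measurements" are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(10)
    velocity = np.linspace(-180.0, 180.0, 25)                     # deg/s
    torque = (45.0 - 0.08 * velocity - 2.0e-4 * velocity**2
              + rng.normal(0, 1.5, velocity.size))                # N*m, synthetic

    # One entry of the torque table: torque(velocity) at this position, as a quadratic.
    coeffs = np.polyfit(velocity, torque, deg=2)
    torque_model = np.poly1d(coeffs)

    print("fitted quadratic coefficients:", np.round(coeffs, 5))
    print("predicted torque at 90 deg/s :", round(torque_model(90.0), 1), "N*m")
    ```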

  10. Enhancing Teaching Effectiveness Using Experiential Techniques: Model Development and Empirical Evaluation.

    ERIC Educational Resources Information Center

    Wagner, Richard J.; And Others

    In U.S. colleges and universities, much attention has been focused on the need to improve teaching quality and to involve students in the learning process. At the same time, many faculty members are faced with growing class sizes and with time pressures due to research demands. One useful technique is to divide the class into small groups and…

  11. Semi-empirical and phenomenological instrument functions for the scanning tunneling microscope

    NASA Astrophysics Data System (ADS)

    Feuchtwang, T. E.; Cutler, P. H.; Notea, A.

    1988-08-01

    Recent progress in the development of a convenient algorithm for the determination of a quantitative local density of states (LDOS) of the sample, from data measured in the STM, is reviewed. It is argued that the sample LDOS strikes a good balance between the information content of a surface characteristic and the effort required to obtain it experimentally. Hence, procedures to determine the sample LDOS as directly and in as tip-model-independent a manner as possible are emphasized. The solution of the STM's "inverse" problem in terms of novel versions of the instrument (or Green) function technique is considered in preference to the well-known, more direct solutions. Two types of instrument functions are considered: approximations of the basic tip-instrument function obtained from the transfer Hamiltonian theory of the STM-STS, and phenomenological instrument functions devised as a systematic scheme for semi-empirical first-order corrections of "ideal" models. The instrument function, in this case, describes the corrections as the response of an independent component of the measuring apparatus inserted between the "ideal" instrument and the measured data. This linear response theory of measurement is reviewed and applied. A procedure for estimating the consistency of the model and the systematic errors due to the use of an approximate instrument function is presented. The independence of the instrument function techniques from explicit microscopic models of the tip is noted. The need for a semi-empirical, as opposed to strictly empirical or analytical, determination of the instrument function is discussed. The extension of the theory to the scanning tunneling spectrometer is noted, as well as its use in a theory of resolution.

  12. Data mining in forecasting PVT correlations of crude oil systems based on Type1 fuzzy logic inference systems

    NASA Astrophysics Data System (ADS)

    El-Sebakhy, Emad A.

    2009-09-01

    Pressure-volume-temperature (PVT) properties are very important in reservoir engineering computations. There are many empirical approaches for predicting various PVT properties based on empirical correlations and statistical regression models. Over the last decade, researchers have utilized neural networks to develop more accurate PVT correlations. These achievements of neural networks opened the door for data mining techniques to play a major role in the oil and gas industry. Unfortunately, the developed neural network correlations are often limited, and global correlations are usually less accurate than local correlations. Recently, adaptive neuro-fuzzy inference systems have been proposed as a new intelligence framework for both prediction and classification based on a fuzzy clustering optimization criterion and ranking. This paper proposes neuro-fuzzy inference systems for estimating PVT properties of crude oil systems. This new framework is an efficient hybrid intelligence machine learning scheme for modeling the kind of uncertainty associated with vagueness and imprecision. We briefly describe the learning steps and the use of the Takagi-Sugeno-Kang model and the Gustafson-Kessel clustering algorithm with K detected clusters from the given database. The approach has been featured in a wide range of medical, power control system, and business journals, often with promising results. A comparative study is carried out to compare the performance of this new framework with the most popular modeling techniques, such as neural networks, nonlinear regression, and the empirical correlation algorithms. The results show that neuro-fuzzy systems are accurate and reliable and outperform most of the existing forecasting techniques. Future work could apply neuro-fuzzy systems to clustering 3D seismic data, identification of lithofacies types, and other reservoir characterization tasks.
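
    As a rough illustration of the fuzzy-inference idea, the sketch below evaluates a tiny Takagi-Sugeno-type system in Python: Gaussian memberships give rule firing strengths, and the output is a weighted average of linear rule consequents. The rule centers, widths, and consequent coefficients are assumed values, not parameters fitted by the Gustafson-Kessel clustering used in the paper.

        # Minimal Takagi-Sugeno-type inference sketch with Gaussian memberships.
        # All rule parameters are hypothetical placeholders.
        import numpy as np

        # Two illustrative rules over a single input (e.g., a solution gas-oil ratio)
        centers = np.array([300.0, 900.0])    # rule centers (assumed)
        sigmas  = np.array([200.0, 250.0])    # rule widths (assumed)
        # First-order consequents: y_i = a_i * x + b_i (assumed coefficients)
        a = np.array([0.0008, 0.0005])
        b = np.array([1.10, 1.35])

        def tsk_predict(x):
            """Weighted average of rule consequents; weights are Gaussian firing strengths."""
            w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)
            y_rules = a * x + b
            return np.sum(w * y_rules) / np.sum(w)

        print(tsk_predict(500.0))   # e.g., a predicted oil formation volume factor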

  13. Empirical validation of an agent-based model of wood markets in Switzerland

    PubMed Central

    Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver

    2018-01-01

    We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Our own surveys closed gaps where data were not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill entering the market and becoming insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of the rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300

  14. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived from small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived from calculations with a Geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values, by 3.7% for a 1.1 cm diameter field and more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
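
    The semiempirical combination described above can be summarized in a short sketch: a Monte Carlo dose-to-water ratio divided by a measured detector output ratio gives the correction factor. The function name and the numerical values below are placeholders for illustration, not results from the study.

        # Minimal sketch: combine a measured detector output ratio with a Monte Carlo
        # dose-to-water ratio to form a small-field correction factor
        # k = (Dw_clin / Dw_msr) / (M_clin / M_msr). Numbers are placeholders.
        def correction_factor(dw_clin, dw_msr, m_clin, m_msr):
            """Detector correction factor for a clinical field relative to the msr field."""
            dose_ratio = dw_clin / dw_msr        # numerical (Monte Carlo) dose-to-water ratio
            reading_ratio = m_clin / m_msr       # empirical detector output ratio
            return dose_ratio / reading_ratio

        # Hypothetical example: a diode that over-responds in the small field (k < 1)
        print(correction_factor(dw_clin=0.68, dw_msr=1.00, m_clin=0.71, m_msr=1.00))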

  15. Modeling techniques for quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-01

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
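
    As a minimal illustration of the finite-difference approach mentioned above, the Python sketch below discretizes the one-dimensional time-independent Schrödinger equation for a single quantum well and solves the resulting tridiagonal eigenproblem. The effective mass, well geometry, and barrier height are assumed GaAs-like values, not an actual quantum cascade design.

        # Finite-difference solution of the 1D time-independent Schrödinger equation
        # for a single square well (illustrative parameters only).
        import numpy as np
        from scipy.linalg import eigh_tridiagonal

        hbar = 1.054571817e-34              # J s
        m_eff = 0.067 * 9.1093837015e-31    # GaAs-like effective mass (assumption)
        eV = 1.602176634e-19

        N = 1000
        L = 40e-9                           # simulation box (m)
        x = np.linspace(0.0, L, N)
        dx = x[1] - x[0]

        # Square well: 0 eV inside a 10 nm well, 0.3 eV barriers elsewhere (illustrative)
        V = np.where(np.abs(x - L / 2) < 5e-9, 0.0, 0.3 * eV)

        # H = -hbar^2/(2m) d2/dx2 + V, second-order central differences
        diag = hbar**2 / (m_eff * dx**2) + V
        offdiag = np.full(N - 1, -hbar**2 / (2 * m_eff * dx**2))
        energies, states = eigh_tridiagonal(diag, offdiag, select="i", select_range=(0, 2))

        print(energies / eV)                # lowest three subband energies in eV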

  16. Modeling techniques for quantum cascade lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.

  17. Assessing Tinto's Model of Institutional Departure Using American Indian and Alaskan Native Longitudinal Data. ASHE Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Pavel, D. Michael

    This paper on postsecondary outcomes illustrates a technique to determine whether or not mainstream models are appropriate for predicting educational outcomes of American Indians (AIs) and Alaskan Natives (ANs). It introduces a prominent statistical procedure to assess models with empirical data and shows how the results can have implications for…

  18. A Bayesian methodological framework for accommodating interannual variability of nutrient loading with the SPARROW model

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Labencki, Tanya; Boyd, Duncan

    2012-10-01

    Regression-type, hybrid empirical/process-based models (e.g., SPARROW, PolFlow) have assumed a prominent role in efforts to estimate the sources and transport of nutrient pollution at river basin scales. However, almost no attempts have been made to explicitly accommodate interannual nutrient loading variability in their structure, despite empirical and theoretical evidence indicating that the associated source/sink processes are quite variable at annual timescales. In this study, we present two methodological approaches to accommodate interannual variability with the Spatially Referenced Regressions on Watershed attributes (SPARROW) nonlinear regression model. The first strategy uses the SPARROW model to estimate a static baseline load and climatic variables (e.g., precipitation) to drive the interannual variability. The second approach allows the source/sink processes within the SPARROW model to vary at annual timescales using dynamic parameter estimation techniques akin to those used in dynamic linear models. Model parameterization is founded upon Bayesian inference techniques that explicitly consider calibration data and model uncertainty. Our case study is the Hamilton Harbor watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. Our analysis suggests that dynamic parameter estimation is the more parsimonious of the two strategies tested and can offer insights into the temporal structural changes associated with watershed functioning. Consistent with empirical and theoretical work, model estimated annual in-stream attenuation rates varied inversely with annual discharge. Estimated phosphorus source areas were concentrated near the receiving water body during years of high in-stream attenuation and dispersed along the main stems of the streams during years of low attenuation, suggesting that nutrient source areas are subject to interannual variability.

  19. Estimated correlation matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2004-11-01

    Correlations of returns on various assets play a central role in financial theory and also in many practical applications. From a theoretical point of view, the main interest lies in the proper description of the structure and dynamics of correlations, whereas for the practitioner the emphasis is on the ability of the models to provide adequate inputs for the numerous portfolio and risk management procedures used in the financial industry. The theory of portfolios, initiated by Markowitz, has suffered from the “curse of dimensions” from the very outset. Over the past decades a large number of different techniques have been developed to tackle this problem and reduce the effective dimension of large bank portfolios, but the efficiency and reliability of these procedures are extremely hard to assess or compare. In this paper, we propose a model (simulation)-based approach which can be used for the systematic testing of all these dimensional reduction techniques. To illustrate the usefulness of our framework, we develop several toy models that display some of the main characteristic features of empirical correlations and generate artificial time series from them. Then, we regard these time series as empirical data and reconstruct the corresponding correlation matrices, which will inevitably contain a certain amount of noise due to the finiteness of the time series. Next, we apply several correlation matrix estimators and dimension reduction techniques introduced in the literature and/or applied in practice. In our artificial world the only source of error is the finite length of the time series, and the “true” model, hence also the “true” correlation matrix, is precisely known; therefore, in sharp contrast with empirical studies, we can precisely compare the performance of the various noise reduction techniques. One of our recurrent observations is that the recently introduced filtering technique based on random matrix theory performs consistently well in all the investigated cases. Based on this experience, we believe that our simulation-based approach can also be useful for the systematic investigation of several related problems of current interest in finance.
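
    A minimal sketch of the simulation-based test described above: generate returns from a known "true" correlation matrix, form the noisy sample correlation from a finite time series, and clean it by clipping eigenvalues below the Marchenko-Pastur noise edge (a standard random-matrix-theory filter). The one-factor toy correlation and the dimensions are assumptions for illustration.

        # Simulate correlated returns, estimate a noisy sample correlation matrix, and
        # apply a simple random-matrix-theory (eigenvalue clipping) filter.
        import numpy as np

        rng = np.random.default_rng(1)
        N, T = 50, 250                        # assets, observations (Q = T/N = 5)

        # "True" one-factor correlation matrix (toy model)
        beta = 0.4
        C_true = np.full((N, N), beta**2) + (1 - beta**2) * np.eye(N)

        # Simulated return series and sample correlation matrix
        L = np.linalg.cholesky(C_true)
        returns = rng.standard_normal((T, N)) @ L.T
        C_sample = np.corrcoef(returns, rowvar=False)

        # Marchenko-Pastur upper edge for pure noise
        q = N / T
        lam_max = (1 + np.sqrt(q)) ** 2

        # Keep eigenvalues above the noise edge, replace the rest by their average
        w, V = np.linalg.eigh(C_sample)
        noise = w < lam_max
        w_clean = w.copy()
        w_clean[noise] = w[noise].mean()
        C_clean = V @ np.diag(w_clean) @ V.T
        np.fill_diagonal(C_clean, 1.0)

        # Typically the cleaned matrix is closer (in Frobenius norm) to the true one
        print(np.linalg.norm(C_sample - C_true), np.linalg.norm(C_clean - C_true))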

  20. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Templeton, D C; Harris, D B

    The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.
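
    The sketch below is a heavily simplified, single-channel analogue of the template-based detection idea: a waveform template is slid over continuous data and detections are declared where the normalized correlation exceeds a threshold. Real empirical matched field processing is multichannel and operates on array data; the synthetic template, noise level, and threshold here are assumptions.

        # Single-channel matched-template detection via normalized cross-correlation.
        import numpy as np

        def normalized_correlation(data, template):
            """Normalized cross-correlation of a template with a longer data stream."""
            n = len(template)
            t = (template - template.mean()) / (template.std() * n)
            out = np.empty(len(data) - n + 1)
            for i in range(len(out)):
                seg = data[i:i + n]
                out[i] = np.sum(t * (seg - seg.mean()) / (seg.std() + 1e-12))
            return out

        rng = np.random.default_rng(2)
        template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100)) * np.hanning(100)
        data = rng.normal(0, 0.3, 5000)
        data[1200:1300] += template          # hidden "event"

        cc = normalized_correlation(data, template)
        detections = np.flatnonzero(cc > 0.6)
        print(detections[:5])                # sample indices near 1200 expected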

  1. Income Smoothing: Methodology and Models.

    DTIC Science & Technology

    1986-05-01

    studies have all followed a similar research process (Figure 1). All were ex post studies and included the following steps: 1. A smoothing technique(s) or...researcher methodological decisions used in past empirical studies of income smoothing (design type, smoothing device norm, and income target) are discussed...behavior. The identification of smoothing, and consequently the conclusions to be drawn from smoothing studies, is found to be sensitive to the three

  2. An empirical investigation of spatial differentiation and price floor regulations in retail markets for gasoline

    NASA Astrophysics Data System (ADS)

    Houde, Jean-Francois

    In the first essay of this dissertation, I study an empirical model of spatial competition. The main feature of my approach is to formally specify commuting paths as the "locations" of consumers in a Hotelling-type model of spatial competition. The main consequence of this location assumption is that the substitution patterns between stations depend in an intuitive way on the structure of the road network and the direction of traffic flows. The demand-side of the model is estimated by combining a model of traffic allocation with econometric techniques used to estimate models of demand for differentiated products (Berry, Levinsohn and Pakes (1995)). The estimated parameters are then used to evaluate the importance of commuting patterns in explaining the distribution of gasoline sales, and compare the economic predictions of the model with the standard home-location model. In the second and third essays, I examine empirically the effect of a price floor regulation on the dynamic and static equilibrium outcomes of the gasoline retail industry. In particular, in the second essay I study empirically the dynamic entry and exit decisions of gasoline stations, and measure the impact of a price floor on the continuation values of staying in the industry. In the third essay, I develop and estimate a static model of quantity competition subject to a price floor regulation. Both models are estimated using a rich panel dataset on the Quebec gasoline retail market before and after the implementation of a price floor regulation.

  3. Traditional Arabic & Islamic medicine: validation and empirical assessment of a conceptual model in Qatar.

    PubMed

    AlRawi, Sara N; Khidir, Amal; Elnashar, Maha S; Abdelrahim, Huda A; Killawi, Amal K; Hammoud, Maya M; Fetters, Michael D

    2017-03-14

    Evidence indicates traditional medicine is no longer used only for the healthcare of the poor; its prevalence is also increasing in countries where allopathic medicine is predominant in the healthcare system. While these healing practices have been utilized for thousands of years in the Arabian Gulf, only recently has a theoretical model been developed illustrating the linkages and components of such practices articulated as Traditional Arabic & Islamic Medicine (TAIM). Despite previous theoretical work presenting development of the TAIM model, empirical support has been lacking. The objective of this research is to provide empirical support for the TAIM model and illustrate real world applicability. Using an ethnographic approach, we recruited 84 individuals (43 women and 41 men) who were speakers of one of four common languages in Qatar: Arabic, English, Hindi, and Urdu. Through in-depth interviews, we sought confirming and disconfirming evidence of the model components, namely, health practices, beliefs and philosophy to treat, diagnose, and prevent illnesses and/or maintain well-being, as well as patterns of communication about their TAIM practices with their allopathic providers. Based on our analysis, we find empirical support for all elements of the TAIM model. Participants in this research, visitors to major healthcare centers, mentioned using all elements of the TAIM model: herbal medicines, spiritual therapies, dietary practices, mind-body methods, and manual techniques, applied singularly or in combination. Participants had varying levels of comfort sharing information about TAIM practices with allopathic practitioners. These findings confirm an empirical basis for the elements of the TAIM model. Three elements, namely, spiritual healing, herbal medicine, and dietary practices, were most commonly found. Future research should examine the prevalence of TAIM element use, how it differs among various populations, and its impact on health.

  4. STEAM: a software tool based on empirical analysis for micro electro mechanical systems

    NASA Astrophysics Data System (ADS)

    Devasia, Archana; Pasupuleti, Ajay; Sahin, Ferat

    2006-03-01

    In this research a generalized software framework that enables accurate computer-aided design of MEMS devices is developed. The proposed simulation engine utilizes a novel material property estimation technique that generates effective material properties at the microscopic level. The material property models were developed based on empirical analysis and the behavior extraction of standard test structures. A literature review is provided on the physical phenomena that govern the mechanical behavior of thin film materials. This survey indicates that present-day models operate under a wide range of assumptions that may not be applicable to the micro-world. Thus, this methodology is foreseen to be an essential tool for MEMS designers as it would develop empirical models that relate the loading parameters, material properties, and the geometry of the microstructures with their performance characteristics. This process involves learning the relationship between the above parameters using non-parametric learning algorithms such as radial basis function networks and genetic algorithms. The proposed simulation engine has a graphical user interface (GUI) which is very adaptable, flexible, and transparent. The GUI is able to encompass all parameters associated with the determination of the desired material property so as to create models that provide an accurate estimation of the desired property. This technique was verified by fabricating and simulating bilayer cantilevers consisting of aluminum and glass (TEOS oxide) in our previous work. The results obtained were found to be very encouraging.

  5. Improving Marine Ecosystem Models with Biochemical Tracers

    NASA Astrophysics Data System (ADS)

    Pethybridge, Heidi R.; Choy, C. Anela; Polovina, Jeffrey J.; Fulton, Elizabeth A.

    2018-01-01

    Empirical data on food web dynamics and predator-prey interactions underpin ecosystem models, which are increasingly used to support strategic management of marine resources. These data have traditionally derived from stomach content analysis, but new and complementary forms of ecological data are increasingly available from biochemical tracer techniques. Extensive opportunities exist to improve the empirical robustness of ecosystem models through the incorporation of biochemical tracer data and derived indices, an area that is rapidly expanding because of advances in analytical developments and sophisticated statistical techniques. Here, we explore the trophic information required by ecosystem model frameworks (species, individual, and size based) and match them to the most commonly used biochemical tracers (bulk tissue and compound-specific stable isotopes, fatty acids, and trace elements). Key quantitative parameters derived from biochemical tracers include estimates of diet composition, niche width, and trophic position. Biochemical tracers also provide powerful insight into the spatial and temporal variability of food web structure and the characterization of dominant basal and microbial food web groups. A major challenge in incorporating biochemical tracer data into ecosystem models is scale and data type mismatches, which can be overcome with greater knowledge exchange and numerical approaches that transform, integrate, and visualize data.

  6. An Econometric Examination of the Behavioral Perspective Model in the Context of Norwegian Retailing

    ERIC Educational Resources Information Center

    Sigurdsson, Valdimar; Kahamseh, Saeed; Gunnarsson, Didrik; Larsen, Nils Magne; Foxall, Gordon R.

    2013-01-01

    The behavioral perspective model's (BPM; Foxall, 1990) retailing literature is built on extensive empirical research and techniques that were originally refined in choice experiments in behavioral economics and behavior analysis, and then tested mostly on British consumer panel data. We test the BPM in the context of Norwegian retailing. This…

  7. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE), and a linear pseudo model for a nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. The present research paper, however, introduces an innovative method to compute the NLSE using principles in multivariate calculus. This study is concerned with very new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for a nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the LSE (least squares estimator) for fitting nonlinear regression functions in 2006. Jae Myung [13] provided a conceptual guide to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
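
    For readers who want a concrete computational counterpart, the sketch below computes a nonlinear least squares estimator numerically with scipy.optimize.least_squares for an illustrative exponential-decay regression. The model form, starting values, and synthetic data are assumptions and stand in for the general nonlinear regression setting discussed in the paper.

        # Numerical NLSE for an illustrative model y = a*exp(-b*x) + c.
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 10.0, 80)
        true = np.array([2.0, 0.6, 0.5])                       # a, b, c
        y = true[0] * np.exp(-true[1] * x) + true[2] + rng.normal(0, 0.05, x.size)

        def residuals(theta, x, y):
            a, b, c = theta
            return a * np.exp(-b * x) + c - y

        fit = least_squares(residuals, x0=[1.0, 1.0, 0.0], args=(x, y))
        print(fit.x)        # NLSE of (a, b, c)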

  8. Empirical analysis of storm-time energetic electron enhancements

    NASA Astrophysics Data System (ADS)

    O'Brien, Thomas Paul, III

    This Ph.D. thesis documents a program for studying the appearance of energetic electrons in the Earth's outer radiation belts that is associated with many geomagnetic storms. The dynamic evolution of the electron radiation belts is an outstanding empirical problem in both theoretical space physics and its applied sibling, space weather. The project emphasizes the development of empirical tools and their use in testing several theoretical models of the energization of the electron belts. First, I develop the Statistical Asynchronous Regression technique to provide proxy electron fluxes throughout the parts of the radiation belts explored by geosynchronous and GPS spacecraft. Next, I show that a theoretical adiabatic model can relate the local time asymmetry of the proxy geosynchronous fluxes to the asymmetry of the geomagnetic field. Then, I perform a superposed epoch analysis on the proxy fluxes at local noon to identify magnetospheric and interplanetary precursors of relativistic electron enhancements. Finally, I use statistical and neural network phase space analyses to determine the hourly evolution of flux at a virtual stationary monitor. The dynamic equation quantitatively identifies the importance of different drivers of the electron belts. This project provides empirical constraints on theoretical models of electron acceleration.

  9. Numerical methods for assessing water quality in lakes and reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahamah, D.S.

    1984-01-01

    Water quality models are used as tools for predicting both short-term and long-term trends in water quality. They are generally classified into two groups based on the degree of empiricism. The two groups consist of the purely empirical types, known as black-box models, and the theoretical types, called ecosystem models. This dissertation deals with both types of water quality models. The first part deals with empirical phosphorus models. The theory behind this class of models is discussed, leading to the development of an empirical phosphorus model using data from 79 western US lakes. A new approach to trophic state classification is introduced. The data used for the model were obtained from the Environmental Protection Agency National Eutrophication Study (EPA-NES) of western US lakes. The second portion of the dissertation discusses the development of an ecosystem model for culturally eutrophic Liberty Lake situated in eastern Washington State. The model is capable of simulating chlorophyll-a, phosphorus, and nitrogen levels in the lake on a weekly basis. For computing sediment release rates of phosphorus and nitrogen, equations based on laboratory bench-top studies using sediment samples from Liberty Lake are used. The model is used to simulate certain hypothetical nutrient control techniques such as phosphorus flushing, precipitation, and diversion.

  10. Empirical Corrections to Nutation Amplitudes and Precession Computed from a Global VLBI Solution

    NASA Astrophysics Data System (ADS)

    Schuh, H.; Ferrandiz, J. M.; Belda-Palazón, S.; Heinkelmann, R.; Karbon, M.; Nilsson, T.

    2017-12-01

    The IAU2000A nutation and IAU2006 precession models were adopted to provide accurate estimates and predictions of the Celestial Intermediate Pole (CIP). However, they are not fully accurate, and VLBI (Very Long Baseline Interferometry) observations show that the CIP deviates from the position resulting from the application of the IAU2006/2000A model. Currently, those deviations or offsets of the CIP (Celestial Pole Offsets, CPO) can only be obtained by the VLBI technique. An accuracy of the order of 0.1 milliarcseconds (mas) allows the observed nutation to be compared with theoretical prediction models for a rigid Earth and constrains the geophysical parameters describing the Earth's interior. In this study, we empirically evaluate the consistency, systematics, and deviations of the IAU 2006/2000A precession-nutation model using several CPO time series derived from the global analysis of VLBI sessions. The final objective is the reassessment of the precession offset and rate, and of the amplitudes of the principal terms of nutation, aiming to empirically improve the conventional values derived from the precession/nutation theories. The statistical analysis of the residuals after re-fitting the main nutation terms demonstrates that our empirical corrections attain an error reduction of almost 15 microarcseconds.

  11. Creating A 3D urban model by terrestrial laser scanners and photogrammetry techniques: a case study on the historical peninsula of Istanbul

    NASA Astrophysics Data System (ADS)

    Ergun, Bahadir

    2007-07-01

    Today, terrestrial laser scanning is a frequently used methodology for the documentation of historical buildings and cultural heritage. The historical peninsula region, which contains a dense concentration of historical buildings to be documented, covers approximately 1500 ha. Terrestrial laser scanning and close range image photogrammetry techniques are integrated with each other to create a 3D urban model of Istanbul including the most important landmarks and the buildings reflecting the most brilliant eras of the Byzantine and Ottoman Empires.

  12. A model of therapist competencies for the empirically supported interpersonal psychotherapy for adolescent depression.

    PubMed

    Sburlati, Elizabeth S; Lyneham, Heidi J; Mufson, Laura H; Schniering, Carolyn A

    2012-06-01

    In order to treat adolescent depression, a number of empirically supported treatments (ESTs) have been developed from both the cognitive behavioral therapy (CBT) and interpersonal psychotherapy (IPT-A) frameworks. Research has shown that in order for these treatments to be implemented in routine clinical practice (RCP), effective therapist training must be generated and provided. However, before such training can be developed, a good understanding of the therapist competencies needed to implement these ESTs is required. Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011) developed a model of therapist competencies for implementing CBT using the well-established Delphi technique. Given that IPT-A differs considerably from CBT, the current study aims to develop a model of therapist competencies for the implementation of IPT-A using a similar procedure as that applied in Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011). This method involved: (1) identifying and reviewing an empirically supported IPT-A approach, (2) extracting therapist competencies required for the implementation of IPT-A, (3) consulting with a panel of IPT-A experts to generate an overall model of therapist competencies, and (4) validating the overall model with the IPT-A manual author. The resultant model offers an empirically derived set of competencies necessary for effectively treating adolescent depression using IPT-A and has wide implications for the development of therapist training, competence assessment measures, and evidence-based practice guidelines. This model, therefore, provides an empirical framework for the development of dissemination and implementation programs aimed at ensuring that adolescents with depression receive effective care in RCP settings. Key similarities and differences between CBT and IPT-A, and the therapist competencies required for implementing these treatments, are also highlighted throughout this article.

  13. A robust empirical seasonal prediction of winter NAO and surface climate.

    PubMed

    Wang, L; Ting, M; Kushner, P J

    2017-03-21

    A key determinant of winter weather and climate in Europe and North America is the North Atlantic Oscillation (NAO), the dominant mode of atmospheric variability in the Atlantic domain. Skilful seasonal forecasting of the surface climate in both Europe and North America is reflected largely in how accurately models can predict the NAO. Most dynamical models, however, have limited skill in seasonal forecasts of the winter NAO. A new empirical model is proposed for the seasonal forecast of the winter NAO that exhibits higher skill than current dynamical models. The empirical model provides robust and skilful prediction of the December-January-February (DJF) mean NAO index using a multiple linear regression (MLR) technique with autumn conditions of sea-ice concentration, stratospheric circulation, and sea-surface temperature. The predictability is, for the most part, derived from the relatively long persistence of sea ice in the autumn. The lower stratospheric circulation and sea-surface temperature appear to play more indirect roles through a series of feedbacks among systems driving NAO evolution. This MLR model also provides skilful seasonal outlooks of winter surface temperature and precipitation over many regions of Eurasia and eastern North America.
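
    A minimal sketch of the multiple-linear-regression idea: regress a winter index on autumn predictors and assess skill by leave-one-out hindcasting. The predictor and response arrays below are random placeholders; in the paper they would be observed autumn sea-ice, stratospheric circulation, and sea-surface temperature indices and the DJF NAO index.

        # Leave-one-out hindcast of a winter index from autumn predictors via MLR.
        import numpy as np

        rng = np.random.default_rng(4)
        n_years = 35
        X = rng.standard_normal((n_years, 3))        # autumn sea ice, stratosphere, SST indices
        beta_true = np.array([0.8, -0.4, 0.3])
        nao = X @ beta_true + rng.normal(0, 0.5, n_years)

        pred = np.empty(n_years)
        for i in range(n_years):
            keep = np.arange(n_years) != i
            A = np.column_stack([np.ones(keep.sum()), X[keep]])
            coef, *_ = np.linalg.lstsq(A, nao[keep], rcond=None)
            pred[i] = coef[0] + X[i] @ coef[1:]

        print(np.corrcoef(pred, nao)[0, 1])          # cross-validated correlation skill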

  14. Improved Ionospheric Electrodynamic Models and Application to Calculating Joule Heating Rates

    NASA Technical Reports Server (NTRS)

    Weimer, D. R.

    2004-01-01

    Improved techniques have been developed for the empirical modeling of high-latitude electric potentials and magnetic field-aligned currents (FAC) as a function of solar wind parameters. The FAC model is constructed using scalar magnetic Euler potentials and functions as a twin to the electric potential model. The improved models have more accurate field values as well as more accurate boundary locations. Non-linear saturation effects in the solar wind-magnetosphere coupling are also better reproduced. The models are constructed using a hybrid technique, which has spherical harmonic functions only within a small area at the pole. At lower latitudes the potentials are constructed from multiple Fourier series functions of longitude, at discrete latitudinal steps. It is shown that the two models can be used together in order to calculate the total Poynting flux and Joule heating in the ionosphere. An additional model of the ionospheric conductivity is not required in order to obtain the ionospheric currents and Joule heating, as the conductivity variations as a function of the solar inclination are implicitly contained within the FAC model's data. The models' outputs are shown for various input conditions, as well as compared with satellite measurements. The calculations of the total Joule heating are compared with results obtained by the inversion of ground-based magnetometer measurements. Like their predecessors, these empirical models should continue to be useful research and forecast tools.

  15. Ozone data and mission sampling analysis

    NASA Technical Reports Server (NTRS)

    Robbins, J. L.

    1980-01-01

    A methodology was developed to analyze discrete data obtained from the global distribution of ozone. Statistical analysis techniques were applied to describe the distribution of data variance in terms of empirical orthogonal functions and components of spherical harmonic models. The effects of uneven data distribution and missing data were considered. Data fill based on the autocorrelation structure of the data is described. Computer coding of the analysis techniques is included.
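
    As an illustration of the empirical-orthogonal-function analysis mentioned above, the sketch below removes the time mean from a space-time data matrix and obtains EOFs, principal components, and explained variance from its singular value decomposition. The data matrix is a synthetic placeholder for a gridded ozone field.

        # EOF decomposition of a space-time data matrix via the SVD.
        import numpy as np

        rng = np.random.default_rng(5)
        n_time, n_space = 120, 500
        field = rng.standard_normal((n_time, n_space))          # hypothetical ozone anomalies

        anomalies = field - field.mean(axis=0)
        U, S, Vt = np.linalg.svd(anomalies, full_matrices=False)

        eofs = Vt                        # spatial patterns (rows)
        pcs = U * S                      # principal-component time series
        explained = S**2 / np.sum(S**2)  # fraction of variance per mode

        print(explained[:5])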

  16. Performance Monitoring Of A Computer Numerically Controlled (CNC) Lathe Using Pattern Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Daneshmend, L. K.; Pak, H. A.

    1984-02-01

    On-line monitoring of the cutting process in a CNC lathe is desirable to ensure unattended fault-free operation in an automated environment. The state of the cutting tool is one of the most important parameters which characterise the cutting process. Direct monitoring of the cutting tool or workpiece is not feasible during machining. However, several variables related to the state of the tool can be measured on-line. A novel monitoring technique is presented which uses cutting torque as the variable for on-line monitoring. A classifier is designed on the basis of the empirical relationship between cutting torque and flank wear. The empirical model required by the on-line classifier is established during an automated training cycle using machine vision for off-line direct inspection of the tool.

  17. Intercomparison of Meteorological Forcing Data from Empirical and Mesoscale Model Sources in the N.F. American River Basin in northern California

    NASA Astrophysics Data System (ADS)

    Wayand, N. E.; Hamlet, A. F.; Hughes, M. R.; Feld, S.; Lundquist, J. D.

    2012-12-01

    The data required to drive distributed hydrological models are significantly limited within mountainous terrain due to a scarcity of observations. This study evaluated three common configurations of forcing data: a) one low-elevation station combined with empirical techniques, b) gridded output from the Weather Research and Forecasting (WRF) model, and c) a combination of the two. Each configuration was evaluated within the heavily instrumented North Fork American River Basin in northern California during October-June of 2000-2010. Simulations of streamflow and snowpack using the Distributed Hydrology Soil and Vegetation Model (DHSVM) highlighted precipitation and radiation as variables whose sources resulted in significant differences. The best source of precipitation data varied between years. On average, the performance of WRF and of the single station distributed using the Parameter-elevation Regressions on Independent Slopes Model (PRISM) was not significantly different. The average percent biases in simulated streamflow were 3.4% and 0.9% for configurations a) and b), respectively, even though precipitation compared directly with gauge measurements was biased high by 6% and 17%, suggesting that gauge undercatch may explain part of the bias. Simulations of snowpack using empirically estimated long-wave irradiance resulted in melt rates lower than those observed at high-elevation sites, while at lower elevations the same forcing caused significant mid-winter melt that was not observed (Figure 1). These results highlight the complexity of how forcing data sources impact hydrology over different areas (high vs. low elevation snow) and different time periods. Overall, the results support the use of output from the WRF model over empirical techniques in regions with limited station data. FIG. 1. (a,b) Simulated SWE from DHSVM compared to observations at the Sierra Snow Lab (2100 m) and Blue Canyon (1609 m) during 2008-2009. Modeled (c,d) internal pack temperature, (e,f) downward short-wave irradiance, (g,h) downward long-wave irradiance, and (i,j) net irradiance. Note that plots e, g, i focus on the melt season (March-May), and plots f, h, j focus on the erroneous mid-winter melt event during January; the time periods are marked with vertical dashed lines in (a) and (b).

  18. Machine learning strategies for systems with invariance properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan

    Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which opens up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper specifically addresses physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.
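
    The contrast between the two strategies can be made concrete with a toy problem whose target is rotation invariant (it depends only on the vector norm): strategy one trains on an invariant input basis, while strategy two trains on raw coordinates augmented with random rotations. The task, the random forest regressor, and all parameters below are illustrative assumptions, not the turbulence or crystal-elasticity case studies of the paper.

        # Invariant-feature embedding vs. rotation augmentation on a toy invariant target.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(6)
        X = rng.uniform(-1, 1, size=(2000, 2))
        y = np.linalg.norm(X, axis=1) ** 2                  # rotation-invariant target

        # Strategy 1: embed invariance via an invariant feature basis (here, |x|^2)
        inv_features = np.sum(X**2, axis=1, keepdims=True)
        model_inv = RandomForestRegressor(n_estimators=50, random_state=0).fit(inv_features, y)

        # Strategy 2: learn invariance from randomly rotated copies of the raw inputs
        thetas = rng.uniform(0, 2 * np.pi, size=5)
        X_aug, y_aug = [X], [y]
        for t in thetas:
            R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
            X_aug.append(X @ R.T)
            y_aug.append(y)
        model_aug = RandomForestRegressor(n_estimators=50, random_state=0).fit(
            np.vstack(X_aug), np.concatenate(y_aug))

        # Evaluate both on rotated test data
        X_test = rng.uniform(-1, 1, size=(500, 2))
        R90 = np.array([[0.0, -1.0], [1.0, 0.0]])
        X_test_rot = X_test @ R90.T
        y_test = np.linalg.norm(X_test, axis=1) ** 2

        feat_test = np.sum(X_test_rot**2, axis=1, keepdims=True)
        err_inv = np.mean((model_inv.predict(feat_test) - y_test) ** 2)
        err_aug = np.mean((model_aug.predict(X_test_rot) - y_test) ** 2)
        print(err_inv, err_aug)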

  19. Machine learning strategies for systems with invariance properties

    DOE PAGES

    Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan

    2016-05-06

    Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which opens up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper specifically addresses physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.

  20. A framework for studying transient dynamics of population projection matrix models.

    PubMed

    Stott, Iain; Townley, Stuart; Hodgson, David James

    2011-09-01

    Empirical models are central to effective conservation and population management, and should be predictive of real-world dynamics. Available modelling methods are diverse, but analysis usually focuses on long-term dynamics that are unable to describe the complicated short-term time series that can arise even from simple models following ecological disturbances or perturbations. Recent interest in such transient dynamics has led to diverse methodologies for their quantification in density-independent, time-invariant population projection matrix (PPM) models, but the fragmented nature of this literature has stifled the widespread analysis of transients. We review the literature on transient analyses of linear PPM models and synthesise a coherent framework. We promote the use of standardised indices, and categorise indices according to their focus on either convergence times or transient population density, and on either transient bounds or case-specific transient dynamics. We use a large database of empirical PPM models to explore relationships between indices of transient dynamics. This analysis promotes the use of population inertia as a simple, versatile and informative predictor of transient population density, but criticises the utility of established indices of convergence times. Our findings should guide further development of analyses of transient population dynamics using PPMs or other empirical modelling techniques. © 2011 Blackwell Publishing Ltd/CNRS.
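
    A minimal sketch of the kind of case-specific transient indices discussed above: for a population projection matrix and an initial stage structure, compute the asymptotic growth rate, the first-timestep amplification ("reactivity"), and population inertia from the dominant right and left eigenvectors. The matrix and initial vector are illustrative, not entries from the empirical PPM database used in the paper.

        # Asymptotic growth, reactivity, and population inertia for a toy 3-stage PPM.
        import numpy as np

        A = np.array([[0.0, 1.5, 3.0],     # fecundities (hypothetical life cycle)
                      [0.5, 0.0, 0.0],     # survival/transition probabilities
                      [0.0, 0.4, 0.8]])
        n0 = np.array([1.0, 0.0, 0.0])     # all individuals initially in stage 1

        vals, vecs = np.linalg.eig(A)
        i = np.argmax(vals.real)
        lam = vals.real[i]
        w = np.abs(vecs[:, i].real)                     # stable stage structure (right eigenvector)

        vals_T, vecs_T = np.linalg.eig(A.T)
        j = np.argmax(vals_T.real)
        v = np.abs(vecs_T[:, j].real)                   # reproductive values (left eigenvector)

        n0_hat = n0 / n0.sum()
        A_hat = A / lam                                 # standardized PPM (removes asymptotic growth)

        reactivity = np.linalg.norm(A_hat @ n0_hat, 1)  # amplification after one time step
        inertia = (v @ n0_hat) * w.sum() / (v @ w)      # long-term transient amplification

        print(lam, reactivity, inertia)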

  1. A novel hybrid ensemble learning paradigm for tourism forecasting

    NASA Astrophysics Data System (ADS)

    Shabri, Ani

    2015-02-01

    In this paper, a hybrid forecasting model based on Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed to forecast tourism demand. This methodology first decomposes the original visitor arrival series into several Intrinsic Mode Function (IMF) components and one residual component by the EMD technique. Then, the IMF components and the residual component are each forecast using a GMDH model whose input variables are selected by using the Partial Autocorrelation Function (PACF). The final forecast for the tourism series is produced by aggregating all the component forecasts. For evaluating the performance of the proposed EMD-GMDH methodology, the monthly data of tourist arrivals from Singapore to Malaysia are used as an illustrative example. Empirical results show that the proposed EMD-GMDH model outperforms the EMD-ARIMA as well as the GMDH and ARIMA (Autoregressive Integrated Moving Average) models without time series decomposition.
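
    The decompose-forecast-aggregate pattern can be sketched as follows, assuming the intrinsic mode functions have already been extracted by an EMD implementation (synthetic components stand in here) and using a simple one-lag autoregression per component as a stand-in for the paper's GMDH models.

        # Forecast each (pre-extracted) component separately, then sum the forecasts.
        import numpy as np

        rng = np.random.default_rng(7)
        t = np.arange(240)

        # Hypothetical "IMF" components and trend residual of a monthly arrivals series
        components = [np.sin(2 * np.pi * t / 12.0),
                      0.5 * np.sin(2 * np.pi * t / 60.0),
                      0.01 * t]

        def ar1_forecast(series, horizon):
            """Fit y_t = c + phi*y_{t-1} by least squares and forecast recursively."""
            y, ylag = series[1:], series[:-1]
            A = np.column_stack([np.ones_like(ylag), ylag])
            (c, phi), *_ = np.linalg.lstsq(A, y, rcond=None)
            out, last = [], series[-1]
            for _ in range(horizon):
                last = c + phi * last
                out.append(last)
            return np.array(out)

        horizon = 12
        forecast = sum(ar1_forecast(comp, horizon) for comp in components)
        print(forecast)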

  2. An accurate European option pricing model under Fractional Stable Process based on Feynman Path Integral

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Ma, Qinghua; Yao, Haixiang; Hou, Tiancheng

    2018-03-01

    In this paper, we propose to use the Fractional Stable Process (FSP) for option pricing. The FSP is one of the few candidates that can directly model a number of desired empirical properties of asset price risk-neutral dynamics. However, pricing the vanilla European option under the FSP is difficult and problematic. In the paper, built upon Feynman Path Integral inspired techniques, we present a novel computational model for option pricing, i.e. the Fractional Stable Process Path Integral (FSPPI) model under a general fractional stable distribution, that tackles this problem. Numerical and empirical experiments show that the proposed pricing model provides a correction of the Black-Scholes pricing error (overpricing long term options, underpricing short term options; overpricing out-of-the-money options, underpricing in-the-money options) without any additional structures such as stochastic volatility or a jump process.

  3. Training young scientists across empirical and modeling approaches

    NASA Astrophysics Data System (ADS)

    Moore, D. J.

    2014-12-01

    The "fluxcourse," is a two-week program of study on Flux Measurements and Advanced Modeling (www.fluxcourse.org). Since 2007, this course has trained early career scientists to use both empirical observations and models to tackle terrestrial ecological questions. The fluxcourse seeks to cross train young scientists in measurement techniques and advanced modeling approaches for quantifying carbon and water fluxes between the atmosphere and the biosphere. We invited between ten and twenty volunteer instructors depending on the year ranging in experience and expertise, including representatives from industry, university professors and research specialists. The course combines online learning, lecture and discussion with hands on activities that range from measuring photosynthesis and installing an eddy covariance system to wrangling data and carrying out modeling experiments. Attendees are asked to develop and present two different group projects throughout the course. The overall goal is provide the next generation of scientists with the tools to tackle complex problems that require collaboration.

  4. Use of Empirical Estimates of Shrinkage in Multiple Regression: A Caution.

    ERIC Educational Resources Information Center

    Kromrey, Jeffrey D.; Hines, Constance V.

    1995-01-01

    The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…
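
    As an illustration of one empirical approach to shrinkage, the sketch below computes a bootstrap (optimism-corrected) estimate of the squared multiple correlation: refit the regression on bootstrap resamples and measure how much R-squared drops when each refit is applied back to the original sample. The data are synthetic placeholders, and this is only one of several possible empirical shrinkage procedures.

        # Bootstrap (optimism) estimate of R-squared shrinkage in multiple regression.
        import numpy as np

        rng = np.random.default_rng(8)
        n, p = 60, 5
        X = rng.standard_normal((n, p))
        y = X[:, 0] * 0.5 + rng.standard_normal(n)

        def ols(X, y):
            A = np.column_stack([np.ones(len(y)), X])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coef

        def r_squared(X, y, coef):
            resid = y - np.column_stack([np.ones(len(y)), X]) @ coef
            return 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)

        coef_full = ols(X, y)
        r2_apparent = r_squared(X, y, coef_full)

        optimism = []
        for _ in range(500):
            idx = rng.integers(0, n, n)                 # bootstrap resample
            coef_b = ols(X[idx], y[idx])
            optimism.append(r_squared(X[idx], y[idx], coef_b) - r_squared(X, y, coef_b))

        r2_shrunken = r2_apparent - np.mean(optimism)
        print(r2_apparent, r2_shrunken)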

  5. Man-machine analysis of translation and work tasks of Skylab films

    NASA Technical Reports Server (NTRS)

    Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.

    1979-01-01

    An objective approach to determine the concurrent validity of computer-graphic models is real time film analysis. This technique was illustrated through the procedures and results obtained in an evaluation of translation of Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer assisted techniques for interface design and evaluation have the potential for improving the capability for human factors engineering.

  6. Modeling Spanish Mood Choice in Belief Statements

    ERIC Educational Resources Information Center

    Robinson, Jason R.

    2013-01-01

    This work develops a computational methodology new to linguistics that empirically evaluates competing linguistic theories on Spanish verbal mood choice through the use of computational techniques to learn mood and other hidden linguistic features from Spanish belief statements found in corpora. The machine learned probabilistic linguistic models…

  7. Economic analysis of secondary and enhanced oil recovery techniques in Wyoming

    NASA Astrophysics Data System (ADS)

    Kara, Erdal

    This dissertation primarily aims to theoretically analyze a firm's optimization of enhanced oil recovery (EOR) and carbon dioxide sequestration under different social policies and empirically analyze the firm's optimization of enhanced oil recovery. The final part of the dissertation empirically analyzes how geological factors and water injection management influence oil recovery. The first chapter builds a theoretical model to analyze economic optimization of EOR and geological carbon sequestration under different social policies. Specifically, it analyzes how social policies on sequestration influence the extent of oil operations, optimal oil production and CO2 sequestration. The theoretical results show that the socially optimal policy is a subsidy on the net CO2 sequestration, assuming negative net emissions from EOR. Such a policy is expected to increase a firm's total carbon dioxide sequestration. The second chapter statistically estimates the theoretical oil production model and its different versions. Empirical results are not robust over different estimation techniques and not in line with the theoretical production model. The last part of the second chapter utilizes a simplified version of theoretical model and concludes that EOR via CO2 injection improves oil recovery. The final chapter analyzes how a contemporary oil recovery technology (water flooding of oil reservoirs) and various reservoir-specific geological factors influence oil recovery in Wyoming. The results show that there is a positive concave relationship between cumulative water injection and cumulative oil recovery and also show that certain geological factors affect the oil recovery. Moreover, the curvature of the concave functional relationship between cumulative water injection and oil recovery is reservoir-specific due to heterogeneities among different reservoirs.

  8. VMF3/GPT3: refined discrete and empirical troposphere mapping functions

    NASA Astrophysics Data System (ADS)

    Landskron, Daniel; Böhm, Johannes

    2018-04-01

    Incorrect modeling of troposphere delays is one of the major error sources for space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). Over the years, many approaches have been devised which aim at mapping the delay of radio waves from zenith direction down to the observed elevation angle, so-called mapping functions. This paper contains a new approach intended to refine the currently most important discrete mapping function, the Vienna Mapping Functions 1 (VMF1), which is successively referred to as Vienna Mapping Functions 3 (VMF3). It is designed in such a way as to eliminate shortcomings in the empirical coefficients b and c and in the tuning for the specific elevation angle of 3°. Ray-traced delays of the ray-tracer RADIATE serve as the basis for the calculation of new mapping function coefficients. Comparisons of modeled slant delays demonstrate the ability of VMF3 to approximate the underlying ray-traced delays more accurately than VMF1 does, in particular at low elevation angles. In other words, when requiring highest precision, VMF3 is to be preferable to VMF1. Aside from revising the discrete form of mapping functions, we also present a new empirical model named Global Pressure and Temperature 3 (GPT3) on a 5°× 5° as well as a 1°× 1° global grid, which is generally based on the same data. Its main components are hydrostatic and wet empirical mapping function coefficients derived from special averaging techniques of the respective (discrete) VMF3 data. In addition, GPT3 also contains a set of meteorological quantities which are adopted as they stand from their predecessor, Global Pressure and Temperature 2 wet. Thus, GPT3 represents a very comprehensive troposphere model which can be used for a series of geodetic as well as meteorological and climatological purposes and is fully consistent with VMF3.
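
    For context, mapping functions of the VMF family use a Herring-type continued fraction to project the zenith delay to an elevation angle e, mf(e) = (1 + a/(1 + b/(1 + c))) / (sin e + a/(sin e + b/(sin e + c))). The sketch below evaluates this form; the coefficient values are placeholders, not actual VMF3/GPT3 products.

        # Evaluate a Herring-type continued-fraction mapping function.
        import numpy as np

        def mapping_function(elev_rad, a, b, c):
            """Continued-fraction mapping function of the VMF type."""
            num = 1.0 + a / (1.0 + b / (1.0 + c))
            s = np.sin(elev_rad)
            den = s + a / (s + b / (s + c))
            return num / den

        # Hypothetical hydrostatic coefficients and a 3-degree elevation angle
        a_h, b_h, c_h = 1.25e-3, 2.9e-3, 62.6e-3
        elev = np.radians(3.0)
        slant_factor = mapping_function(elev, a_h, b_h, c_h)
        print(slant_factor)     # roughly the factor by which the zenith delay is stretched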

  9. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, the multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and the effective federal fund rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.
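
    The record above applies particle swarm optimization (PSO) to the network's initial weights and forecasts each decomposed component separately. The sketch below is a simplification under those caveats: it omits the decomposition step and lets a plain global-best swarm tune the weights of a small one-step-ahead feedforward predictor directly on a toy series; all settings are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy series standing in for one decomposed component of the interest-rate variation.
        t = np.arange(300)
        series = np.sin(0.07 * t) + 0.1 * rng.standard_normal(300)

        # Build (lagged inputs -> next value) training pairs.
        lags = 4
        X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
        y = series[lags:]

        n_hidden = 6
        n_w = lags * n_hidden + n_hidden + n_hidden + 1   # W1, b1, w2, b2 flattened

        def predict(w, X):
            W1 = w[:lags * n_hidden].reshape(lags, n_hidden)
            b1 = w[lags * n_hidden:lags * n_hidden + n_hidden]
            w2 = w[lags * n_hidden + n_hidden:-1]
            b2 = w[-1]
            return np.tanh(X @ W1 + b1) @ w2 + b2

        def mse(w):
            return np.mean((predict(w, X) - y) ** 2)

        # Plain global-best particle swarm over the flattened weight vector.
        n_particles, iters = 30, 200
        pos = rng.normal(scale=0.5, size=(n_particles, n_w))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([mse(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, 1))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([mse(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print("in-sample MSE of PSO-trained network:", mse(gbest))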

  10. Sensor Data Qualification Technique Applied to Gas Turbine Engines

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Simon, Donald L.

    2013-01-01

    This paper applies a previously developed sensor data qualification technique to a commercial aircraft engine simulation known as the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k). The sensor data qualification technique is designed to detect, isolate, and accommodate faulty sensor measurements. It features sensor networks, which group various sensors together and relies on an empirically derived analytical model to relate the sensor measurements. Relationships between all member sensors of the network are analyzed to detect and isolate any faulty sensor within the network.
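
    As a rough illustration of the residual-based idea described above (and not the NASA C-MAPSS40k implementation), the following sketch fits an empirical linear model that predicts each member sensor of a network from the others using healthy data, and then flags the sensor with the largest normalized residual when a new sample violates those relationships. All sensor values and thresholds are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)

        # Healthy training data: four correlated "engine" sensors (arbitrary units).
        n = 500
        base = rng.normal(size=n)
        healthy = np.column_stack([
            2.0 * base + rng.normal(scale=0.05, size=n),
            -1.5 * base + rng.normal(scale=0.05, size=n),
            0.7 * base + rng.normal(scale=0.05, size=n),
            3.1 * base + rng.normal(scale=0.05, size=n),
        ])

        # Empirical model: predict each sensor from the other members of the network
        # by least squares; store coefficients and a residual threshold per sensor.
        models, thresholds = [], []
        for j in range(healthy.shape[1]):
            others = np.delete(healthy, j, axis=1)
            A = np.column_stack([others, np.ones(n)])
            coef, *_ = np.linalg.lstsq(A, healthy[:, j], rcond=None)
            resid = healthy[:, j] - A @ coef
            models.append(coef)
            thresholds.append(5.0 * resid.std())   # crude 5-sigma fault threshold

        def isolate_fault(sample):
            """Flag the sensor with the largest normalized residual, if any exceeds 1."""
            ratios = []
            for j, coef in enumerate(models):
                others = np.delete(sample, j)
                pred = np.append(others, 1.0) @ coef
                ratios.append(abs(sample[j] - pred) / thresholds[j])
            worst = int(np.argmax(ratios))
            return worst if ratios[worst] > 1.0 else None

        # A new sample with a bias fault injected on sensor 2.
        sample = np.array([2.0, -1.5, 0.7 + 1.0, 3.1])
        print("suspected faulty sensor:", isolate_fault(sample))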

  11. Optical remote sensing and correlation of office equipment functional state and stress levels via power quality disturbances inefficiencies

    NASA Astrophysics Data System (ADS)

    Sternberg, Oren; Bednarski, Valerie R.; Perez, Israel; Wheeland, Sara; Rockway, John D.

    2016-09-01

    Non-invasive optical techniques for the remote sensing of power quality disturbances (PQD) are part of an emerging technology field typically dominated by radio frequency (RF) and invasive techniques. Algorithms and methods to analyze and address PQD, such as probabilistic neural networks and fully informed particle swarms, have been explored in industry and academia. Such methods are tuned to work with RF equipment and electronics in existing power grids. As both commercial and defense assets are heavily power-dependent, understanding electrical transients and failure events using non-invasive detection techniques is crucial. In this paper we correlate empirical power quality models with the observed optical response. We also empirically demonstrate a first-order approach to mapping household, office and commercial equipment PQD to user functions and stress levels. We employ a physics-based image and signal processing approach, using measured non-invasive (remote sensing) techniques to detect the base frequency associated with the power source and to map it to the various PQD on a calibrated source.

  12. The effects of missing data on global ozone estimates

    NASA Technical Reports Server (NTRS)

    Drewry, J. W.; Robbins, J. L.

    1981-01-01

    The effects of missing data and model truncation on estimates of the global mean, zonal distribution, and global distribution of ozone are considered. It is shown that missing data can introduce biased estimates with errors that are not accounted for in the accuracy calculations of empirical modeling techniques. Data-fill techniques are introduced and used for evaluating error bounds and constraining the estimate in areas of sparse and missing data. It is found that the accuracy of the global mean estimate is more dependent on data distribution than model size. Zonal features can be accurately described by 7th order models over regions of adequate data distribution. Data variance accounted for by higher order models appears to represent climatological features of columnar ozone rather than pure error. Data-fill techniques can prevent artificial feature generation in regions of sparse or missing data without degrading high order estimates over dense data regions.
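
    To make the data-fill idea concrete, the sketch below (synthetic numbers only, not the study's data set) fits a 7th-order Legendre expansion in the sine of latitude to zonal-mean ozone that is missing near one pole, once using the sparse data alone and once after padding the gap with a simple climatology, and reports the error of each fit at the unobserved latitudes.

        import numpy as np
        from numpy.polynomial import legendre

        rng = np.random.default_rng(2)

        # Synthetic "true" zonal-mean total ozone (Dobson units) versus latitude.
        lat = np.linspace(-90, 90, 181)
        x = np.sin(np.radians(lat))
        true = 300 + 60 * x**2 - 40 * x**3            # crude climatological shape

        # Observations only where data exist (e.g., no winter-pole coverage).
        observed_mask = (lat > -55) & (lat < 80)
        obs = true[observed_mask] + rng.normal(scale=5.0, size=observed_mask.sum())

        # (a) Naive 7th-order fit to the sparse data only: unconstrained outside coverage.
        coef_sparse = legendre.legfit(x[observed_mask], obs, deg=7)

        # (b) Data-fill: pad the gap with a zonally symmetric climatology before fitting,
        #     which constrains the high-order fit in the unobserved region.
        climatology = 300 + 60 * x**2
        filled = np.where(observed_mask, np.interp(lat, lat[observed_mask], obs), climatology)
        coef_filled = legendre.legfit(x, filled, deg=7)

        for name, c in [("sparse-only", coef_sparse), ("data-filled", coef_filled)]:
            fit = legendre.legval(x, c)
            err = np.max(np.abs(fit[~observed_mask] - true[~observed_mask]))
            print(f"{name:12s} max error at unobserved latitudes: {err:.1f} DU")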

  13. Survey of current situation in radiation belt modeling

    NASA Technical Reports Server (NTRS)

    Fung, Shing F.

    2004-01-01

    The study of Earth's radiation belts is one of the oldest subjects in space physics. Despite the tremendous progress made in the last four decades, we still lack a complete understanding of the radiation belts in terms of their configurations, dynamics, and detailed physical accounts of their sources and sinks. The static nature of early empirical trapped radiation models, for example the NASA AP-8 and AE-8 models, renders those models inappropriate for predicting short-term radiation belt behaviors associated with geomagnetic storms and substorms. Due to incomplete data coverage, these models are also inaccurate at low altitudes (e.g., <1000 km) where many robotic and human space flights occur. The availability of radiation data from modern space missions and advancements in physical modeling and data management techniques have now allowed the development of new empirical and physical radiation belt models. In this paper, we will review the status of modern radiation belt modeling. Published by Elsevier Ltd on behalf of COSPAR.

  14. A novel hybrid model for air quality index forecasting based on two-phase decomposition technique and modified extreme learning machine.

    PubMed

    Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier

    2017-02-15

    The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, the complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs which would otherwise increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high-frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of the AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected in Beijing and Shanghai, China, are taken as test cases to conduct the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models for its higher forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
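
    The ELM building block that each decomposed component is forecast with can be sketched in a few lines: the hidden-layer weights are drawn at random and fixed, and only the output weights are solved in closed form by least squares. The sketch below is a generic illustration on a toy series; the CEEMD/VMD decomposition and the DE tuning of the random hidden weights described above are omitted.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy stand-in for one decomposed AQI component: lagged values -> next value.
        series = np.sin(0.05 * np.arange(400)) + 0.2 * rng.standard_normal(400)
        lags = 6
        X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
        y = series[lags:]
        X_train, X_test = X[:300], X[300:]
        y_train, y_test = y[:300], y[300:]

        class ELM:
            """Single-hidden-layer extreme learning machine: hidden weights are random
            and fixed, output weights are solved in closed form by least squares."""
            def __init__(self, n_hidden=50, rng=rng):
                self.n_hidden = n_hidden
                self.rng = rng

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)
                self.beta = np.linalg.pinv(H) @ y
                return self

            def predict(self, X):
                return np.tanh(X @ self.W + self.b) @ self.beta

        model = ELM(n_hidden=50).fit(X_train, y_train)
        rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
        print("test RMSE for one component:", round(float(rmse), 4))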

  15. A study of methods to predict and measure the transmission of sound through the walls of light aircraft

    NASA Technical Reports Server (NTRS)

    Bernhard, R. J.; Bolton, J. S.

    1988-01-01

    The objectives are: measurement of dynamic properties of acoustical foams and incorporation of these properties in models governing three-dimensional wave propagation in foams; tests to measure sound transmission paths in the HP137 Jetstream 3; and formulation of a finite element energy model. In addition, the effort to develop a numerical/empirical noise source identification technique was completed. The investigation of a design optimization technique for active noise control was also completed. Monthly progress reports which detail the progress made toward each of the objectives are summarized.

  16. Prediction of pressure drop in fluid tuned mounts using analytical and computational techniques

    NASA Technical Reports Server (NTRS)

    Lasher, William C.; Khalilollahi, Amir; Mischler, John; Uhric, Tom

    1993-01-01

    A simplified model for predicting pressure drop in fluid tuned isolator mounts was developed. The model is based on an exact solution to the Navier-Stokes equations and was made more general through the use of empirical coefficients. The values of these coefficients were determined by numerical simulation of the flow using the commercial computational fluid dynamics (CFD) package FIDAP.

  17. DYNARIP: A technique for regional forest inventory projection and policy analysis

    Treesearch

    William A. Bechtold

    1984-01-01

    DYNARIP is a policy-oriented model capable of tracking all of the treatments and disturbances experienced by the forest resources of an entire State or regional area. It can also isolate the impact of any one of the 27 man-caused or natural disturbances (including natural succession and forest land-base changes). The model is driven by empirical rates of change as...

  18. GIS-based analysis and modelling with empirical and remotely-sensed data on coastline advance and retreat

    NASA Astrophysics Data System (ADS)

    Ahmad, Sajid Rashid

    With the understanding that far more research remains to be done on the development and use of innovative and functional geospatial techniques and procedures to investigate coastline changes this thesis focussed on the integration of remote sensing, geographical information systems (GIS) and modelling techniques to provide meaningful insights on the spatial and temporal dynamics of coastline changes. One of the unique strengths of this research was the parameterization of the GIS with long-term empirical and remote sensing data. Annual empirical data from 1941--2007 were analyzed by the GIS, and then modelled with statistical techniques. Data were also extracted from Landsat TM and ETM+ images. The band ratio method was used to extract the coastlines. Topographic maps were also used to extract digital map data. All data incorporated into ArcGIS 9.2 were analyzed with various modules, including Spatial Analyst, 3D Analyst, and Triangulated Irregular Networks. The Digital Shoreline Analysis System was used to analyze and predict rates of coastline change. GIS results showed the spatial locations along the coast that will either advance or retreat over time. The linear regression results highlighted temporal changes which are likely to occur along the coastline. Box-Jenkins modelling procedures were utilized to determine statistical models which best described the time series (1941--2007) of coastline change data. After several iterations and goodness-of-fit tests, second-order spatial cyclic autoregressive models, first-order autoregressive models and autoregressive moving average models were identified as being appropriate for describing the deterministic and random processes operating in Guyana's coastal system. The models highlighted not only cyclical patterns in advance and retreat of the coastline, but also the existence of short and long-term memory processes. Long-term memory processes could be associated with mudshoal propagation and stabilization while short-term memory processes were indicative of transitory hydrodynamic and other processes. An innovative framework for a spatio-temporal information-based system (STIBS) was developed. STIBS incorporated diverse datasets within a GIS, dynamic computer-based simulation models, and a spatial information query and graphical subsystem. Tests of the STIBS proved that it could be used to simulate and visualize temporal variability in shifting morphological states of the coastline.

  19. Reflectance spectroscopy: quantitative analysis techniques for remote sensing applications.

    USGS Publications Warehouse

    Clark, R.N.; Roush, T.L.

    1984-01-01

    Several methods for the analysis of remotely sensed reflectance data are compared, including empirical methods and scattering theories, both of which are important for solving remote sensing problems. The concept of the photon mean path length and the implications for use in modeling reflectance spectra are presented.-from Authors

  20. Optimization Techniques for College Financial Aid Managers

    ERIC Educational Resources Information Center

    Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.

    2010-01-01

    In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…

  1. Computational techniques in tribology and material science at the atomic level

    NASA Technical Reports Server (NTRS)

    Ferrante, J.; Bozzolo, G. H.

    1992-01-01

    Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical and their limitations are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, application of the methods to calculation of adhesion and friction are presented.

  2. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two observer and measurement error only problem.
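
    A numerical illustration of the contrast drawn above, under assumptions of my own (a linear measurement model, an optimistic weighting, and an unmodeled bias), is sketched below: the formal weighted-least-squares covariance (H^T W H)^-1 is compared with a residual-based, sandwich-style empirical covariance that absorbs whatever errors are actually present. This is an illustrative construction, not necessarily the exact formulation proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        # Linear measurement model y = H x + e for a 2-element state.
        n_obs = 200
        H = np.column_stack([np.ones(n_obs), np.linspace(0, 10, n_obs)])
        x_true = np.array([1.0, -0.5])

        # Assumed (optimistic) noise sigma used to build W versus the actual noise,
        # plus an unmodeled bias: the situation where the formal covariance misleads.
        sigma_assumed, sigma_actual = 0.1, 0.3
        e = rng.normal(scale=sigma_actual, size=n_obs) + 0.05 * np.linspace(0, 10, n_obs)
        y = H @ x_true + e

        W = np.eye(n_obs) / sigma_assumed**2
        N = H.T @ W @ H                      # normal matrix
        x_hat = np.linalg.solve(N, H.T @ W @ y)

        P_formal = np.linalg.inv(N)          # theoretical WLS state error covariance

        # Residual-based "sandwich" estimate: carries the effect of whatever error
        # sources are actually present, modeled or not.
        r = y - H @ x_hat
        P_empirical = np.linalg.inv(N) @ H.T @ W @ np.diag(r**2) @ W @ H @ np.linalg.inv(N)

        print("formal    sigma of state:", np.sqrt(np.diag(P_formal)))
        print("empirical sigma of state:", np.sqrt(np.diag(P_empirical)))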

  3. Detection and localization of change points in temporal networks with the aid of stochastic block models

    NASA Astrophysics Data System (ADS)

    De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan

    2016-11-01

    A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, including empirical and synthetic ones. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and the recall of the results of the change points, we find that the method based on a degree-corrected SBM has better recall properties than other dedicated methods, especially for sparse networks and smaller sliding time window widths.

  4. Rapid correction of electron microprobe data for multicomponent metallic systems

    NASA Technical Reports Server (NTRS)

    Gupta, K. P.; Sivakumar, R.

    1973-01-01

    This paper describes an empirical relation for the correction of electron microprobe data for multicomponent metallic systems. It evaluates the empirical correction parameter, a, for each element in a binary alloy system using a modification of Colby's MAGIC III computer program and outlines a simple and quick way of correcting the probe data. This technique has been tested on a number of multicomponent metallic systems, and the agreement with results obtained using theoretical expressions is found to be excellent. Limitations and suitability of this relation are discussed, and a model calculation is also presented in the Appendix.

  5. Multispectral system analysis through modeling and simulation

    NASA Technical Reports Server (NTRS)

    Malila, W. A.; Gleason, J. M.; Cicone, R. C.

    1977-01-01

    The design and development of multispectral remote sensor systems and associated information extraction techniques should be optimized under the physical and economic constraints encountered and yet be effective over a wide range of scene and environmental conditions. Direct measurement of the full range of conditions to be encountered can be difficult, time consuming, and costly. Simulation of multispectral data by modeling scene, atmosphere, sensor, and data classifier characteristics is set forth as a viable alternative, particularly when coupled with limited sets of empirical measurements. A multispectral system modeling capability is described. Use of the model is illustrated for several applications - interpretation of remotely sensed data from agricultural and forest scenes, evaluating atmospheric effects in Landsat data, examining system design and operational configuration, and development of information extraction techniques.

  7. Empirical radio propagation model for DTV applied to non-homogeneous paths and different climates using machine learning techniques.

    PubMed

    Gomes, Igor Ruiz; Gomes, Cristiane Ruiz; Gomes, Herminio Simões; Cavalcante, Gervásio Protásio Dos Santos

    2018-01-01

    The establishment and improvement of transmission systems rely on models that take into account, among other factors, the geographical features of the region, as these can lead to signal degradation. This is particularly important in Brazil, where there is a great diversity of scenery and climates. This article proposes an outdoor empirical radio propagation model for the Ultra High Frequency (UHF) band that estimates received power values and can be applied to non-homogeneous paths and different climates, the latter being an innovation for the UHF band. Different artificial intelligence techniques were chosen on a theoretical and computational basis and made it possible to introduce, organize and describe quantitative and qualitative data quickly and efficiently, and thus determine the received power in a wide range of settings and climates. The proposed model was applied to a city in the Amazon region with heterogeneous paths, wooded urban areas and areas of freshwater, among other features. Measurement campaigns were conducted to obtain signal data from two digital TV stations in the metropolitan area of the city of Belém, in the State of Pará, in order to design, compare and validate the model. The results are consistent, since the model shows a clear difference between the two seasons of the studied year and small RMS errors in all the cases studied.

  9. Refined discrete and empirical horizontal gradients in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Landskron, Daniel; Böhm, Johannes

    2018-02-01

    Missing or incorrect consideration of azimuthal asymmetry of troposphere delays is a considerable error source in space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). So-called horizontal troposphere gradients are generally utilized for modeling such azimuthal variations and are particularly required for observations at low elevation angles. Apart from estimating the gradients within the data analysis, which has become common practice in space geodetic techniques, there is also the possibility to determine the gradients beforehand from data sources other than the actual observations. Using ray-tracing through Numerical Weather Models (NWMs), we determined discrete gradient values referred to as GRAD for VLBI observations, based on the standard gradient model by Chen and Herring (J Geophys Res 102(B9):20489-20502, 1997. https://doi.org/10.1029/97JB01739) and also for new, higher-order gradient models. These gradients are produced on the same data basis as the Vienna Mapping Functions 3 (VMF3) (Landskron and Böhm in J Geod, 2017. https://doi.org/10.1007/s00190-017-1066-2), so they can also be regarded as the VMF3 gradients, as they are fully consistent with each other. From VLBI analyses with the Vienna VLBI and Satellite Software (VieVS), it becomes evident that baseline length repeatabilities (BLRs) are improved on average by 5% when using the a priori gradients GRAD instead of estimating the gradients. The reason for this improvement is that gradient estimation yields poor results for VLBI sessions with a small number of observations, while the GRAD a priori gradients are unaffected by this. We also developed a new empirical gradient model applicable for any time and location on Earth, which is included in the Global Pressure and Temperature 3 (GPT3) model. Although it is able to describe only the systematic component of azimuthal asymmetry and no short-term variations at all, even these empirical a priori gradients slightly reduce (improve) the BLRs with respect to the estimation of gradients. In general, this paper shows that a priori horizontal gradients are more important for VLBI analysis than previously assumed, as both the discrete model GRAD and the empirical model GPT3 are indeed able to refine and improve the results.
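
    The standard gradient formulation by Chen and Herring (1997), on which the GRAD products above are based, adds an azimuth-dependent delay term scaled by a dedicated gradient mapping function. A minimal sketch follows; the constant C = 0.0032 is the commonly quoted hydrostatic value, and the gradient magnitudes are illustrative.

        import math

        def gradient_delay(elev_rad, az_rad, g_north, g_east, C=0.0032):
            """Azimuth-dependent troposphere delay contribution after Chen and
            Herring (1997): m_g(e) * (G_N cos A + G_E sin A), with the gradient
            mapping function m_g(e) = 1 / (sin(e) tan(e) + C)."""
            m_g = 1.0 / (math.sin(elev_rad) * math.tan(elev_rad) + C)
            return m_g * (g_north * math.cos(az_rad) + g_east * math.sin(az_rad))

        # Illustrative gradients of 0.5 mm north and -0.3 mm east at 5 degrees elevation.
        for az in (0, 90, 180, 270):
            d = gradient_delay(math.radians(5.0), math.radians(az),
                               g_north=0.5e-3, g_east=-0.3e-3)
            print(f"azimuth {az:3d} deg: gradient delay {d * 1000:+.1f} mm")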

  10. Regime switching model for financial data: Empirical risk analysis

    NASA Astrophysics Data System (ADS)

    Salhi, Khaled; Deaconu, Madalina; Lejay, Antoine; Champagnat, Nicolas; Navet, Nicolas

    2016-11-01

    This paper constructs a regime switching model for univariate Value-at-Risk estimation. Extreme value theory (EVT) and hidden Markov models (HMM) are combined to estimate a hybrid model that takes volatility clustering into account. In the first stage, the HMM is used to classify data into crisis and steady periods, while in the second stage, EVT is applied to the previously classified data to remove the delay between regime switches and their detection. This new model is applied to prices of numerous stocks traded on NYSE Euronext Paris over the period 2001-2011. We focus on daily returns, for which calibration has to be done on a small dataset. The relative performance of the regime switching model is benchmarked against other well-known modeling techniques, such as stable, power-law and GARCH models. The empirical results show that the regime switching model increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. This suggests that the regime switching model is a robust forecasting variant of the power-law model while remaining practical to implement for VaR measurement.

  11. Boundary methods for mode estimation

    NASA Astrophysics Data System (ADS)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to those techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
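
    The MOG reference approach mentioned above (fit mixtures of increasing order and stop on the Akaike Information Criterion) can be sketched with scikit-learn as follows; the data are synthetic, and this is the comparison baseline rather than the Boundary Method itself.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)

        # Synthetic 2-D data with three well-separated modes.
        data = np.vstack([
            rng.normal(loc=[0, 0], scale=0.5, size=(200, 2)),
            rng.normal(loc=[4, 1], scale=0.5, size=(200, 2)),
            rng.normal(loc=[1, 5], scale=0.5, size=(200, 2)),
        ])

        # Fit mixtures of increasing order and keep the AIC for each.
        aic = []
        for k in range(1, 8):
            gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(data)
            aic.append(gmm.aic(data))

        best_k = int(np.argmin(aic)) + 1
        print("AIC by model order:", [round(a, 1) for a in aic])
        print("estimated number of modes:", best_k)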

  12. GRAM-86 - FOUR DIMENSIONAL GLOBAL REFERENCE ATMOSPHERE MODEL

    NASA Technical Reports Server (NTRS)

    Johnson, D.

    1994-01-01

    The Four-D Global Reference Atmosphere program was developed from an empirical atmospheric model which generates values for pressure, density, temperature, and winds from surface level to orbital altitudes. This program can be used to generate altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere. The program was developed for design applications in the Space Shuttle program, such as the simulation of external tank re-entry trajectories. Other potential applications would be global circulation and diffusion studies, and generating profiles for comparison with other atmospheric measurement techniques, such as satellite-measured temperature profiles and infrasonic measurement of wind profiles. The program is an amalgamation of two empirical atmospheric models for the low (below 25 km) and the high (above 90 km) atmosphere, with a newly developed latitude-longitude dependent model for the middle atmosphere. The high atmospheric region above 115km is simulated entirely by the Jacchia (1970) model. The Jacchia program sections are in separate subroutines so that other thermospheric/exospheric models could easily be adapted if required for special applications. The atmospheric region between 30km and 90km is simulated by a latitude-longitude dependent empirical model modification of the latitude dependent empirical model of Groves (1971). Between 90km and 115km a smooth transition between the modified Groves values and the Jacchia values is accomplished by a fairing technique. Below 25km the atmospheric parameters are computed by the 4-D worldwide atmospheric model of Spiegler and Fowler (1972). This data set is not included. Between 25km and 30km an interpolation scheme is used between the 4-D results and the modified Groves values. The output parameters consist of components for: (1) latitude, longitude, and altitude dependent monthly and annual means, (2) quasi-biennial oscillations (QBO), and (3) random perturbations to partially simulate the variability due to synoptic, diurnal, planetary wave, and gravity wave variations. Quasi-biennial and random variation perturbations are computed from parameters determined by various empirical studies and are added to the monthly mean values. The UNIVAC version of GRAM is written in UNIVAC FORTRAN and has been implemented on a UNIVAC 1110 under control of EXEC 8 with a central memory requirement of approximately 30K of 36 bit words. The GRAM program was developed in 1976 and GRAM-86 was released in 1986. The monthly data files were last updated in 1986. The DEC VAX version of GRAM is written in FORTRAN 77 and has been implemented on a DEC VAX 11/780 under control of VMS 4.X with a central memory requirement of approximately 100K of 8 bit bytes. The GRAM program was originally developed in 1976 and later converted to the VAX in 1986 (GRAM-86). The monthly data files were last updated in 1986.

  13. Identification of Multiple Nonreturner Profiles to Inform the Development of Targeted College Retention Interventions

    ERIC Educational Resources Information Center

    Mattern, Krista D.; Marini, Jessica P.; Shaw, Emily J.

    2015-01-01

    Throughout the college retention literature, there is a recurring theme that students leave college for a variety of reasons making retention a difficult phenomenon to model. In the current study, cluster analysis techniques were employed to investigate whether multiple empirically based profiles of nonreturning students existed to more fully…

  14. An empirical propellant response function for combustion stability predictions

    NASA Technical Reports Server (NTRS)

    Hessler, R. O.

    1980-01-01

    An empirical response function model was developed for ammonium perchlorate propellants to supplant T-burner testing at the preliminary design stage. The model was developed by fitting a limited T-burner data base, in terms of oxidizer size and concentration, to an analytical two parameter response function expression. Multiple peaks are predicted, but the primary effect is of a single peak for most formulations, with notable bulges for the various AP size fractions. The model was extended to velocity coupling with the assumption that dynamic response was controlled primarily by the solid phase described by the two parameter model. The magnitude of velocity coupling was then scaled using an erosive burning law. Routine use of the model for stability predictions on a number of propulsion units indicates that the model tends to overpredict propellant response. It is concluded that the model represents a generally conservative prediction tool, suited especially for the preliminary design stage when T-burner data may not be readily available. The model work included development of a rigorous summation technique for pseudopropellant properties and of a concept for modeling ordered packing of particulates.

  15. Detonation Properties Measurements for Inorganic Explosives

    NASA Astrophysics Data System (ADS)

    Morgan, Brent A.; Lopez, Angel

    2005-03-01

    Many commonly available explosive materials have never been quantitatively or theoretically characterized in a manner suitable for use in analytical models. This includes inorganic explosive materials used in spacecraft ordnance, such as zirconium potassium perchlorate (ZPP). Lack of empirical information about these materials impedes the development of computational techniques. We have applied high fidelity measurement techniques to experimentally determine the pressure and velocity characteristics of ZPP, a previously uncharacterized explosive material. Advances in measurement technology now permit the use of very small quantities of material, thus yielding a significant reduction in the cost of conducting these experiments. An empirical determination of the explosive behavior of ZPP derived a Hugoniot for ZPP with an approximate particle velocity (uo) of 1.0 km/s. This result compares favorably with the numerical calculations from the CHEETAH thermochemical code, which predicts uo of approximately 1.2 km/s under ideal conditions.

  16. Airborne electromagnetic bathymetry investigations in Port Lincoln, South Australia - comparison with an equivalent floating transient electromagnetic system

    NASA Astrophysics Data System (ADS)

    Vrbancich, Julian

    2011-09-01

    Helicopter time-domain airborne electromagnetic (AEM) methodology is being investigated as a reconnaissance technique for bathymetric mapping in shallow coastal waters, especially in areas affected by water turbidity where light detection and ranging (LIDAR) and hyperspectral techniques may be limited. Previous studies in Port Lincoln, South Australia, used a floating AEM time-domain system to provide an upper limit to the expected bathymetric accuracy based on current technology for AEM systems. The survey lines traced by the towed floating system were also flown with an airborne system using the same transmitter and receiver electronic instrumentation, on two separate occasions. On the second occasion, significant improvements had been made to the instrumentation to reduce the system self-response at early times. A comparison of the interpreted water depths obtained from the airborne and floating systems is presented, showing the degradation in bathymetric accuracy obtained from the airborne data. An empirical data correction method based on modelled and observed EM responses over deep seawater (i.e. a quasi half-space response) at varying survey altitudes, combined with known seawater conductivity measured during the survey, can lead to significant improvements in interpreted water depths and serves as a useful method for checking system calibration. Another empirical data correction method based on observed and modelled EM responses in shallow water was shown to lead to similar improvements in interpreted water depths; however, this procedure is notably inferior to the quasi half-space response because more parameters need to be assumed in order to compute the modelled EM response. A comparison between the results of the two airborne surveys in Port Lincoln shows that uncorrected data obtained from the second airborne survey gives good agreement with known water depths without the need to apply any empirical corrections to the data. This result significantly decreases the data-processing time thereby enabling the AEM method to serve as a rapid reconnaissance technique for bathymetric mapping.

  17. Modeling of the strong ground motion of 25th April 2015 Nepal earthquake using modified semi-empirical technique

    NASA Astrophysics Data System (ADS)

    Lal, Sohan; Joshi, A.; Sandeep; Tomer, Monu; Kumar, Parveen; Kuo, Chun-Hsiang; Lin, Che-Min; Wen, Kuo-Liang; Sharma, M. L.

    2018-05-01

    On 25th April 2015, a damaging earthquake of moment magnitude 7.9 occurred in Nepal. The earthquake was recorded by accelerographs installed in the Kumaon region of the Himalayan state of Uttarakhand. The recording stations in the Kumaon region lie about 420-515 km from the epicenter of the earthquake. A modified semi-empirical technique for modeling finite faults has been used in this paper to simulate the strong ground motion at these stations. Source parameters of a Nepal aftershock have also been calculated using the Brune model in the present study and are used in the modeling of the Nepal main shock. The values of seismic moment and stress drop obtained for the aftershock from the Brune model are 8.26 × 10^25 dyn cm and 10.48 bar, respectively. The simulated earthquake time series were compared with the observed records of the earthquake. The comparison of the full waveforms and their response spectra has been used to finalize the rupture parameters and the rupture location. Based on these comparisons, the rupture propagated in the NE-SW direction from the hypocenter with a rupture velocity of 3.0 km/s, initiating at a depth of 12 km about 80 km NW of Kathmandu.
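
    For reference, the Brune (1970) relations connect the quoted seismic moment and stress drop to a source radius and corner frequency. The sketch below plugs in the values from the abstract under an assumed shear-wave velocity (not given in the record), so the derived radius and corner frequency are illustrative.

        import math

        # Brune (1970) stress-drop relations, using the moment and stress drop quoted
        # above for the aftershock; the shear-wave velocity is an assumed value.
        M0 = 8.26e25 * 1e-7          # seismic moment: dyn*cm -> N*m
        stress_drop = 10.48 * 1e5    # bar -> Pa
        beta = 3500.0                # assumed shear-wave velocity, m/s

        # Delta_sigma = 7*M0 / (16*r^3)  ->  source radius implied by the two values
        r = (7.0 * M0 / (16.0 * stress_drop)) ** (1.0 / 3.0)

        # Brune corner frequency: fc = 2.34 * beta / (2*pi*r)
        fc = 2.34 * beta / (2.0 * math.pi * r)

        Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)   # moment magnitude for reference
        print(f"source radius ~ {r / 1e3:.1f} km, corner frequency ~ {fc:.3f} Hz, Mw ~ {Mw:.1f}")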

  18. Improved Model Fitting for the Empirical Green's Function Approach Using Hierarchical Models

    NASA Astrophysics Data System (ADS)

    Van Houtte, Chris; Denolle, Marine

    2018-04-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study examines a variety of model-fitting methods and shows that the choice of method can explain some of the discrepancy. The preferred method is Bayesian hierarchical modeling, which can reduce bias, better quantify uncertainties, and allow additional effects to be resolved. Two case study earthquakes are examined, the 2016 MW7.1 Kumamoto, Japan earthquake and a MW5.3 aftershock of the 2016 MW7.8 Kaikōura earthquake. By using hierarchical models, the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be retrieved without overfitting the data. Other methods commonly used to calculate corner frequencies may give substantial biases. In particular, if fc was calculated for the Kumamoto earthquake using an ω-square model, the obtained fc could be twice as large as a realistic value.
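
    For contrast with the hierarchical approach, the conventional single-spectrum fit that the study argues can bias fc can be sketched as a least-squares fit of the omega-square (generalized Brune) model S(f) = Omega0 / (1 + (f/fc)^n) to a displacement spectrum, here in log space on synthetic data.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(6)

        def omega_square(f, omega0, fc, n):
            """Generalized Brune spectrum: flat at Omega0 below fc, falling off as f^-n above."""
            return omega0 / (1.0 + (f / fc) ** n)

        # Synthetic displacement amplitude spectrum with multiplicative noise.
        f = np.logspace(-1, 1.3, 120)                    # 0.1 to ~20 Hz
        true = omega_square(f, omega0=1.0, fc=1.5, n=2.0)
        obs = true * np.exp(0.15 * rng.standard_normal(f.size))

        # Fit in log space so high- and low-frequency points are weighted comparably.
        def log_model(f, log_omega0, fc, n):
            return np.log(omega_square(f, np.exp(log_omega0), fc, n))

        popt, _ = curve_fit(log_model, f, np.log(obs), p0=[0.0, 1.0, 2.0],
                            bounds=([-5.0, 0.01, 0.5], [5.0, 20.0, 4.0]))
        print("recovered Omega0, fc, n:", np.exp(popt[0]), popt[1], popt[2])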

  19. Nonlinear Modeling of Joint Dominated Structures

    NASA Technical Reports Server (NTRS)

    Chapman, J. M.

    1990-01-01

    The development and verification of an accurate structural model of the nonlinear joint-dominated NASA Langley Mini-Mast truss are described. The approach is to characterize the structural behavior of the Mini-Mast joints and struts using a test configuration that can directly measure the struts' overall stiffness and damping properties, incorporate these data into the structural model using the residual force technique, and then compare the predicted response with empirical data taken by NASA/LaRC during the modal survey tests of the Mini-Mast. A new testing technique, referred to as 'link' testing, was developed and used to test prototype struts of the Mini-Mast. Appreciable nonlinearities, including free-play and hysteresis, were demonstrated. Since static and dynamic tests performed on the Mini-Mast also exhibited behavior consistent with joints having free-play and hysteresis, nonlinear models of the Mini-Mast were constructed and analyzed. The Residual Force Technique was used to analyze the nonlinear model of the Mini-Mast with joint free-play and hysteresis.

  20. Acoustic classification of zooplankton

    NASA Astrophysics Data System (ADS)

    Martin Traykovski, Linda V.

    1998-11-01

    Work on the forward problem in zooplankton bioacoustics has resulted in the identification of three categories of acoustic scatterers: elastic-shelled (e.g. pteropods), fluid-like (e.g. euphausiids), and gas-bearing (e.g. siphonophores). The relationship between backscattered energy and animal biomass has been shown to vary by a factor of ~19,000 across these categories, so that to make accurate estimates of zooplankton biomass from acoustic backscatter measurements of the ocean, the acoustic characteristics of the species of interest must be well-understood. This thesis describes the development of both feature based and model based classification techniques to invert broadband acoustic echoes from individual zooplankton for scatterer type, as well as for particular parameters such as animal orientation. The feature based Empirical Orthogonal Function Classifier (EOFC) discriminates scatterer types by identifying characteristic modes of variability in the echo spectra, exploiting only the inherent characteristic structure of the acoustic signatures. The model based Model Parameterisation Classifier (MPC) classifies based on correlation of observed echo spectra with simplified parameterisations of theoretical scattering models for the three classes. The Covariance Mean Variance Classifiers (CMVC) are a set of advanced model based techniques which exploit the full complexity of the theoretical models by searching the entire physical model parameter space without employing simplifying parameterisations. Three different CMVC algorithms were developed: the Integrated Score Classifier (ISC), the Pairwise Score Classifier (PSC) and the Bayesian Probability Classifier (BPC); these classifiers assign observations to a class based on similarities in covariance, mean, and variance, while accounting for model ambiguity and validity. These feature based and model based inversion techniques were successfully applied to several thousand echoes acquired from broadband (~350 kHz-750 kHz) insonifications of live zooplankton collected on Georges Bank and the Gulf of Maine to determine scatterer class. CMVC techniques were also applied to echoes from fluid-like zooplankton (Antarctic krill) to invert for angle of orientation using generic and animal-specific theoretical and empirical models. Application of these inversion techniques in situ will allow correct apportionment of backscattered energy to animal biomass, significantly improving estimates of zooplankton biomass based on acoustic surveys. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  1. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.

  2. Cleaning up with genomics: applying molecular biology to bioremediation.

    PubMed

    Lovley, Derek R

    2003-10-01

    Bioremediation has the potential to restore contaminated environments inexpensively yet effectively, but a lack of information about the factors controlling the growth and metabolism of microorganisms in polluted environments often limits its implementation. However, rapid advances in the understanding of bioremediation are on the horizon. Researchers now have the ability to culture microorganisms that are important in bioremediation and can evaluate their physiology using a combination of genome-enabled experimental and modelling techniques. In addition, new environmental genomic techniques offer the possibility for similar studies on as-yet-uncultured organisms. Combining models that can predict the activity of microorganisms that are involved in bioremediation with existing geochemical and hydrological models should transform bioremediation from a largely empirical practice into a science.

  3. GRAM 88 - 4D GLOBAL REFERENCE ATMOSPHERE MODEL-1988

    NASA Technical Reports Server (NTRS)

    Johnson, D. L.

    1994-01-01

    The Four-D Global Reference Atmosphere program was developed from an empirical atmospheric model which generates values for pressure, density, temperature, and winds from surface level to orbital altitudes. This program can generate altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere. The program was developed for design applications in the Space Shuttle program, such as the simulation of external tank re-entry trajectories. Other potential applications are global circulation and diffusion studies; also the generation of profiles for comparison with other atmospheric measurement techniques such as satellite-measured temperature profiles and infrasonic measurement of wind profiles. GRAM-88 is the latest version of the software GRAM. The software GRAM-88 contains a number of changes that have improved the model statistics, in particular, the small scale density perturbation statistics. It also corrected a low latitude grid problem as well as the SCIDAT data base. Furthermore, GRAM-88 now uses the U.S. Standard Atmosphere 1976 as a comparison standard rather than the US62 used in other versions. The program is an amalgamation of two empirical atmospheric models for the low (below 25 km) and the high (above 90 km) atmosphere, with a newly developed latitude-longitude dependent model for the middle atmosphere. The Jacchia (1970) model simulates the high atmospheric region above 115km. The Jacchia program sections are in separate subroutines so that other thermospheric/exospheric models could easily be adapted if required for special applications. The improved code eliminated the calculation of geostrophic winds above 125 km altitude from the model. The atmospheric region between 30km and 90km is simulated by a latitude-longitude dependent empirical model modification of the latitude dependent empirical model of Groves (1971). A fairing technique between 90km and 115km accomplished a smooth transition between the modified Groves values and the Jacchia values. Below 25km the atmospheric parameters are computed by the 4-D worldwide atmospheric model of Spiegler and Fowler (1972). This data set is not included. GRAM-88 incorporates a hydrostatic/gas law check in the 0-30 km altitude range to flag and change any bad data points. Between 5km and 30km, an interpolation scheme is used between the 4-D results and the modified Groves values. The output parameters consist of components for: (1) latitude, longitude, and altitude dependent monthly and annual means, (2) quasi-biennial oscillations (QBO), and (3) random perturbations to partially simulate the variability due to synoptic, diurnal, planetary wave, and gravity wave variations. Quasi-biennial and random variation perturbations are computed from parameters determined by various empirical studies and are added to the monthly mean values. The GRAM-88 program is for batch execution on the IBM 3084. It is written in STANDARD FORTRAN 77 under the MVS/XA operating system. The IBM DISPLA graphics routines are necessary for graphical output. The program was developed in 1988.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
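
    As a small numerical illustration of the kind of bounding approach mentioned above (not necessarily the report's own method), the Fréchet-Hoeffding bounds delimit every joint CDF compatible with two fixed marginals when the dependence between the variables is unknown; the marginals below are arbitrary examples.

        import numpy as np
        from scipy import stats

        # Marginal CDFs evaluated on a grid (two example risk variables).
        x = np.linspace(0, 10, 6)
        y = np.linspace(0, 5, 6)
        Fx = stats.lognorm(s=0.5, scale=2.0).cdf(x)      # marginal of X
        Gy = stats.gamma(a=2.0, scale=1.0).cdf(y)        # marginal of Y

        FX, GY = np.meshgrid(Fx, Gy, indexing="ij")

        # Frechet-Hoeffding bounds: any joint CDF H(x, y) with these marginals satisfies
        #   max(F(x) + G(y) - 1, 0)  <=  H(x, y)  <=  min(F(x), G(y))
        lower = np.maximum(FX + GY - 1.0, 0.0)
        upper = np.minimum(FX, GY)
        independent = FX * GY                            # one admissible dependence model

        print("width of the dependence bound at each grid point:")
        print(np.round(upper - lower, 3))
        print("independence lies inside the bounds:",
              bool(np.all((independent >= lower - 1e-12) & (independent <= upper + 1e-12))))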

  5. Evaluation of a simplified gross thrust calculation technique using two prototype F100 turbofan engines in an altitude facility

    NASA Technical Reports Server (NTRS)

    Kurtenbach, F. J.

    1979-01-01

    The technique, which relies on afterburner duct pressure measurements and empirical corrections to an ideal one-dimensional flow analysis to determine thrust, is presented. A comparison of the calculated and facility-measured thrust values is reported. The simplified model is also compared with the engine manufacturer's gas generator model. The evaluation was conducted over a range of Mach numbers from 0.80 to 2.00 and at altitudes from 4020 meters to 15,240 meters. The effects of variations in inlet total temperature from standard day conditions were explored. Engine conditions were varied from those normally scheduled for flight. The technique was found to be accurate to within 2.89 percent (two standard deviations), with accuracy a strong function of afterburner duct pressure difference.

  6. Military Spending and Economic Well-Being in the American States: The Post-Vietnam War Era

    ERIC Educational Resources Information Center

    Borch, Casey; Wallace, Michael

    2010-01-01

    Using growth curve modeling techniques, this research investigates whether military spending improved or worsened the economic well-being of citizens within the American states during the post-Vietnam War period. We empirically test the military Keynesianism claim that military spending improves the economic conditions of citizens through its use…

  7. Comparing timber and lumber from plantation and natural stands of ponderosa pine

    Treesearch

    Eini C. Lowell; Christine L. Todoroki; Ed. Thomas

    2009-01-01

    Data derived from empirical studies, coupled with modeling and simulation techniques, were used to compare tree and product quality from two stands of small-diameter ponderosa pine trees growing in northern California: one plantation, the other natural. The plantation had no management following establishment, and the natural stand had no active management. Fifty trees...

  8. Modeling wildland fire propagation with level set methods

    Treesearch

    V. Mallet; D.E Keyes; F.E. Fendell

    2009-01-01

    Level set methods are versatile and extensible techniques for general front tracking problems, including the practically important problem of predicting the advance of a fire front across expanses of surface vegetation. Given a rule, empirical or otherwise, to specify the rate of advance of an infinitesimal segment of fire front arc normal to itself (i.e., given the...
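
    As a concrete illustration of the front-tracking idea described above (a generic sketch, not the authors' implementation), the level set function phi can be advanced with phi_t + R |grad phi| = 0 using a first-order Godunov upwind scheme on a grid, where R is a prescribed, spatially varying rate of spread.

        import numpy as np

        # Grid and an initial ignition: phi < 0 marks the burned region.
        nx, ny, h = 200, 200, 1.0                 # number of cells and spacing (m)
        X, Y = np.meshgrid(np.arange(nx) * h, np.arange(ny) * h, indexing="ij")
        phi = np.sqrt((X - 100.0) ** 2 + (Y - 100.0) ** 2) - 5.0   # signed distance to a small circle

        # Prescribed rate of spread (m/s), faster in "heavier fuel" on the right half.
        R = 0.5 + 0.5 * (X > 100.0)

        dt = 0.4 * h / R.max()                    # CFL-limited time step
        n_steps = 200
        for _ in range(n_steps):
            # One-sided differences; wrap-around rows/columns from np.roll are zeroed out.
            dmx = (phi - np.roll(phi, 1, axis=0)) / h
            dmx[0, :] = 0.0
            dpx = (np.roll(phi, -1, axis=0) - phi) / h
            dpx[-1, :] = 0.0
            dmy = (phi - np.roll(phi, 1, axis=1)) / h
            dmy[:, 0] = 0.0
            dpy = (np.roll(phi, -1, axis=1) - phi) / h
            dpy[:, -1] = 0.0

            # Godunov upwind gradient magnitude for an outward-moving front (R > 0).
            grad = np.sqrt(np.maximum(dmx, 0.0) ** 2 + np.minimum(dpx, 0.0) ** 2 +
                           np.maximum(dmy, 0.0) ** 2 + np.minimum(dpy, 0.0) ** 2)
            phi = phi - dt * R * grad             # phi_t + R |grad phi| = 0

        print(f"burned fraction after {n_steps * dt:.0f} s: {(phi < 0).mean():.2%}")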

  9. Developing entrepreneurial competencies for successful business model canvas

    NASA Astrophysics Data System (ADS)

    Sundah, D. I. E.; Langi, C.; Maramis, D. R. S.; Tawalujan, L.

    2018-01-01

    We explore the entrepreneurial competencies that contribute to the business model canvas. This research was conducted in the smoked-fish industry in the Province of North Sulawesi, Indonesia. It used a mixed method integrating both quantitative and qualitative approaches in a sequential design. Snowball sampling and a questionnaire were used to collect data from 44 entrepreneurs. Structural equation modeling with the SmartPLS application program was used to analyze these data and determine the effect of entrepreneurial competencies on the business model canvas. We also investigated 3 entrepreneurs who run smoked-fish businesses and analyzed their businesses using the business model canvas. Focus group discussions were used to collect data from 2 groups of entrepreneurs in 2 different locations. The empirical results show that entrepreneurial competencies, which consist of managerial, technical, marketing, financial, and human relations competencies together with the entrepreneur's specific working attitude, have a positive and significant effect on the business model canvas. Additionally, the empirical cases and the discussions with the 2 groups of entrepreneurs support the quantitative results and indicate that human relations competencies have a greater influence on achieving a successful business model canvas.

  10. Bayesian multi-scale smoothing of photon-limited images with applications to astronomy and medicine

    NASA Astrophysics Data System (ADS)

    White, John

    Multi-scale models for smoothing Poisson signals or images have gained much attention over the past decade. A new Bayesian model is developed using the concept of the Chinese restaurant process to find structures in two-dimensional images when performing image reconstruction or smoothing. This new model performs very well when compared to other leading methodologies for the same problem. It is developed and evaluated theoretically and empirically throughout Chapter 2. The newly developed Bayesian model is extended to three-dimensional images in Chapter 3. The third dimension has numerous different applications, such as different energy spectra, another spatial index, or possibly a temporal dimension. Empirically, this method shows promise in reducing error with the use of simulation studies. A further development removes background noise in the image. This removal can further reduce the error and is done using a modeling adjustment and post-processing techniques. These details are given in Chapter 4. Applications to real world problems are given throughout. Photon-based images are common in astronomical imaging due to the collection of different types of energy such as X-Rays. Applications to real astronomical images are given, and these consist of X-ray images from the Chandra X-ray observatory satellite. Diagnostic medicine uses many types of imaging such as magnetic resonance imaging and computed tomography that can also benefit from smoothing techniques such as the one developed here. Reducing the amount of radiation a patient takes will make images more noisy, but this can be mitigated through the use of image smoothing techniques. Both types of images represent the potential real world use for these methods.

  11. Use of advanced modeling techniques to optimize thermal packaging designs.

    PubMed

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed during its validation. Thermal packaging is routinely used by the pharmaceutical industry to provide passive and active temperature control of their thermally sensitive products from manufacture through end use (termed the cold chain). In this study, the authors focus on passive temperature control (passive control does not require any external energy source and is entirely based on specific and/or latent heat of shipper components). As temperature-sensitive pharmaceuticals are being transported over longer distances, cold chain reliability is essential. To achieve reliability, a significant amount of time and resources must be invested in design, test, and production of optimized temperature-controlled packaging solutions. To shorten the cumbersome trial and error approach (design/test/design/test …), computer simulation (virtual prototyping and testing of thermal shippers) is a promising method. Although several companies have attempted to develop such a tool, there has been limited success to date. Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a coupled conductive/convective-based thermal shipper. A modeling technique capable of correctly capturing shipper thermal behavior can be used to develop packaging designs more quickly, reducing up-front costs while also improving shipper performance.

  12. Chemical Explosion Experiments to Improve Nuclear Test Monitoring [Developing a New Paradigm for Nuclear Test Monitoring with the Source Physics Experiments (SPE)

    DOE PAGES

    Snelson, Catherine M.; Abbott, Robert E.; Broome, Scott T.; ...

    2013-07-02

A series of chemical explosions, called the Source Physics Experiments (SPE), is being conducted under the auspices of the U.S. Department of Energy’s National Nuclear Security Administration (NNSA) to develop a new, more physics-based paradigm for nuclear test monitoring. Currently, monitoring relies on semi-empirical models to discriminate explosions from earthquakes and to estimate key parameters such as yield. While these models have been highly successful monitoring established test sites, there is concern that future tests could occur in media and at scale depths of burial outside of our empirical experience. This is highlighted by the North Korean tests, which exhibit poor performance of a reliable discriminant, mb:Ms (Selby et al., 2012), possibly due to source emplacement and differences in seismic responses for nascent and established test sites. The goal of SPE is to replace these semi-empirical relationships with numerical techniques grounded in a physical basis and thus applicable to any geologic setting or depth.

  13. Semi-Empirical Validation of the Cross-Band Relative Absorption Technique for the Measurement of Molecular Mixing Ratios

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narasimha S

    2013-01-01

Studies were performed to carry out semi-empirical validation of a new measurement approach we propose for molecular mixing ratio determination. The approach is based on relative measurements in bands of O2 and other molecules and as such may be best described as cross-band relative absorption (CoBRA). The current validation studies rely upon well verified and established theoretical and experimental databases, satellite data assimilations and modeling codes such as HITRAN, the line-by-line radiative transfer model (LBLRTM), and the modern-era retrospective analysis for research and applications (MERRA). The approach holds promise for atmospheric mixing ratio measurements of CO2 and a variety of other molecules currently under investigation for several future satellite lidar missions. One of the advantages of the method is a significant reduction of the temperature sensitivity uncertainties, which is illustrated with application to the ASCENDS mission for the measurement of CO2 mixing ratios (XCO2). Additional advantages of the method include the possibility to closely match cross-band weighting function combinations, which is harder to achieve using conventional differential absorption techniques, and the potential for additional corrections for water vapor and other interferences without using data from numerical weather prediction (NWP) models.

  14. Hybrid machine learning technique for forecasting Dhaka stock market timing decisions.

    PubMed

    Banik, Shipra; Khodadad Khan, A F M; Anwer, Mohammad

    2014-01-01

Forecasting the stock market has been a difficult job for applied researchers owing to the nature of the data, which are very noisy and time varying. This difficulty has been highlighted by several empirical studies, yet a number of researchers have successfully applied machine learning techniques to forecast stock markets. This paper studies stock prediction for the use of investors, who typically incur losses because of uncertain investment objectives and poorly understood assets. This paper proposes a rough set model, a neural network model, and a hybrid neural network and rough set model to find optimal buy and sell points for a share on the Dhaka stock exchange. Experimental findings demonstrate that our proposed hybrid model has higher precision than the single rough set model and the neural network model. We believe these findings will help stock investors decide on optimal buy and/or sell times on the Dhaka stock exchange.

  15. Hybrid Machine Learning Technique for Forecasting Dhaka Stock Market Timing Decisions

    PubMed Central

    Banik, Shipra; Khodadad Khan, A. F. M.; Anwer, Mohammad

    2014-01-01

Forecasting the stock market has been a difficult job for applied researchers owing to the nature of the data, which are very noisy and time varying. This difficulty has been highlighted by several empirical studies, yet a number of researchers have successfully applied machine learning techniques to forecast stock markets. This paper studies stock prediction for the use of investors, who typically incur losses because of uncertain investment objectives and poorly understood assets. This paper proposes a rough set model, a neural network model, and a hybrid neural network and rough set model to find optimal buy and sell points for a share on the Dhaka stock exchange. Experimental findings demonstrate that our proposed hybrid model has higher precision than the single rough set model and the neural network model. We believe these findings will help stock investors decide on optimal buy and/or sell times on the Dhaka stock exchange. PMID:24701205

  16. Modeling the Malaysian motor insurance claim using artificial neural network and adaptive NeuroFuzzy inference system

    NASA Astrophysics Data System (ADS)

    Mohd Yunos, Zuriahati; Shamsuddin, Siti Mariyam; Ismail, Noriszura; Sallehuddin, Roselina

    2013-04-01

Artificial neural networks (ANN) with the back propagation (BP) algorithm and the adaptive neuro-fuzzy inference system (ANFIS) were chosen as alternative techniques for modeling motor insurance claims. In particular, the ANN and ANFIS techniques are applied to model and forecast Malaysian motor insurance data, which are categorized into four claim types: third party property damage (TPPD), third party bodily injury (TPBI), own damage (OD) and theft. This study determines whether ANN and ANFIS models are capable of accurately predicting motor insurance claims. Changes were made to the network structure: the number of input nodes, the number of hidden nodes and the pre-processing techniques were examined, and a cross-validation technique was used to improve the generalization ability of the ANN and ANFIS models. Based on the empirical studies, the prediction performance of the ANN and ANFIS models improved when using different numbers of input nodes and hidden nodes, and also various sizes of data. The experimental results reveal that the ANFIS model outperformed the ANN model. Both models are capable of producing reliable predictions for the Malaysian motor insurance claims; hence, the proposed method can be applied as an alternative to predict claim frequency and claim severity.
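
    A minimal sketch of the cross-validated neural-network modeling step described above, assuming scikit-learn; the predictors, claim values, and hidden-layer sizes are illustrative placeholders, not the study's data or settings:

        import numpy as np
        from sklearn.model_selection import KFold
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        # Hypothetical predictors (e.g. exposure, vehicle age, coverage class) and claim amounts.
        X = rng.normal(size=(200, 3))
        y = np.exp(0.5 * X[:, 0] - 0.3 * X[:, 1]) + rng.normal(scale=0.1, size=200)

        results = []
        for hidden in (4, 8, 16):                      # vary the number of hidden nodes
            model = make_pipeline(StandardScaler(),
                                  MLPRegressor(hidden_layer_sizes=(hidden,),
                                               max_iter=2000, random_state=0))
            fold_err = []
            for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
                model.fit(X[train], y[train])
                fold_err.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
            results.append((hidden, np.mean(fold_err)))
        print(results)                                  # pick the structure with lowest CV error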

  17. Development of an empirical mathematical model for describing and optimizing the hygiene potential of a thermophilic anaerobic bioreactor treating faeces.

    PubMed

    Lübken, M; Wichern, M; Bischof, F; Prechtl, S; Horn, H

    2007-01-01

Poor sanitation and insufficient disposal of sewage and faeces are primarily responsible for water-associated health problems in developing countries. Domestic sewage and faeces are prevalently discharged into surface waters which are used by the inhabitants as a source for drinking water. This paper presents a decentralized anaerobic process technique for handling such domestic organic waste. Such an efficient and compact system for treating faeces and food waste may be of great benefit for developing countries. Besides a stable biogas production for energy generation, the reduction of bacterial pathogens is of particular importance. In our research we investigated the pathogen removal capacity of the reactor, which has been operated under thermophilic conditions. Faecal coliforms and intestinal enterococci were used as indicator organisms for bacterial pathogens. Using the multiple regression analysis technique, an empirical mathematical model has been developed. The model shows a high correlation between removal efficiency and both hydraulic retention time (HRT) and temperature. With this model, an optimized HRT for defined bacterial pathogen effluent standards can easily be calculated. Thus, hygiene potential can be evaluated along with economic aspects. In this paper not only are results describing the hygiene potential of a thermophilic anaerobic bioreactor presented, but also an exemplary method for drawing the right conclusions from biological tests with the aid of mathematical tools.
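
    A minimal sketch of the kind of multiple-regression model described above, using illustrative (not measured) values for log pathogen removal as a function of HRT and temperature; the fitted relationship is then inverted for the HRT meeting an assumed target removal:

        import numpy as np

        # Illustrative data: hydraulic retention time (d), temperature (degC), log10 removal.
        hrt  = np.array([2, 4, 6, 8, 10, 12], dtype=float)
        temp = np.array([50, 50, 55, 55, 60, 60], dtype=float)
        logr = np.array([1.2, 2.0, 2.9, 3.5, 4.4, 5.0])

        # Multiple linear regression: logr ~ b0 + b1*HRT + b2*T (least squares).
        A = np.column_stack([np.ones_like(hrt), hrt, temp])
        b, *_ = np.linalg.lstsq(A, logr, rcond=None)

        def predicted_removal(h, t):
            return b[0] + b[1] * h + b[2] * t

        # Invert for the HRT that achieves a target log removal at a given temperature.
        target, t_op = 4.0, 55.0
        hrt_required = (target - b[0] - b[2] * t_op) / b[1]
        print(b, hrt_required)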

  18. Empirical Reconstruction and Numerical Modeling of the First Geoeffective Coronal Mass Ejection of Solar Cycle 24

    NASA Astrophysics Data System (ADS)

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-03-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80° from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10° (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  19. Eastern approaches for enhancing women's sexuality: mindfulness, acupuncture, and yoga (CME).

    PubMed

    Brotto, Lori A; Krychman, Michael; Jacobson, Pamela

    2008-12-01

A significant proportion of women report unsatisfying sexual experiences despite no obvious difficulties in the traditional components of sexual response (desire, arousal, and orgasm). Some suggest that non-goal-oriented, spiritual elements of sexuality might fill the gap that more contemporary forms of treatment are not addressing. Mindfulness, acupuncture, and yoga are Eastern techniques that have been applied to women's sexuality. Here, we review the empirical literature on their efficacy. Our search revealed two empirical studies of mindfulness, two of acupuncture, and one of yoga in the treatment of sexual dysfunction. Mindfulness significantly improves several aspects of sexual response and reduces sexual distress in women with sexual desire and arousal disorders. In women with provoked vestibulodynia, acupuncture significantly reduces pain and improves quality of life. There is also a case series of acupuncture significantly improving desire among women with hypoactive sexual desire disorder. Although yoga has only been empirically examined and found to be effective for treating sexual dysfunction (premature ejaculation) in men, numerous historical books cite benefits of yoga for women's sexuality. The empirical literature supporting Eastern techniques, such as mindfulness, acupuncture, and yoga, for women's sexual complaints and loss of satisfaction is sparse but promising. Future research should aim to provide empirical support for Eastern techniques in women's sexuality.

  20. Effects of bearing cleaning and lube environment on bearing performance

    NASA Technical Reports Server (NTRS)

    Ward, Peter C.

    1995-01-01

    Running torque data of SR6 ball bearings are presented for different temperatures and speeds. The data are discussed in contrast to generally used torque prediction models and point out the need to obtain empirical data in critical applications. Also, the effects of changing bearing washing techniques from old, universally used CFC-based systems to CFC-free aqueous/alkaline solutions are discussed. Data on wettability, torque and lubricant life using SR3 ball bearings are presented. In general, performance is improved using the new aqueous washing techniques.

  1. An evaluation of dynamic mutuality measurements and methods in cyclic time series

    NASA Astrophysics Data System (ADS)

    Xia, Xiaohua; Huang, Guitian; Duan, Na

    2010-12-01

Several measurements and techniques have been developed to detect dynamic mutuality and synchronicity of time series in econometrics. This study aims to compare the performance of five methods, i.e., linear regression, dynamic correlation, Markov switching models, the concordance index and recurrence quantification analysis, through numerical simulations. We evaluate the abilities of these methods to capture structural change and cyclicity in time series, and the findings of this paper offer guidance to both academic and empirical researchers. Illustrative examples are also provided to demonstrate the subtle differences between these techniques.

  2. A Service Design Thinking Approach for Stakeholder-Centred eHealth.

    PubMed

    Lee, Eunji

    2016-01-01

    Studies have described the opportunities and challenges of applying service design techniques to health services, but empirical evidence on how such techniques can be implemented in the context of eHealth services is still lacking. This paper presents how a service design thinking approach can be applied for specification of an existing and new eHealth service by supporting evaluation of the current service and facilitating suggestions for the future service. We propose Service Journey Modelling Language and Service Journey Cards to engage stakeholders in the design of eHealth services.

  3. Empirical Mode Decomposition and k-Nearest Embedding Vectors for Timely Analyses of Antibiotic Resistance Trends

    PubMed Central

    Teodoro, Douglas; Lovis, Christian

    2013-01-01

Background: Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. Objective: To improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method from the combination of trend extraction and machine learning models for enhanced biosurveillance systems. Methods: We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. Results: The algorithms that decompose the resistance time series and filter out high frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use-cases of antibiotic resistance trend extraction, where empirical mode decomposition was applied to highlight the specificities of the resistance trends. Conclusion: The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends. PMID:23637796
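
    A minimal sketch of the delay-coordinate embedding plus k-nearest-neighbor projection step described above (the EMD step itself would typically come from a separate package and is omitted here); the series, embedding dimension, lag, and k are illustrative assumptions:

        import numpy as np

        def knn_forecast(series, dim=3, lag=1, k=5, horizon=1):
            """Delay-coordinate embedding + k-nearest-neighbour projection (one step ahead)."""
            # Library of delay vectors and the value observed `horizon` steps after each one.
            n = len(series) - (dim - 1) * lag - horizon
            lib = np.array([series[i:i + dim * lag:lag] for i in range(n)])
            fut = np.array([series[i + (dim - 1) * lag + horizon] for i in range(n)])
            # Query vector: the most recent delay coordinates.
            q = series[-(dim - 1) * lag - 1::lag][:dim]
            d = np.linalg.norm(lib - q, axis=1)
            idx = np.argsort(d)[:k]
            w = 1.0 / (d[idx] + 1e-12)                 # inverse-distance weights
            return float(np.sum(w * fut[idx]) / np.sum(w))

        # Toy resistance-like series (illustrative only): slow trend plus seasonality plus noise.
        t = np.arange(200)
        x = 0.02 * t + np.sin(2 * np.pi * t / 26) + np.random.default_rng(1).normal(0, 0.1, 200)
        print(knn_forecast(x, dim=4, lag=2, k=7))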

  4. Debates as a Pedagogical Learning Technique: Empirical Research with Business Students

    ERIC Educational Resources Information Center

    Rao, Pramila

    2010-01-01

    Purpose: The purpose of this paper is to enhance knowledge on debates as a pedagogical learning technique. Design/methodology/approach: This empirical research was conducted in a northeastern university in the USA on graduate and undergraduate business students taking human resource management (HRM) classes. This research was conducted in the…

  5. New analytical technique for carbon dioxide absorption solvents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pouryousefi, F.; Idem, R.O.

    2008-02-15

The densities and refractive indices of two binary systems (water + MEA and water + MDEA) and three ternary systems (water + MEA + CO2, water + MDEA + CO2, and water + MEA + MDEA) used for carbon dioxide (CO2) capture were measured over the range of compositions of the aqueous alkanolamine(s) used for CO2 absorption at temperatures from 295 to 338 K. Experimental densities were modeled empirically, while the experimental refractive indices were modeled using well-established models from the known values of their pure-component densities and refractive indices. The density and Gladstone-Dale refractive index models were then used to obtain the compositions of unknown samples of the binary and ternary systems by simultaneous solution of the density and refractive index equations. The results from this technique have been compared with HPLC (high-performance liquid chromatography) results, while a third independent technique (acid-base titration) was used to verify the results. The results show that the systems' compositions obtained from the simple and easy-to-use refractive index/density technique were very comparable to the expensive and laborious HPLC/titration techniques, suggesting that the refractive index/density technique can be used to replace existing methods for analysis of fresh or nondegraded, CO2-loaded, single and mixed alkanolamine solutions.
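
    A minimal sketch of the simultaneous-solution step described above, assuming SciPy and purely hypothetical calibration functions for density and refractive index (the coefficients are illustrative, not the paper's fitted models):

        import numpy as np
        from scipy.optimize import fsolve

        # Hypothetical calibration surfaces (illustrative coefficients): density and
        # refractive index of an aqueous amine mixture as functions of amine mass
        # fraction w and CO2 loading a.
        def density(w, a):
            return 0.997 + 0.15 * w + 0.30 * a          # g/cm^3

        def refractive_index(w, a):
            return 1.333 + 0.060 * w + 0.020 * a        # Gladstone-Dale-like mixing rule

        def residuals(x, rho_meas, n_meas):
            w, a = x
            return [density(w, a) - rho_meas, refractive_index(w, a) - n_meas]

        # Measured sample: solve the two equations simultaneously for (w, a).
        w_est, a_est = fsolve(residuals, x0=[0.3, 0.2], args=(1.10, 1.36))
        print(w_est, a_est)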

  6. How much detail is needed in modeling a transcranial magnetic stimulation figure-8 coil: Measurements and brain simulations

    PubMed Central

    Mandija, Stefano; Sommer, Iris E. C.; van den Berg, Cornelis A. T.; Neggers, Sebastiaan F. W.

    2017-01-01

Background: Despite the wide adoption of TMS, its spatial and temporal patterns of neuronal effects are not well understood. Although progress has been made in predicting induced currents in the brain using realistic finite element models (FEM), there is little consensus on how the magnetic field of a typical TMS coil should be modeled, and empirical validation of such models is limited and subject to several limitations. Methods: We evaluate and empirically validate models of a figure-of-eight TMS coil that are commonly used in published modeling studies, of increasing complexity: a simple circular coil model; a coil with in-plane spiral winding turns; and finally one with stacked spiral winding turns. We assess the electric fields induced by all three coil models in the motor cortex using an FEM model. Biot-Savart models of discretized wires were used to approximate the three coil models of increasing complexity. We use a tailored MR-based phase mapping technique to obtain a full 3D validation of the incident magnetic field induced in a cylindrical phantom by our TMS coil. FEM-based simulations on a meshed 3D brain model consisting of five tissue types were performed, using two orthogonal coil orientations. Results: Substantial differences in the induced currents are observed, both theoretically and empirically, between highly idealized coils and coils with correctly modeled spiral winding turns. The thickness of the coil winding turns affects the induced electric field only minimally and does not influence the predicted activation. Conclusion: TMS coil models used in FEM simulations should include the in-plane coil geometry in order to make reliable predictions of the incident field, of the induced electric field, and hence of neuronal activation. PMID:28640923
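
    A minimal sketch of a Biot-Savart computation for a figure-of-eight coil idealized as two coplanar circular loops of straight wire segments, assuming NumPy; the geometry, current, and field point are illustrative and much simpler than the spiral-winding models evaluated in the study:

        import numpy as np

        MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

        def biot_savart(segments, r, current=1.0):
            """Magnetic field at point r from a polyline of wire segments (N x 2 x 3 array)."""
            b = np.zeros(3)
            for start, end in segments:
                dl = end - start
                mid = 0.5 * (start + end)
                rp = r - mid
                b += MU0 * current / (4 * np.pi) * np.cross(dl, rp) / np.linalg.norm(rp) ** 3
            return b

        def circular_loop(center, radius, n=72):
            ang = np.linspace(0, 2 * np.pi, n + 1)
            pts = np.stack([center[0] + radius * np.cos(ang),
                            center[1] + radius * np.sin(ang),
                            np.full(n + 1, center[2])], axis=1)
            return np.stack([pts[:-1], pts[1:]], axis=1)

        # Idealized figure-of-eight: two coplanar loops with opposite winding sense.
        left  = circular_loop(np.array([-0.045, 0.0, 0.0]), 0.04)
        right = circular_loop(np.array([+0.045, 0.0, 0.0]), 0.04)[::-1, ::-1]  # reversed current

        field_point = np.array([0.0, 0.0, 0.03])  # 3 cm from the coil plane
        print(biot_savart(np.vstack([left, right]), field_point, current=5e3))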

  7. The Predicaments of Non-Residential Students in Ghanaian Institutions of Higher Education: A Micro-Level Empirical Evidence

    ERIC Educational Resources Information Center

    Addai, Isaac

    2015-01-01

    This paper in the field of capacity building and students' affairs used the external survey assessment techniques of the probit model to examine the predicaments of non-resident students of the College of Technology Education, University of Education, Winneba. Considering the very limited residential facilities and the growing demand for tertiary…

  8. Using small area estimation and Lidar-derived variables for multivariate prediction of forest attributes

    Treesearch

    F. Mauro; Vicente Monleon; H. Temesgen

    2015-01-01

    Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...

  9. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship and by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that produce accurate predictions valid only for the data used and that are too complex to support inferences about the underlying process.
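
    A minimal sketch of the idea of using artificially generated spectra to gauge overfitting, assuming scikit-learn and partial least squares as the regression technique; the data are synthetic and the index shown (calibration R^2 on noise spectra relative to real spectra) is an illustrative stand-in, not necessarily the NOIS definition used by the authors:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)

        # Illustrative "hyperspectral" data: 60 samples x 200 correlated bands, one response.
        bands = np.cumsum(rng.normal(size=(60, 200)), axis=1)
        y = bands[:, 50] * 0.02 + rng.normal(scale=0.05, size=60)

        def calibration_r2(X, y, n_comp):
            model = PLSRegression(n_components=n_comp).fit(X, y)
            yhat = model.predict(X).ravel()
            return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

        # Noise-based overfitting check: how much apparent fit does each complexity level
        # achieve on artificial spectra that have no real relation to y?
        for n_comp in (2, 5, 10, 20):
            r2_real  = calibration_r2(bands, y, n_comp)
            fake     = np.cumsum(rng.normal(size=bands.shape), axis=1)
            r2_noise = calibration_r2(fake, y, n_comp)
            print(n_comp, round(r2_real, 3), round(r2_noise, 3),
                  round(r2_noise / max(r2_real, 1e-9), 3))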

  10. Microwave Remote Sensing Modeling of Ocean Surface Salinity and Winds Using an Empirical Sea Surface Spectrum

    NASA Technical Reports Server (NTRS)

    Yueh, Simon H.

    2004-01-01

    Active and passive microwave remote sensing techniques have been investigated for the remote sensing of ocean surface wind and salinity. We revised an ocean surface spectrum using the CMOD-5 geophysical model function (GMF) for the European Remote Sensing (ERS) C-band scatterometer and the Ku-band GMF for the NASA SeaWinds scatterometer. The predictions of microwave brightness temperatures from this model agree well with satellite, aircraft and tower-based microwave radiometer data. This suggests that the impact of surface roughness on microwave brightness temperatures and radar scattering coefficients of sea surfaces can be consistently characterized by a roughness spectrum, providing physical basis for using combined active and passive remote sensing techniques for ocean surface wind and salinity remote sensing.

  11. Volterra model of the parametric array loudspeaker operating at ultrasonic frequencies.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2016-11-01

    The parametric array loudspeaker (PAL) is an application of the parametric acoustic array in air, which can be applied to transmit a narrow audio beam from an ultrasonic emitter. However, nonlinear distortion is very perceptible in the audio beam. Modulation methods to reduce the nonlinear distortion are available for on-axis far-field applications. For other applications, preprocessing techniques are wanting. In order to develop a preprocessing technique with general applicability to a wide range of operating conditions, the Volterra filter is investigated as a nonlinear model of the PAL in this paper. Limitations of the standard audio-to-audio Volterra filter are elaborated. An improved ultrasound-to-ultrasound Volterra filter is proposed and empirically demonstrated to be a more generic Volterra model of the PAL.
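
    A minimal sketch of a discrete second-order Volterra filter, assuming NumPy; the kernels are illustrative placeholders rather than kernels identified for an actual parametric array loudspeaker:

        import numpy as np

        def volterra2(x, h1, h2):
            """Output of a discrete 2nd-order Volterra filter with kernels h1[M] and h2[M, M]."""
            m = len(h1)
            y = np.zeros_like(x, dtype=float)
            for n in range(len(x)):
                # Vector of the M most recent inputs x[n], x[n-1], ..., x[n-M+1] (zero padded).
                past = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(m)])
                y[n] = h1 @ past + past @ h2 @ past
            return y

        # Illustrative kernels: a short linear response plus a weak quadratic memory term.
        h1 = np.array([0.8, 0.3, 0.1])
        h2 = 0.05 * np.outer([1.0, 0.5, 0.2], [1.0, 0.5, 0.2])

        t = np.arange(0, 1, 1e-3)
        x = np.sin(2 * np.pi * 5 * t)
        print(volterra2(x, h1, h2)[:5])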

  12. Landscape influences on dispersal behaviour: a theoretical model and empirical test using the fire salamander, Salamandra infraimmaculata.

    PubMed

    Kershenbaum, Arik; Blank, Lior; Sinai, Iftach; Merilä, Juha; Blaustein, Leon; Templeton, Alan R

    2014-06-01

    When populations reside within a heterogeneous landscape, isolation by distance may not be a good predictor of genetic divergence if dispersal behaviour and therefore gene flow depend on landscape features. Commonly used approaches linking landscape features to gene flow include the least cost path (LCP), random walk (RW), and isolation by resistance (IBR) models. However, none of these models is likely to be the most appropriate for all species and in all environments. We compared the performance of LCP, RW and IBR models of dispersal with the aid of simulations conducted on artificially generated landscapes. We also applied each model to empirical data on the landscape genetics of the endangered fire salamander, Salamandra infraimmaculata, in northern Israel, where conservation planning requires an understanding of the dispersal corridors. Our simulations demonstrate that wide dispersal corridors of the low-cost environment facilitate dispersal in the IBR model, but inhibit dispersal in the RW model. In our empirical study, IBR explained the genetic divergence better than the LCP and RW models (partial Mantel correlation 0.413 for IBR, compared to 0.212 for LCP, and 0.340 for RW). Overall dispersal cost in salamanders was also well predicted by landscape feature slope steepness (76%), and elevation (24%). We conclude that fire salamander dispersal is well characterised by IBR predictions. Together with our simulation findings, these results indicate that wide dispersal corridors facilitate, rather than hinder, salamander dispersal. Comparison of genetic data to dispersal model outputs can be a useful technique in inferring dispersal behaviour from population genetic data.

  13. Quantification of Neutral Wind Variability in the Upper Thermosphere

    NASA Technical Reports Server (NTRS)

    Richards, Philip G.

    2000-01-01

The overall objective of this grant was to: 1) quantify thermospheric neutral wind behavior in the ionosphere, which was to be achieved by developing an improved empirical wind model; 2) validate the procedure for obtaining winds from the height of the peak density; and 3) improve the model capabilities and make updated versions of the model available to other scientists. The approach is to use neutral winds derived from ionosonde measurements of the height of the peak electron density (h(sub m)F(sub 2)). One of the proposed first-year tasks was to perform validation studies on the method. Substantial progress has been made with regard to both the empirical model and the validation study. Funding from this grant has also enabled a number of fruitful collaborations with other researchers, one of the stated aims in the proposal. Graduate student Mayra Martinez has developed the mathematical formulation for the empirical wind model as part of her dissertation. As proposed, the authors continued validation studies of the technique for determining winds from h(sub m)F(sub 2) and submitted a paper to the Journal of Geophysical Research in December 1996 entitled "Thermospheric neutral winds at southern mid-latitudes: comparison of optical and ionosonde h(sub m)F(sub 2) methods." A second paper, entitled "Ionospheric behavior at a southern mid-latitude in March 1995," came out of the March 1995 data set and was published in the Journal of Geophysical Research. A new algorithm was developed, and the ionosphere has also been modeled.

  14. Regional Seismic Amplitude Modeling and Tomography for Earthquake-Explosion Discrimination

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Pasyanos, M. E.; Matzel, E.; Gok, R.; Sweeney, J.; Ford, S. R.; Rodgers, A. J.

    2008-12-01

    Empirically explosions have been discriminated from natural earthquakes using regional amplitude ratio techniques such as P/S in a variety of frequency bands. We demonstrate that such ratios discriminate nuclear tests from earthquakes using closely located pairs of earthquakes and explosions recorded on common, publicly available stations at test sites around the world (e.g. Nevada, Novaya Zemlya, Semipalatinsk, Lop Nor, India, Pakistan, and North Korea). We are examining if there is any relationship between the observed P/S and the point source variability revealed by longer period full waveform modeling. For example, regional waveform modeling shows strong tectonic release from the May 1998 India test, in contrast with very little tectonic release in the October 2006 North Korea test, but the P/S discrimination behavior appears similar in both events using the limited regional data available. While regional amplitude ratios such as P/S can separate events in close proximity, it is also empirically well known that path effects can greatly distort observed amplitudes and make earthquakes appear very explosion-like. Previously we have shown that the MDAC (Magnitude Distance Amplitude Correction, Walter and Taylor, 2001) technique can account for simple 1-D attenuation and geometrical spreading corrections, as well as magnitude and site effects. However in some regions 1-D path corrections are a poor approximation and we need to develop 2-D path corrections. Here we demonstrate a new 2-D attenuation tomography technique using the MDAC earthquake source model applied to a set of events and stations in both the Middle East and the Yellow Sea Korean Peninsula regions. We believe this new 2-D MDAC tomography has the potential to greatly improve earthquake-explosion discrimination, particularly in tectonically complex regions such as the Middle East.
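
    A minimal sketch of a regional P/S amplitude-ratio measurement, assuming SciPy; the synthetic waveform, frequency band, window times, and amplitudes are illustrative assumptions, not values from the study:

        import numpy as np
        from scipy.signal import butter, sosfilt

        def band_rms(trace, fs, t0, t1, band=(6.0, 8.0)):
            """RMS amplitude of `trace` in a time window [t0, t1] s, band-pass filtered (Hz)."""
            sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
            x = sosfilt(sos, trace)
            return np.sqrt(np.mean(x[int(t0 * fs):int(t1 * fs)] ** 2))

        def p_over_s(trace, fs, p_window, s_window):
            return np.log10(band_rms(trace, fs, *p_window) / band_rms(trace, fs, *s_window))

        # Synthetic example: an "explosion-like" record with relatively strong P energy.
        fs = 40.0
        t = np.arange(0, 120, 1 / fs)
        rng = np.random.default_rng(2)
        trace = rng.normal(0, 0.05, t.size)
        trace[int(20 * fs):int(25 * fs)] += np.sin(2 * np.pi * 7 * t[:int(5 * fs)]) * 1.0   # P arrival
        trace[int(45 * fs):int(55 * fs)] += np.sin(2 * np.pi * 7 * t[:int(10 * fs)]) * 0.4  # S/Lg arrival

        print(p_over_s(trace, fs, p_window=(20, 25), s_window=(45, 55)))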

  15. Analysis of transitional separation bubbles on infinite swept wings

    NASA Technical Reports Server (NTRS)

    Davis, R. L.; Carter, J. E.

    1986-01-01

    A previously developed two-dimensional local inviscid-viscous interaction technique for the analysis of airfoil transitional separation bubbles, ALESEP (Airfoil Leading Edge Separation), has been extended for the calculation of transitional separation bubbles over infinite swept wings. As part of this effort, Roberts' empirical correlation, which is interpreted as a separated flow empirical extension of Mack's stability theory for attached flows, has been incorporated into the ALESEP procedure for the prediction of the transition location within the separation bubble. In addition, the viscous procedure used in the ALESEP techniques has been modified to allow for wall suction. A series of two-dimensional calculations is presented as a verification of the prediction capability of the interaction techniques with the Roberts' transition model. Numerical tests have shown that this two-dimensional natural transition correlation may also be applied to transitional separation bubbles over infinite swept wings. Results of the interaction procedure are compared with Horton's detailed experimental data for separated flow over a swept plate which demonstrates the accuracy of the present technique. Wall suction has been applied to a similar interaction calculation to demonstrate its effect on the separation bubble. The principal conclusion of this paper is that the prediction of transitional separation bubbles over two-dimensional or infinite swept geometries is now possible using the present interacting boundary layer approach.

  16. A discrete element method-based approach to predict the breakage of coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Varun; Sun, Xin; Xu, Wei

Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.

  17. A discrete element method-based approach to predict the breakage of coal

    DOE PAGES

    Gupta, Varun; Sun, Xin; Xu, Wei; ...

    2017-08-05

Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.

  18. The Past, Present and Future of Geodemographic Research in the United States and United Kingdom

    PubMed Central

    Singleton, Alexander D.; Spielman, Seth E.

    2014-01-01

    This article presents an extensive comparative review of the emergence and application of geodemographics in both the United States and United Kingdom, situating them as an extension of earlier empirically driven models of urban socio-spatial structure. The empirical and theoretical basis for this generalization technique is also considered. Findings demonstrate critical differences in both the application and development of geodemographics between the United States and United Kingdom resulting from their diverging histories, variable data economies, and availability of academic or free classifications. Finally, current methodological research is reviewed, linking this discussion prospectively to the changing spatial data economy in both the United States and United Kingdom. PMID:25484455

  19. Qualitative and Quantitative Distinctions in Personality Disorder

    PubMed Central

    Wright, Aidan G. C.

    2011-01-01

    The “categorical-dimensional debate” has catalyzed a wealth of empirical advances in the study of personality pathology. However, this debate is merely one articulation of a broader conceptual question regarding whether to define and describe psychopathology as a quantitatively extreme expression of normal functioning or as qualitatively distinct in its process. In this paper I argue that dynamic models of personality (e.g., object-relations, cognitive-affective processing system) offer the conceptual scaffolding to reconcile these seemingly incompatible approaches to characterizing the relationship between normal and pathological personality. I propose that advances in personality assessment that sample behavior and experiences intensively provide the empirical techniques, whereas interpersonal theory offers an integrative theoretical framework, for accomplishing this goal. PMID:22804676

  20. Liquid Alumina: Detailed Atomic Coordination Determined from Neutron Diffraction Data Using Empirical Potential Structure Refinement

    NASA Astrophysics Data System (ADS)

    Landron, C.; Hennet, L.; Jenkins, T. E.; Greaves, G. N.; Coutures, J. P.; Soper, A. K.

    2001-05-01

The neutron scattering structure factor S_N(Q) for a 40 mg drop of molten alumina (Al2O3) held at 2500 K, using a laser-heated aerodynamic levitation furnace, is measured for the first time. A 1700 atom model of liquid alumina is generated from these data using the technique of empirical potential structure refinement. About 62% of the aluminum sites are 4-fold coordinated, matching the mostly triply coordinated oxygen sites, but some 24% of the aluminum sites are 5-fold coordinated. The octahedral aluminum sites found in crystalline α-Al2O3 occur only at the 2% level in liquid alumina.

  1. Model improvements and validation of TerraSAR-X precise orbit determination

    NASA Astrophysics Data System (ADS)

    Hackel, S.; Montenbruck, O.; Steigenberger, P.; Balss, U.; Gisinger, C.; Eineder, M.

    2017-05-01

    The radar imaging satellite mission TerraSAR-X requires precisely determined satellite orbits for validating geodetic remote sensing techniques. Since the achieved quality of the operationally derived, reduced-dynamic (RD) orbit solutions limits the capabilities of the synthetic aperture radar (SAR) validation, an effort is made to improve the estimated orbit solutions. This paper discusses the benefits of refined dynamical models on orbit accuracy as well as estimated empirical accelerations and compares different dynamic models in a RD orbit determination. Modeling aspects discussed in the paper include the use of a macro-model for drag and radiation pressure computation, the use of high-quality atmospheric density and wind models as well as the benefit of high-fidelity gravity and ocean tide models. The Sun-synchronous dusk-dawn orbit geometry of TerraSAR-X results in a particular high correlation of solar radiation pressure modeling and estimated normal-direction positions. Furthermore, this mission offers a unique suite of independent sensors for orbit validation. Several parameters serve as quality indicators for the estimated satellite orbit solutions. These include the magnitude of the estimated empirical accelerations, satellite laser ranging (SLR) residuals, and SLR-based orbit corrections. Moreover, the radargrammetric distance measurements of the SAR instrument are selected for assessing the quality of the orbit solutions and compared to the SLR analysis. The use of high-fidelity satellite dynamics models in the RD approach is shown to clearly improve the orbit quality compared to simplified models and loosely constrained empirical accelerations. The estimated empirical accelerations are substantially reduced by 30% in tangential direction when working with the refined dynamical models. Likewise the SLR residuals are reduced from -3 ± 17 to 2 ± 13 mm, and the SLR-derived normal-direction position corrections are reduced from 15 to 6 mm, obtained from the 2012-2014 period. The radar range bias is reduced from -10.3 to -6.1 mm with the updated orbit solutions, which coincides with the reduced standard deviation of the SLR residuals. The improvements are mainly driven by the satellite macro-model for the purpose of solar radiation pressure modeling, improved atmospheric density models, and the use of state-of-the-art gravity field models.

  2. A Robust Geometric Model for Argument Classification

    NASA Astrophysics Data System (ADS)

    Giannone, Cristina; Croce, Danilo; Basili, Roberto; de Cao, Diego

    Argument classification is the task of assigning semantic roles to syntactic structures in natural language sentences. Supervised learning techniques for frame semantics have been recently shown to benefit from rich sets of syntactic features. However argument classification is also highly dependent on the semantics of the involved lexicals. Empirical studies have shown that domain dependence of lexical information causes large performance drops in outside domain tests. In this paper a distributional approach is proposed to improve the robustness of the learning model against out-of-domain lexical phenomena.

  3. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing error results show that interpolation using a fourth-order polynomial provides the best fit to option prices, with the lowest pricing error.
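
    A minimal sketch of leave-one-out cross-validation used to choose between interpolating polynomials, assuming NumPy; the strike/price table and the candidate degrees are illustrative, not market data:

        import numpy as np

        # Illustrative option data: strikes and observed call prices (not market data).
        strikes = np.array([80, 90, 95, 100, 105, 110, 120], dtype=float)
        prices  = np.array([21.0, 12.5, 9.0, 6.2, 4.1, 2.6, 1.1])

        def loocv_error(degree):
            """Leave-one-out pricing error of a polynomial fit of given degree."""
            errs = []
            for i in range(len(strikes)):
                mask = np.arange(len(strikes)) != i
                coef = np.polyfit(strikes[mask], prices[mask], degree)
                errs.append((np.polyval(coef, strikes[i]) - prices[i]) ** 2)
            return np.sqrt(np.mean(errs))

        for deg in (2, 4):
            print(deg, round(loocv_error(deg), 4))   # pick the degree with lowest LOOCV error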

  4. Poisson and negative binomial item count techniques for surveys with sensitive question.

    PubMed

    Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin

    2017-04-01

Although the item count technique is useful in surveys with sensitive questions, the privacy of those respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely the Poisson item count technique and the negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
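
    A minimal sketch of the Poisson item count idea using a simple moment-based estimator, assuming NumPy; the simulated survey, the estimator, and the normal-approximation interval are illustrative and not necessarily the closed-form estimators derived in the article:

        import numpy as np

        rng = np.random.default_rng(3)

        # Simulated Poisson item count survey: the treatment group reports a Poisson count
        # of innocuous items plus 1 if the respondent carries the sensitive trait; the
        # control group reports the Poisson count only.
        lam, pi_true, n = 2.0, 0.15, 2000
        control   = rng.poisson(lam, n)
        treatment = rng.poisson(lam, n) + (rng.random(n) < pi_true).astype(int)

        # Moment estimate of the sensitive proportion and a normal-approximation interval,
        # clipped to [0, 1].
        pi_hat = treatment.mean() - control.mean()
        se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
        ci = (max(0.0, pi_hat - 1.96 * se), min(1.0, pi_hat + 1.96 * se))
        print(round(pi_hat, 3), ci)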

  5. Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches

    USGS Publications Warehouse

    Nevers, Meredith B.; Whitman, Richard L.

    2011-01-01

Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce the risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R2) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.

  6. Recent Advances in Model-Assisted Probability of Detection

    NASA Technical Reports Server (NTRS)

    Thompson, R. Bruce; Brasche, Lisa J.; Lindgren, Eric; Swindell, Paul; Winfree, William P.

    2009-01-01

    The increased role played by probability of detection (POD) in structural integrity programs, combined with the significant time and cost associated with the purely empirical determination of POD, provides motivation for alternate means to estimate this important metric of NDE techniques. One approach to make the process of POD estimation more efficient is to complement limited empirical experiments with information from physics-based models of the inspection process or controlled laboratory experiments. The Model-Assisted Probability of Detection (MAPOD) Working Group was formed by the Air Force Research Laboratory, the FAA Technical Center, and NASA to explore these possibilities. Since the 2004 inception of the MAPOD Working Group, 11 meetings have been held in conjunction with major NDE conferences. This paper will review the accomplishments of this group, which includes over 90 members from around the world. Included will be a discussion of strategies developed to combine physics-based and empirical understanding, draft protocols that have been developed to guide application of the strategies, and demonstrations that have been or are being carried out in a number of countries. The talk will conclude with a discussion of future directions, which will include documentation of benefits via case studies, development of formal protocols for engineering practice, as well as a number of specific technical issues.

  7. Bayesian Approaches for Model and Multi-mission Satellites Data Fusion

    NASA Astrophysics Data System (ADS)

    Khaki, M., , Dr; Forootan, E.; Awange, J.; Kuhn, M.

    2017-12-01

Traditionally, data assimilation is formulated as a Bayesian approach that allows one to update model simulations using new incoming observations. This integration is necessary due to the uncertainty in model outputs, which mainly results from several drawbacks, e.g., limitations in accounting for the complexity of real-world processes, uncertainties of (unknown) empirical model parameters, and the absence of high resolution (both spatially and temporally) data. Data assimilation, however, requires knowledge of the physical process of a model, which may be either poorly described or entirely unavailable. Therefore, an alternative method is required to avoid this dependency. In this study we present a novel approach which can be used in hydrological applications. A non-parametric framework based on the Kalman filtering technique is proposed to improve hydrological model estimates without using the model dynamics. In particular, we assess Kalman-Takens formulations that take advantage of the delay coordinate method to reconstruct nonlinear dynamics in the absence of the physical process. This empirical relationship is then used instead of model equations to integrate satellite products with model outputs. We use water storage variables from World-Wide Water Resources Assessment (W3RA) simulations and update them using Gravity Recovery And Climate Experiment (GRACE) terrestrial water storage (TWS) data and surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) over Australia for the period 2003 to 2011. The performance of the proposed integration method is compared with data obtained from the more traditional assimilation scheme using the Ensemble Square-Root Filter (EnSRF) filtering technique (Khaki et al., 2017), as well as by evaluating them against ground-based soil moisture and groundwater observations within the Murray-Darling Basin.

  8. Mechanistic model to predict colostrum intake based on deuterium oxide dilution technique data and impact of gestation and prefarrowing diets on piglet intake and sow yield of colostrum.

    PubMed

    Theil, P K; Flummer, C; Hurley, W L; Kristensen, N B; Labouriau, R L; Sørensen, M T

    2014-12-01

The aims of the present study were to quantify the colostrum intake (CI) of piglets using the D2O dilution technique, to develop a mechanistic model to predict CI, to compare these data with the CI predicted by a previous empirical predictive model developed for bottle-fed piglets, and to study how the composition of diets fed to gestating sows affected piglet CI, sow colostrum yield (CY), and colostrum composition. In total, 240 piglets from 40 litters were enriched with D2O. The CI measured by D2O from birth until 24 h after the birth of the first-born piglet was on average 443 g (SD 151). Based on the measured CI, a mechanistic model to predict CI was developed using piglet characteristics (24-h weight gain [WG; g], BW at birth [BWB; kg], and duration of CI [D; min]): CI (g) = -106 + 2.26 WG + 200 BWB + 0.111 D - 1414 WG/D + 0.0182 WG/BWB (R^2 = 0.944). This model was used to predict the CI for all colostrum-suckling piglets within the 40 litters (n=500, mean=437 g, SD=153 g) and was compared with the CI predicted by a previous empirical predictive model (mean=305 g, SD=140 g). The previous empirical model underestimated the CI by 30% compared with that obtained by the new mechanistic model. The sows were fed 1 of 4 gestation diets (n=10 per diet) based on different fiber sources (low fiber [17%] or potato pulp, pectin residue, or sugarbeet pulp [32 to 40%]) from mating until d 108 of gestation. From d 108 of gestation until parturition, sows were fed 1 of 5 prefarrowing diets (n=8 per diet) varying in supplemented fat (3% animal fat, 8% coconut oil, 8% sunflower oil, 8% fish oil, or 4% fish oil + 4% octanoic acid). Sows fed diets with pectin residue or sugarbeet pulp during gestation produced colostrum with lower protein, fat, DM, and energy concentrations and higher lactose concentrations, and their piglets had greater CI as compared with sows fed potato pulp or the low-fiber diet (P<0.05), and sows fed pectin residue had a greater CY than potato pulp-fed sows (P<0.05). Prefarrowing diets affected neither CI nor CY, but the prefarrowing diet with coconut oil decreased lactose and increased DM concentrations of colostrum compared with other prefarrowing diets (P<0.05). In conclusion, the new mechanistic predictive model for CI suggests that the previous empirical predictive model underestimates the CI of sow-reared piglets by 30%. It was also concluded that the nutrition of sows during gestation affected CY and colostrum composition.
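
    A direct transcription of the mechanistic equation quoted above into a small function; the example piglet values are illustrative:

        def colostrum_intake(wg_g, bwb_kg, d_min):
            """Predicted 24-h colostrum intake (g) from the mechanistic model in the abstract.

            wg_g   : 24-h weight gain (g)
            bwb_kg : body weight at birth (kg)
            d_min  : duration of colostrum intake (min)
            """
            return (-106 + 2.26 * wg_g + 200 * bwb_kg + 0.111 * d_min
                    - 1414 * wg_g / d_min + 0.0182 * wg_g / bwb_kg)

        # Illustrative piglet: 100 g gain, 1.4 kg at birth, suckling over 24 h (1440 min).
        print(round(colostrum_intake(100, 1.4, 1440), 1))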

  9. Almost sure convergence in quantum spin glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buzinski, David, E-mail: dab197@case.edu; Meckes, Elizabeth, E-mail: elizabeth.meckes@case.edu

    2015-12-15

Recently, Keating, Linden, and Wells [Markov Processes Relat. Fields 21(3), 537-555 (2015)] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of Keating, Linden, and Wells to show that in fact the empirical spectral measure of such a random matrix is almost surely approximately Gaussian itself with no ensemble averaging. We also extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder [Math. Phys., Anal. Geom. 17(3-4), 441–464 (2014)].

  10. Empirical Evaluation of Hunk Metrics as Bug Predictors

    NASA Astrophysics Data System (ADS)

    Ferzund, Javed; Ahsan, Syed Nadeem; Wotawa, Franz

Reducing the number of bugs is a crucial issue during software development and maintenance. Software process and product metrics are good indicators of software complexity. These metrics have been used to build bug predictor models that help developers maintain the quality of software. In this paper we empirically evaluate the use of hunk metrics as predictors of bugs. We present a technique for bug prediction that works at the smallest units of code change, called hunks. We build bug prediction models using random forests, an efficient machine learning classifier. Hunk metrics are used to train the classifier, and each hunk metric is evaluated for its bug prediction capabilities. Our classifier can classify individual hunks as buggy or bug-free with 86% accuracy, 83% buggy-hunk precision and 77% buggy-hunk recall. We find that history-based and change-level hunk metrics are better predictors of bugs than code-level hunk metrics.
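
    A minimal sketch of hunk classification with a random forest, assuming scikit-learn; the hunk metrics and labels are synthetic placeholders, not the metrics or data used in the paper:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score, precision_score, recall_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)

        # Illustrative hunk metrics (e.g. lines added, past bug count of the file, author
        # experience) and a buggy/clean label; values are synthetic, not from the study.
        n = 1000
        X = np.column_stack([rng.poisson(8, n), rng.poisson(2, n), rng.uniform(0, 5, n)])
        p = 1 / (1 + np.exp(-(0.15 * X[:, 0] + 0.6 * X[:, 1] - 0.5 * X[:, 2] - 1.5)))
        y = (rng.random(n) < p).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        print(accuracy_score(y_te, pred), precision_score(y_te, pred), recall_score(y_te, pred))
        print(dict(zip(["added_lines", "past_bugs", "experience"], clf.feature_importances_)))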

  11. Empirical evidence for multi-scaled controls on wildfire size distributions in California

    NASA Astrophysics Data System (ADS)

    Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.

    2014-12-01

Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Under this view, controls on SOC event-size distributions are purely endogenous to the system and include (1) the frequency and pattern of ignitions, (2) the distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire-size distributions across ecoregions indicated that most ecoregions' fire-size distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit the larger and smaller fire sizes. We fit the same four power-law models to patch-size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggested that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine whether observed fire edges corresponded with aspect breaks more often than expected by random chance. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire-size distributions are multi-scaled and likely not purely SOC. California wildfire ecosystems appear to be adaptive, governed by stationary and non-stationary controls, which may be either exogenous or endogenous to the system.
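
    A minimal sketch of a one-parameter (continuous) power-law fit by maximum likelihood above a fixed lower cutoff, assuming NumPy; the synthetic fire sizes and cutoffs are illustrative, and the broken-stick and multi-parameter models mentioned above are not reproduced here:

        import numpy as np

        def powerlaw_mle(sizes, xmin):
            """Continuous power-law exponent (alpha) by maximum likelihood for x >= xmin."""
            x = np.asarray(sizes, dtype=float)
            x = x[x >= xmin]
            alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
            se = (alpha - 1.0) / np.sqrt(len(x))
            return alpha, se, len(x)

        # Synthetic "fire sizes" (ha): Pareto tail above 100 ha, for illustration only.
        rng = np.random.default_rng(5)
        sizes = 100 * (1 - rng.random(5000)) ** (-1 / 1.4)   # inverse-CDF sampling, alpha ~ 2.4

        for xmin in (100, 1000, 10000):
            alpha, se, n = powerlaw_mle(sizes, xmin)
            print(xmin, round(alpha, 2), round(se, 3), n)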

  12. Determination of rotor harmonic blade loads from acoustic measurements

    NASA Technical Reports Server (NTRS)

    Kasper, P. K.

    1975-01-01

    The magnitude of discrete frequency sound radiated by a rotating blade is strongly influenced by the presence of a nonuniform distribution of aerodynamic forces over the rotor disk. An analytical development and experimental results are provided for a technique by which harmonic blade loads are derived from acoustic measurements. The technique relates, on a one-to-one basis, the discrete frequency sound harmonic amplitudes measured at a point on the axis of rotation to the blade-load harmonic amplitudes. This technique was applied to acoustic data from two helicopter types and from a series of test results using the NASA-Langley Research Center rotor test facility. The inferred blade-load harmonics for the cases considered tended to follow an inverse power law relationship with harmonic blade-load number. Empirical curve fits to the data showed the harmonic fall-off rate to be in the range of 6 to 9 dB per octave of harmonic order. These empirical relationships were subsequently used as input data in a compatible far-field rotational noise prediction model. A comparison between predicted and measured off-axis sound harmonic levels is provided for the experimental cases considered.

  13. Optimization Techniques for Clustering,Connectivity, and Flow Problems in Complex Networks

    DTIC Science & Technology

    2012-10-01

    discrete optimization and for analysis of performance of algorithm portfolios; introducing a metaheuristic framework of variable objective search that... The results of empirical evaluation of the proposed algorithm are also included. 1.3 Theoretical analysis of heuristics and designing new metaheuristic... analysis of heuristics for inapproximable problems and designing new metaheuristic approaches for the problems of interest; (IV) Developing new models

  14. Applying the Technology Acceptance Model and flow theory to Cyworld user behavior: implication of the Web2.0 user acceptance.

    PubMed

    Shin, Dong-Hee; Kim, Won-Yong; Kim, Won-Young

    2008-06-01

    This study explores attitudinal and behavioral patterns when using Cyworld by adopting an expanded Technology Acceptance Model (TAM). A model for Cyworld acceptance is used to examine how various factors modified from the TAM influence acceptance and its antecedents. This model is examined through an empirical study of Cyworld users, using structural equation modeling techniques. The model shows reasonably good measurement properties and the constructs are validated. The results not only confirm the model but also reveal general factors applicable to Web2.0. A set of constructs in the model can be regarded as Web2.0-specific factors, acting as enhancing factors for attitudes and intention.

  15. Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos

    2015-05-01

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
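
    As a concrete illustration of one of the reviewed ingredients, the sketch below simulates correlated variates through a Gaussian copula for assumed marginal distributions; it is a generic example, not code from the report.

```python
import numpy as np
from scipy import stats

def correlated_variates(n, rho, marginal_u, marginal_v, seed=0):
    """Draw (u, v) whose dependence follows a Gaussian copula with correlation rho."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    p = stats.norm.cdf(z)                 # correlated uniform scores
    return marginal_u.ppf(p[:, 0]), marginal_v.ppf(p[:, 1])

u, v = correlated_variates(10000, rho=0.7,
                           marginal_u=stats.lognorm(s=0.5),
                           marginal_v=stats.gamma(a=2.0))
print("sample Spearman correlation:", round(stats.spearmanr(u, v)[0], 3))
```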

  16. A robust calibration technique for acoustic emission systems based on momentum transfer from a ball drop

    USGS Publications Warehouse

    McLaskey, Gregory C.; Lockner, David A.; Kilgore, Brian D.; Beeler, Nicholas M.

    2015-01-01

    We describe a technique to estimate the seismic moment of acoustic emissions and other extremely small seismic events. Unlike previous calibration techniques, it does not require modeling of the wave propagation, sensor response, or signal conditioning. Rather, this technique calibrates the recording system as a whole and uses a ball impact as a reference source or empirical Green’s function. To correctly apply this technique, we develop mathematical expressions that link the seismic moment $M_0$ of internal seismic sources (i.e., earthquakes and acoustic emissions) to the impulse, or change in momentum $\Delta p$, of externally applied seismic sources (i.e., meteor impacts or, in this case, ball impact). We find that, at low frequencies, moment and impulse are linked by a constant, which we call the force‐moment‐rate scale factor $C_{F\dot{M}} = M_0/\Delta p$. This constant is equal to twice the speed of sound in the material from which the seismic sources were generated. Next, we demonstrate the calibration technique on two different experimental rock mechanics facilities. The first example is a saw‐cut cylindrical granite sample that is loaded in a triaxial apparatus at 40 MPa confining pressure. The second example is a 2 m long fault cut in a granite sample and deformed in a large biaxial apparatus at lower stress levels. Using the empirical calibration technique, we are able to determine absolute source parameters including the seismic moment, corner frequency, stress drop, and radiated energy of these magnitude −2.5 to −7 seismic events.
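
    A small numerical illustration of the stated low-frequency relation $M_0 = C_{F\dot{M}}\,\Delta p$ with $C_{F\dot{M}} \approx 2c$; the ball mass, drop height, restitution coefficient, and wave speed below are assumed example values, not those of the experiments.

```python
import math

m = 0.00105          # ball mass, kg (assumed)
h = 0.30             # drop height, m (assumed)
e = 0.6              # coefficient of restitution (assumed)
c = 5000.0           # wave speed in granite, m/s (typical order of magnitude)

v_impact = math.sqrt(2.0 * 9.81 * h)
dp = m * (1.0 + e) * v_impact          # impulse transferred by impact plus rebound
M0_equiv = 2.0 * c * dp                # equivalent seismic moment, N*m

Mw = (2.0 / 3.0) * (math.log10(M0_equiv) - 9.1)   # moment magnitude, for scale
print(f"impulse = {dp:.4e} N*s, equivalent M0 = {M0_equiv:.3e} N*m, Mw = {Mw:.2f}")
```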

  17. Artificial Neural Network L* from different magnetospheric field models

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Koller, J.; Zaharia, S. G.; Jordanova, V. K.

    2011-12-01

    The third adiabatic invariant L* plays an important role in modeling and understanding the radiation belt dynamics. The popular way to numerically obtain the L* value follows the recipe described by Roederer [1970], which is, however, slow and computationally expensive. This work focuses on a new technique that can compute the L* value in microseconds without losing much accuracy: artificial neural networks. Since L* is related to the magnetic flux enclosed by a particle drift shell, global magnetic field information needed to trace the drift shell is required. A series of currently popular empirical magnetic field models are applied to create the L* data pool using 1 million data samples which are randomly selected within a solar cycle and within the global magnetosphere. The networks, trained from the above L* data pool, can thereby be used for fairly efficient L* calculation given input parameters valid within the trained temporal and spatial range. Besides the empirical magnetospheric models, a physics-based self-consistent inner magnetosphere model (RAM-SCB) developed at LANL is also utilized to calculate L* values and then to train the L* neural network. This model better predicts the magnetospheric configuration and can therefore significantly improve the L* calculation. The above neural network L* technique will enable, for the first time, comprehensive solar-cycle-long studies of radiation belt processes. However, neural networks trained from different magnetic field models can result in different L* values, which could cause misinterpretation of radiation belt dynamics, such as where the source of radiation belt charged particles is and which mechanism is dominant in accelerating the particles. This fact calls for care in choosing a magnetospheric field model for the L* calculation.
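
    The surrogate idea can be sketched with a small feed-forward network regressing precomputed L* values on driving parameters. The inputs and target function below are synthetic stand-ins, not the actual field-model output or the paper's network architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 20000
# stand-in drivers: e.g. Dst, Kp, solar-wind pressure, plus a radial position
X = rng.uniform([-100, 0, 1, 2], [20, 9, 10, 7], size=(n, 4))
L_star = X[:, 3] * (1.0 - 0.01 * np.abs(X[:, 0]) / (1 + X[:, 2]))  # toy "field model"

X_tr, X_te, y_tr, y_te = train_test_split(X, L_star, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(net.score(X_te, y_te), 4))
```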

  18. Empirical Data Collection and Analysis Using Camtasia and Transana

    ERIC Educational Resources Information Center

    Thorsteinsson, Gisli; Page, Tom

    2009-01-01

    One of the possible techniques for collecting empirical data is video recordings of a computer screen with specific screen capture software. This method for collecting empirical data shows how students use the BSCWII (Be Smart Cooperate Worldwide--a web based collaboration/groupware environment) to coordinate their work and collaborate in…

  19. Predictive time-series modeling using artificial neural networks for Linac beam symmetry: an empirical study.

    PubMed

    Li, Qiongge; Chan, Maria F

    2017-01-01

    Over half of cancer patients receive radiotherapy (RT) as partial or full cancer treatment. Daily quality assurance (QA) of RT in cancer treatment closely monitors the performance of the medical linear accelerator (Linac) and is critical for continuous improvement of patient safety and quality of care. Cumulative longitudinal QA measurements are valuable for understanding the behavior of the Linac and allow physicists to identify trends in the output and take preventive actions. In this study, artificial neural network (ANN) and autoregressive moving average (ARMA) time-series prediction modeling techniques were both applied to 5-year daily Linac QA data. Verification tests and other evaluations were then performed for all models. Preliminary results showed that ANN time-series predictive modeling offers advantages over ARMA techniques for accurate and effective application in the dosimetry and QA field. © 2016 New York Academy of Sciences.
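
    A minimal sketch of such a comparison on a synthetic daily series: an ARMA baseline from statsmodels against a small neural network fed lagged values. The orders, lag depth, and network size are illustrative choices, not those of the study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 1200
t = np.arange(n)
y = 100 + 0.5 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.2, n)  # synthetic QA metric

train, test = y[:1000], y[1000:]

# ARMA(2,1) baseline (note: this forecast is multi-step ahead)
arma = ARIMA(train, order=(2, 0, 1)).fit()
arma_pred = arma.forecast(steps=len(test))

# ANN one-step-ahead prediction from the 7 previous days
lags = 7
X = np.array([y[i - lags:i] for i in range(lags, 1000)])
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(X, train[lags:])
ann_pred = np.array([net.predict(y[i - lags:i][None, :])[0] for i in range(1000, n)])

print("ARMA RMSE:", np.sqrt(np.mean((arma_pred - test) ** 2)).round(3))
print("ANN  RMSE:", np.sqrt(np.mean((ann_pred - test) ** 2)).round(3))
```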

  20. Solar flare ionization in the mesosphere observed by coherent-scatter radar

    NASA Technical Reports Server (NTRS)

    Parker, J. W.; Bowhill, S. A.

    1986-01-01

    The coherent-scatter technique, as used with the Urbana radar, is able to measure relative changes in electron density at one altitude during the progress of a solar flare when that altitude contains a statistically steady turbulent layer. This work describes the analysis of Urbana coherent-scatter data from the times of 13 solar flares in the period from 1978 to 1983. Previous methods of measuring electron density changes in the D-region are summarized. Models of X-ray spectra, photoionization rates, and ion-recombination reaction schemes are reviewed. The coherent-scatter technique is briefly described, and a model is developed which relates changes in scattered power to changes in electron density. An analysis technique is developed using X-ray flux data from geostationary satellites and coherent scatter data from the Urbana radar which empirically distinguishes between proposed D-region ion-chemical schemes, and estimates the nonflare ion-pair production rate.

  1. Structural equation modeling in pediatric psychology: overview and review of applications.

    PubMed

    Nelson, Timothy D; Aylward, Brandon S; Steele, Ric G

    2008-08-01

    To describe the use of structural equation modeling (SEM) in the Journal of Pediatric Psychology (JPP) and to discuss the usefulness of SEM applications in pediatric psychology research. The use of SEM in JPP between 1997 and 2006 was examined and compared to leading journals in clinical psychology, clinical child psychology, and child development. SEM techniques were used in <4% of the empirical articles appearing in JPP between 1997 and 2006. SEM was used less frequently in JPP than in other clinically relevant journals over the past 10 years. However, results indicated a recent increase in JPP studies employing SEM techniques. SEM is an under-utilized class of techniques within pediatric psychology research, although investigations employing these methods are becoming more prevalent. Despite its infrequent use to date, SEM is a potentially useful tool for advancing pediatric psychology research with a number of advantages over traditional statistical methods.

  2. The organization of irrational beliefs in posttraumatic stress symptomology: testing the predictions of REBT theory using structural equation modelling.

    PubMed

    Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel

    2014-01-01

    This study directly tests a central prediction of rational emotive behaviour therapy (REBT) that has received little empirical attention, regarding the core and intermediate beliefs in the development of posttraumatic stress symptoms. A theoretically consistent REBT model of posttraumatic stress disorder (PTSD) was examined using structural equation modelling techniques among a sample of 313 trauma-exposed military and law enforcement personnel. The REBT model of PTSD provided a good fit to the data, χ(2) = 599.173, df = 356, p < .001; root mean square error of approximation = .05 (confidence interval = .04-.05); standardized root mean square residual = .04; comparative fit index = .95; Tucker-Lewis index = .95. Results demonstrated that demandingness beliefs indirectly affected the various symptom groups of PTSD through a set of secondary irrational beliefs that include catastrophizing, low frustration tolerance, and depreciation beliefs. Results were consistent with the predictions of REBT theory and provide strong empirical support that the cognitive variables described by REBT theory are critical cognitive constructs in the prediction of PTSD symptomology. © 2013 Wiley Periodicals, Inc.

  3. Evidence-based hypnotherapy for depression.

    PubMed

    Alladin, Assen

    2010-04-01

    Cognitive hypnotherapy (CH) is a comprehensive evidence-based hypnotherapy for clinical depression. This article describes the major components of CH, which integrate hypnosis with cognitive-behavior therapy as the latter provides an effective host theory for the assimilation of empirically supported treatment techniques derived from various theoretical models of psychotherapy and psychopathology. CH meets criteria for an assimilative model of psychotherapy, which is considered to be an efficacious model of psychotherapy integration. The major components of CH for depression are described in sufficient detail to allow replication, verification, and validation of the techniques delineated. CH for depression provides a template that clinicians and investigators can utilize to study the additive effects of hypnosis in the management of other psychological or medical disorders. Evidence-based hypnotherapy and research are encouraged; such a movement is necessary if clinical hypnosis is to integrate into mainstream psychotherapy.

  4. Torque Transient of Magnetically Driven Flow for Viscosity Measurement

    NASA Technical Reports Server (NTRS)

    Ban, Heng; Li, Chao; Su, Ching-Hua; Lin, Bochuan; Scripa, Rosalia N.; Lehoczky, Sandor L.

    2004-01-01

    Viscosity is a good indicator of structural changes for complex liquids, such as semiconductor melts with chain or ring structures. This paper discusses the theoretical and experimental results of the transient torque technique for non-intrusive viscosity measurement. Such a technique is essential for the high-temperature viscosity measurement of high-pressure and toxic semiconductor melts. In this paper, our previous work on the oscillating cup technique was expanded to the transient process of a magnetically driven melt flow in a damped oscillation system. Based on the analytical solution for the fluid flow and cup oscillation, a semi-empirical model was established to extract the fluid viscosity. The analytical and experimental results indicated that such a technique has the advantages of short measurement time and straightforward data analysis procedures.

  5. Factors influencing suspended solids concentrations in activated sludge settling tanks.

    PubMed

    Kim, Y; Pipes, W O

    1999-05-31

    A significant fraction of the total mass of sludge in an activated sludge process may be in the settling tanks if the sludge has a high sludge volume index (SVI) or when a hydraulic overload occurs during a rainstorm. Under those conditions, an accurate estimate of the amount of sludge in the settling tanks is needed in order to calculate the mean cell residence time or to determine the capacity of the settling tanks to store sludge. Determination of the amount of sludge in the settling tanks requires estimation of the average concentration of suspended solids in the layer of sludge (XSB) in the bottom of the settling tanks. A widely used reference recommends averaging the concentrations of suspended solids in the mixed liquor (X) and in the underflow (Xu) from the settling tanks (XSB = 0.5(X + Xu)). This method does not take into consideration other pertinent information available to an operator. This is a report of a field study which had the objective of developing a more accurate method for estimation of XSB in the bottom of the settling tanks. By correlation analysis, it was found that only 44% of the variation in the measured XSB is related to the sum of X and Xu. XSB is also influenced by the SVI, the zone settling velocity at X, and the overflow and underflow rates of the settling tanks. The method of averaging X and Xu tends to overestimate XSB. A new empirical estimation technique for XSB was developed. The estimation technique uses dimensionless ratios, i.e., the ratio of XSB to Xu, the ratio of the overflow rate to the sum of the underflow rate and the initial settling velocity of the mixed liquor, and sludge compaction expressed as a ratio (dimensionless SVI). The empirical model is compared with the method of averaging X and Xu for the entire range of sludge depths in the settling tanks and for SVI values between 100 and 300 ml/g. Since the empirical model uses dimensionless ratios, the regression parameters are also dimensionless and the model can be readily adopted for other activated sludge processes. A simplified version of the empirical model provides an estimation of XSB as a function of X, Xu, and SVI and can be used by an operator when flow conditions are normal. Copyright 1999 Elsevier Science B.V.
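
    The dimensionless-ratio idea can be illustrated with an ordinary least-squares fit of XSB/Xu on a flow ratio and a dimensionless SVI; the functional form, coefficients, and data below are invented for illustration and are not the paper's calibrated model.

```python
import numpy as np

def fit_dimensionless_model(xsb, xu, qo, qu, vs, svi_ratio):
    """Least-squares fit of XSB/Xu = a + b*Qo/(Qu+Vs) + c*SVI_ratio."""
    y = xsb / xu
    A = np.column_stack([np.ones_like(y), qo / (qu + vs), svi_ratio])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(4)
n = 60
xu = rng.uniform(8000, 12000, n)            # underflow suspended solids, mg/L
qo, qu = rng.uniform(0.5, 2.0, n), rng.uniform(0.3, 1.0, n)   # overflow, underflow rates
vs = rng.uniform(0.5, 3.0, n)               # initial settling velocity term
svi_ratio = rng.uniform(0.3, 1.0, n)        # dimensionless SVI
xsb = xu * (0.2 + 0.3 * qo / (qu + vs) + 0.2 * svi_ratio + rng.normal(0, 0.02, n))

a, b, c = fit_dimensionless_model(xsb, xu, qo, qu, vs, svi_ratio)
print(f"XSB/Xu ~ {a:.2f} + {b:.2f}*Qo/(Qu+Vs) + {c:.2f}*SVI_ratio")
```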

  6. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not considered in the training set used for model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.

  7. Can high resolution topographic surveys provide reliable grain size estimates?

    NASA Astrophysics Data System (ADS)

    Pearson, Eleanor; Smith, Mark; Klaar, Megan; Brown, Lee

    2017-04-01

    High resolution topographic surveys contain a wealth of information that is not always exploited in the generation of Digital Elevation Models (DEMs). In particular, several authors have related sub-grid scale topographic variability (or 'surface roughness') to particle grain size by deriving empirical relationships between the two. Such relationships would permit rapid analysis of the spatial distribution of grain size over entire river reaches, providing data to drive distributed hydraulic models and revolutionising monitoring of river restoration projects. However, comparison of previous roughness-grain-size relationships shows substantial variability between field sites and does not take into account differences in patch-scale facies. This study explains this variability by identifying the factors that influence roughness-grain-size relationships. Using 275 laboratory and field-based Structure-from-Motion (SfM) surveys, we investigate the influence of: inherent survey error; irregularity of natural gravels; particle shape; grain packing structure; sorting; and form roughness on roughness-grain-size relationships. A suite of empirical relationships is presented in the form of a decision tree which improves estimations of grain size. Results indicate that the survey technique itself is capable of providing accurate grain size estimates. By accounting for differences in patch facies, R2 was seen to improve from 0.769 to R2 > 0.9 for certain facies. However, at present, the method is unsuitable for poorly sorted gravel patches. In future, a combination of a surface roughness proxy with photosieving techniques using SfM-derived orthophotos may offer improvements on using either technique individually.
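
    A minimal sketch of a facies-aware roughness-to-grain-size calibration: fit a separate linear relation per facies and compare the fits. The facies labels, roughness values, and grain sizes below are synthetic illustrations, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)
facies = np.repeat(["imbricated", "openwork"], 50)     # hypothetical facies labels
sigma_z = rng.uniform(0.005, 0.05, 100)                # sub-grid roughness, m
d50 = np.where(facies == "imbricated", 1.6, 2.2) * sigma_z + rng.normal(0, 0.002, 100)

for f in np.unique(facies):
    m = facies == f
    b, a = np.polyfit(sigma_z[m], d50[m], 1)           # slope, intercept
    r2 = np.corrcoef(sigma_z[m], d50[m])[0, 1] ** 2
    print(f"{f:>11}: D50 ~ {a:.4f} + {b:.2f} * sigma_z  (R^2 = {r2:.2f})")
```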

  8. Impact of orbit modeling on DORIS station position and Earth rotation estimates

    NASA Astrophysics Data System (ADS)

    Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav

    2014-04-01

    The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in the cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting the SRP scaling parameter or fixing it to pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, yields accuracy comparable to the dynamical model that employs precise non-conservative force modeling for most of the monitored station parameters. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for the x-pole and 12% for the y-pole. The experiments show that adjusting atmospheric drag scaling parameters every 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of a cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data, however, it was not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series, or its mitigation by fixing the SRP parameters to pre-defined values.

  9. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

    Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient’s pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (“SBM”) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology or “QCP”), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient’s physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient’s condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology in identifying clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree with the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe that would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
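
    The auto-associative, kernel-weighted estimation idea behind this kind of similarity-based model can be sketched generically as below; this is not the proprietary SBM algorithm, and the vital-sign exemplars are synthetic.

```python
import numpy as np

def expected_state(x, exemplars, bandwidth=1.0):
    """Kernel-weighted estimate of the multivariate state x from reference exemplars (rows)."""
    d2 = np.sum((exemplars - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return w @ exemplars / w.sum()

rng = np.random.default_rng(6)
# reference data: heart rate, mean arterial pressure, SpO2 under normal variation
normal = np.column_stack([rng.normal(75, 5, 500),
                          rng.normal(90, 6, 500),
                          rng.normal(98, 0.7, 500)])

# a "slowly deteriorating" observation whose individual values still look normal
obs = np.array([88.0, 80.0, 96.5])
est = expected_state(obs, normal, bandwidth=5.0)
print("observed :", obs)
print("estimated:", est.round(1))
print("residuals:", (obs - est).round(1))   # joint departure despite "normal" values
```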

  10. Analytical transmissibility based transfer path analysis for multi-energy-domain systems using four-pole parameter theory

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Mohammad Jalali; Behdinan, Kamran

    2017-10-01

    The increasing demand to minimize undesired vibration and noise levels in several high-tech industries has generated a renewed interest in vibration transfer path analysis. Analyzing vibration transfer paths within a system is of crucial importance in designing an effective vibration isolation strategy. Most of the existing vibration transfer path analysis techniques are empirical, which makes them suitable for diagnosis and troubleshooting purposes. The lack of an analytical transfer path analysis that can be used in the design stage is the main motivation behind this research. In this paper an analytical transfer path analysis based on the four-pole theory is proposed for multi-energy-domain systems. The bond graph modeling technique, an effective approach to modeling multi-energy-domain systems, is used to develop the system model. In this paper an electro-mechanical system is used as a benchmark example to elucidate the effectiveness of the proposed technique. An algorithm to obtain the equivalent four-pole representation of a dynamical system based on the corresponding bond graph model is also presented in this paper.

  11. Cooperative Factors, Cooperative Innovation Effect and Innovation Performance for Chinese Firms: an Empirical Study

    NASA Astrophysics Data System (ADS)

    Xie, Xuemei

    Based on a survey of 1206 Chinese firms, this paper empirically explores the factors impacting the cooperative innovation effect of firms, and seeks to explore the relationship between cooperative innovation effect (CIE) and innovation performance using the technique of Structural Equation Modeling (SEM). The study finds significant positive relationships between basic sustaining factors, factors of government and policy, factors of cooperation mechanism and social network, and cooperative innovation effect. However, the results reveal that factors of government and policy demonstrate little impact on the CIE of firms compared with other factors. It is hoped that the findings can pave the way for future studies on improving cooperative innovation capacity for firms in emerging countries.

  12. Soil Moisture Estimate under Forest using a Semi-empirical Model at P-Band

    NASA Astrophysics Data System (ADS)

    Truong-Loi, M.; Saatchi, S.; Jaruwatanadilok, S.

    2013-12-01

    In this paper we show the potential of a semi-empirical algorithm to retrieve soil moisture under forests using P-band polarimetric SAR data. In past decades, several remote sensing techniques have been developed to estimate the surface soil moisture. In most studies associated with radar sensing of soil moisture, the proposed algorithms are focused on bare or sparsely vegetated surfaces where the effect of vegetation can be ignored. At long wavelengths such as L-band, empirical or physical models such as the Small Perturbation Model (SPM) provide reasonable estimates of surface soil moisture at depths of 0-5 cm. However, for densely covered vegetated surfaces such as forests, the problem becomes more challenging because the vegetation canopy is a complex scattering environment. For this reason there have been only few studies focusing on retrieving soil moisture under vegetation canopy in the literature. Moghaddam et al. developed an algorithm to estimate soil moisture under a boreal forest using L- and P-band SAR data. For their study area, double-bounce between trunks and ground appears to be the most important scattering mechanism. Accordingly, they implemented parametric models of radar backscatter for double-bounce using simulations of a numerical forest scattering model. Hajnsek et al. showed the potential of estimating soil moisture under agricultural vegetation using L-band polarimetric SAR data and polarimetric-decomposition techniques to remove the vegetation layer. Here we use an approach based on a physical formulation of dominant scattering mechanisms and three parameters that integrate the vegetation and soil effects at long wavelengths. The algorithm is a simplification of a 3-D coherent model of forest canopy based on the Distorted Born Approximation (DBA). The simplified model has three equations and three unknowns, preserving the three dominant scattering mechanisms of volume, double-bounce and surface for three polarized backscattering coefficients: σHH, σVV and σHV. The inversion process, which is not an ill-posed problem, uses the non-linear optimization method of Levenberg-Marquardt and estimates the three model parameters: vegetation aboveground biomass, average soil moisture and surface roughness. The model's analytical formulation will first be recalled and sensitivity analyses will be shown. Then some results obtained with real SAR data will be presented and compared to ground estimates.
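
    A minimal sketch of the inversion machinery described above: a toy three-channel forward model with three unknowns (biomass, soil moisture, roughness) inverted by Levenberg-Marquardt. The forward model is invented for illustration and is not the simplified DBA model of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(params):
    """Toy backscatter model: three channels, three unknowns (not the paper's DBA model)."""
    biomass, mv, s = params
    hv = 0.030 * biomass                                   # "volume" channel
    vv = 0.015 * biomass + 0.80 * mv                       # "surface" channel
    hh = 0.020 * biomass + 0.50 * mv * (1 - np.exp(-s))    # "double-bounce" channel
    return np.array([hh, vv, hv])

true = np.array([120.0, 0.25, 0.8])    # biomass, soil moisture, roughness (toy units)
rng = np.random.default_rng(7)
observed = forward(true) * (1 + 0.005 * rng.normal(size=3))

sol = least_squares(lambda p: forward(p) - observed, x0=[50.0, 0.10, 0.5], method="lm")
print("retrieved [biomass, soil moisture, roughness]:", sol.x.round(3))
```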

  13. Computation of bedrock-aquifer recharge in northern Westchester County, New York, and chemical quality of water from selected bedrock wells

    USGS Publications Warehouse

    Wolcott, Stephen W.; Snow, Robert F.

    1995-01-01

    An empirical technique was used to calculate the recharge to bedrock aquifers in northern Westchester County. This method requires delineation of ground-water divides within the aquifer area and values for (1) the extent of till and exposed bedrock within the aquifer area, and (2) mean annual runoff. This report contains maps and data needed for calculation of recharge in any given area within the 165-square-mile study area. Recharge was computed by this technique for a 93-square-mile part of the study area, and a ground-water-flow model was used to evaluate the reliability of the method. A two-layer, steady-state model of the selected area was calibrated. The area consists predominantly of bedrock overlain by small localized deposits of till and stratified drift. Ground-water-level and streamflow data collected in mid-November 1987 were used for model calibration. The data set approximates average annual conditions. The model was calibrated from (1) estimates of recharge as computed through the empirical technique, and (2) a range of values for hydrologic properties derived from aquifer tests and published literature. Recharge values used for model simulation appear to be reasonable for average steady-state conditions. Water-quality data were collected from 53 selected bedrock wells throughout northern Westchester County to define the background ground-water quality. The constituents and properties for which samples were analyzed included major cations and anions, temperature, pH, specific conductance, and hardness. Results indicate little difference in water quality among the bedrock aquifers within the study area. Ground water is mainly the calcium-bicarbonate type and is moderately hard. Average concentrations of sodium, sulfate, chloride, nitrate, iron, and manganese were within acceptable limits established by the U.S. Environmental Protection Agency for domestic water supply.

  14. Empirical simulations of materials

    NASA Astrophysics Data System (ADS)

    Jogireddy, Vasantha

    2011-12-01

    Molecular dynamics is a specialized discipline of molecular modelling and computer simulation techniques. In this work, we first present simulation results from a study carried out on silicon nanowires. In the second part of the work, we present an electrostatic screened Coulomb potential developed for studying metal alloys and metal oxides. In particular, we have studied aluminum-copper alloys, aluminum oxides and copper oxides. Parameter optimization for the potential is done using multiobjective optimization algorithms.

  15. Response of space shuttle insulation panels to acoustic noise pressure

    NASA Technical Reports Server (NTRS)

    Vaicaitis, R.

    1976-01-01

    The response of reusable space shuttle insulation panels to random acoustic pressure fields is studied. The basic analytical approach in formulating the governing equations of motion uses a Rayleigh-Ritz technique. The input pressure field is modeled as a stationary Gaussian random process for which the cross-spectral density function is known empirically from experimental measurements. The response calculations are performed in both the frequency and time domains.

  16. A scale-based approach to interdisciplinary research and expertise in sports.

    PubMed

    Ibáñez-Gijón, Jorge; Buekers, Martinus; Morice, Antoine; Rao, Guillaume; Mascret, Nicolas; Laurin, Jérome; Montagne, Gilles

    2017-02-01

    After more than 20 years since the introduction of ecological and dynamical approaches in sports research, their promising opportunity for interdisciplinary research has not been fulfilled yet. The complexity of the research process and the theoretical and empirical difficulties associated with an integrated ecological-dynamical approach have been the major factors hindering the generalisation of interdisciplinary projects in sports sciences. To facilitate this generalisation, we integrate the major concepts from the ecological and dynamical approaches to study behaviour as a multi-scale process. Our integration gravitates around the distinction between functional (ecological) and execution (organic) scales, and their reciprocal intra- and inter-scale constraints. We propose an (epistemological) scale-based definition of constraints that accounts for the concept of synergies as emergent coordinative structures. To illustrate how we can operationalise the notion of multi-scale synergies we use an interdisciplinary model of locomotor pointing. To conclude, we show the value of this approach for interdisciplinary research in sport sciences, as we discuss two examples of task-specific dimensionality reduction techniques in the context of an ongoing project that aims to unveil the determinants of expertise in basketball free throw shooting. These techniques provide relevant empirical evidence to help bootstrap the challenging modelling efforts required in sport sciences.

  17. Learning temporal rules to forecast instability in continuously monitored patients

    PubMed Central

    Dubrawski, Artur; Wang, Donghan; Hravnak, Marilyn; Clermont, Gilles; Pinsky, Michael R

    2017-01-01

    Inductive machine learning, and in particular extraction of association rules from data, has been successfully used in multiple application domains, such as market basket analysis, disease prognosis, fraud detection, and protein sequencing. The appeal of rule extraction techniques stems from their ability to handle intricate problems yet produce models based on rules that can be comprehended by humans, and are therefore more transparent. Human comprehension is a factor that may improve adoption and use of data-driven decision support systems clinically via face validity. In this work, we explore whether we can reliably and informatively forecast cardiorespiratory instability (CRI) in step-down unit (SDU) patients utilizing data from continuous monitoring of physiologic vital sign (VS) measurements. We use a temporal association rule extraction technique in conjunction with a rule fusion protocol to learn how to forecast CRI in continuously monitored patients. We detail our approach and present and discuss encouraging empirical results obtained using continuous multivariate VS data from the bedside monitors of 297 SDU patients spanning 29 346 hours (3.35 patient-years) of observation. We present example rules that have been learned from data to illustrate potential benefits of comprehensibility of the extracted models, and we analyze the empirical utility of each VS as a potential leading indicator of an impending CRI event. PMID:27274020

  18. Coupling of ab initio density functional theory and molecular dynamics for the multiscale modeling of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Ng, T. Y.; Yeak, S. H.; Liew, K. M.

    2008-02-01

    A multiscale technique is developed that couples empirical molecular dynamics (MD) and ab initio density functional theory (DFT). An overlap handshaking region between the empirical MD and ab initio DFT regions is formulated and the interaction forces between the carbon atoms are calculated based on the second-generation reactive empirical bond order potential, the long-range Lennard-Jones potential as well as the quantum-mechanical DFT derived forces. A density of point algorithm is also developed to track all interatomic distances in the system, and to activate and establish the DFT and handshaking regions. Through parallel computing, this multiscale method is used here to study the dynamic behavior of single-walled carbon nanotubes (SWCNTs) under asymmetrical axial compression. The detection of sideways buckling due to the asymmetrical axial compression is reported and discussed. It is noted from this study on SWCNTs that the MD results may be stiffer compared to those with electron density considerations, i.e. first-principle ab initio methods.

  19. Benchmarking test of empirical root water uptake models

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman

    2017-01-01

    Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to those of the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
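
    For reference, the sketch below implements a standard Feddes-type scheme: potential transpiration partitioned by root fraction and reduced by a piecewise-linear stress function. The pressure-head thresholds and layer data are illustrative values, not the benchmark scenarios of the study.

```python
import numpy as np

def feddes_alpha(h, h1=-1.0, h2=-25.0, h3=-400.0, h4=-8000.0):
    """Piecewise-linear reduction factor for pressure head h (cm, negative down to wilting)."""
    h = np.asarray(h, dtype=float)
    alpha = np.zeros_like(h)
    rising = (h <= h1) & (h > h2)          # too wet: ramps from 0 at h1 to 1 at h2
    optimal = (h <= h2) & (h >= h3)        # no stress
    falling = (h < h3) & (h > h4)          # drying: ramps from 1 at h3 to 0 at h4
    alpha[rising] = (h1 - h[rising]) / (h1 - h2)
    alpha[optimal] = 1.0
    alpha[falling] = (h[falling] - h4) / (h3 - h4)
    return alpha

# uncompensated uptake per layer: S_i = alpha(h_i) * root_fraction_i * Tp
h = np.array([-50.0, -300.0, -1500.0, -6000.0])    # pressure head per soil layer
root_frac = np.array([0.4, 0.3, 0.2, 0.1])         # normalized root density
Tp = 5.0                                           # potential transpiration, mm/d
S = feddes_alpha(h) * root_frac * Tp
print("uptake per layer (mm/d):", S.round(3), "| total:", S.sum().round(3))
```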

  20. Knowledge discovery in cardiology: A systematic literature review.

    PubMed

    Kadi, I; Idri, A; Fernandez-Aleman, J L

    2017-01-01

    Data mining (DM) provides the methodology and technology needed to transform huge amounts of data into useful information for decision making. It is a powerful process employed to extract knowledge and discover new patterns embedded in large data sets. Data mining has been increasingly used in medicine, particularly in cardiology. In fact, DM applications can greatly benefit all those involved in cardiology, such as patients, cardiologists and nurses. The purpose of this paper is to review papers concerning the application of DM techniques in cardiology so as to summarize and analyze evidence regarding: (1) the DM techniques most frequently used in cardiology; (2) the performance of DM models in cardiology; (3) comparisons of the performance of different DM models in cardiology. We performed a systematic literature review of empirical studies on the application of DM techniques in cardiology published in the period between 1 January 2000 and 31 December 2015. A total of 149 articles published between 2000 and 2015 were selected, studied and analyzed according to the following criteria: DM techniques and performance of the approaches developed. The results obtained showed that a significant number of the studies selected used classification and prediction techniques when developing DM models. Neural networks, decision trees and support vector machines were identified as being the techniques most frequently employed when developing DM models in cardiology. Moreover, neural networks and support vector machines achieved the highest accuracy rates and were proved to be more efficient than other techniques. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Regional TEC model under quiet geomagnetic conditions and low-to-moderate solar activity based on CODE GIMs

    NASA Astrophysics Data System (ADS)

    Feng, Jiandi; Jiang, Weiping; Wang, Zhengtao; Zhao, Zhenzhen; Nie, Linjuan

    2017-08-01

    Global empirical total electron content (TEC) models based on TEC maps effectively describe the average behavior of the ionosphere. However, the accuracy of these global models for a certain region may not be ideal. Due to the number and distribution of the International GNSS Service (IGS) stations, the accuracy of TEC maps varies geographically. The modeling database derived from global TEC maps of varying accuracy is likely one of the main reasons that limits the accuracy of the new models. Moreover, many anomalies in the ionosphere are geographically or geomagnetically dependent, and as such the accuracy of global models can deteriorate if these anomalies are not fully incorporated into the modeling approach. For regional models built over small areas, these influences on modeling are greatly weakened. Thus, regional TEC models may better reflect the temporal and spatial variations of TEC. In our previous work (Feng et al., 2016), a regional TEC model, TECM-NEC, was proposed for northeast China. However, this model is only directed at the typical region of Mid-latitude Summer Nighttime Anomaly (MSNA) occurrence and is not applicable in regions without MSNA. Following the technique of the TECM-NEC model, this study proposes another regional empirical TEC model for other regions in mid-latitudes. Taking a small area, the BeiJing-TianJin-Tangshan (JJT) region (37.5°-42.5° N, 115°-120° E) in China, as an example, a regional empirical TEC model (TECM-JJT) is proposed using the TEC grid data from January 1, 1999 to June 30, 2015 provided by the Center for Orbit Determination in Europe (CODE) under quiet geomagnetic conditions. The TECM-JJT model fits the input CODE TEC data with a bias of 0.11 TECU and a root mean square error of 3.26 TECU. Results show that the regional model TECM-JJT is consistent with CODE TEC data and GPS-TEC data.

  2. A novel time of arrival estimation algorithm using an energy detector receiver in MMW systems

    NASA Astrophysics Data System (ADS)

    Liang, Xiaolin; Zhang, Hao; Lyu, Tingting; Xiao, Han; Gulliver, T. Aaron

    2017-12-01

    This paper presents a new time of arrival (TOA) estimation technique using an improved energy detection (ED) receiver based on empirical mode decomposition (EMD) in an impulse radio (IR) 60 GHz millimeter wave (MMW) system. A threshold is determined by analyzing the characteristics of the received energy values with an extreme learning machine (ELM). The effect of the channel and integration period on the TOA estimation is evaluated. Several well-known ED-based TOA algorithms are compared with the proposed technique. It is shown that this ELM-based technique has lower TOA estimation error than other approaches and provides robust performance with the IEEE 802.15.3c channel models.

  3. Understanding and Modeling Teams As Dynamical Systems

    PubMed Central

    Gorman, Jamie C.; Dunbar, Terri A.; Grimm, David; Gipson, Christina L.

    2017-01-01

    By its very nature, much of teamwork is distributed across, and not stored within, interdependent people working toward a common goal. In this light, we advocate a systems perspective on teamwork that is based on general coordination principles that are not limited to cognitive, motor, and physiological levels of explanation within the individual. In this article, we present a framework for understanding and modeling teams as dynamical systems and review our empirical findings on teams as dynamical systems. We proceed by (a) considering the question of why study teams as dynamical systems, (b) considering the meaning of dynamical systems concepts (attractors; perturbation; synchronization; fractals) in the context of teams, (c) describing empirical studies of team coordination dynamics at the perceptual-motor, cognitive-behavioral, and cognitive-neurophysiological levels of analysis, and (d) considering the theoretical and practical implications of this approach, including new kinds of explanations of human performance and real-time analysis and performance modeling. Throughout our discussion of these topics we consider how to describe teamwork using equations and/or modeling techniques that describe the dynamics. Finally, we consider what dynamical equations and models do and do not tell us about human performance in teams and suggest future research directions in this area. PMID:28744231

  4. Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.

    2017-12-01

    We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.

  5. SEMI-EMPIRICAL MODELING OF THE PHOTOSPHERE, CHROMOSPHERE, TRANSITION REGION, AND CORONA OF THE M-DWARF HOST STAR GJ 832

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontenla, J. M.; Linsky, Jeffrey L.; Witbrod, Jesse

    Stellar radiation from X-rays to the visible provides the energy that controls the photochemistry and mass loss from exoplanet atmospheres. The important extreme ultraviolet (EUV) region (10–91.2 nm) is inaccessible and should be computed from a reliable stellar model. It is essential to understand the formation regions and physical processes responsible for the various stellar emission features to predict how the spectral energy distribution varies with age and activity levels. We compute a state-of-the-art semi-empirical atmospheric model and the emergent high-resolution synthetic spectrum of the moderately active M2 V star GJ 832 as the first of a series of models for stars with different activity levels. We construct a one-dimensional simple model for the physical structure of the star’s chromosphere, chromosphere-corona transition region, and corona using non-LTE radiative transfer techniques and many molecular lines. The synthesized spectrum for this model fits the continuum and lines across the UV-to-optical spectrum. Particular emphasis is given to the emission lines at wavelengths that are shorter than 300 nm observed with the Hubble Space Telescope, which have important effects on the photochemistry of the exoplanet atmospheres. The FUV line ratios indicate that the transition region of GJ 832 is more biased to hotter material than that of the quiet Sun. The excellent agreement of our computed EUV luminosity with that obtained by two other techniques indicates that our model predicts reliable EUV emission from GJ 832. We find that the unobserved EUV flux of GJ 832, which heats the outer atmospheres of exoplanets and drives their mass loss, is comparable to the active Sun.

  6. Performance Analysis of Garbage Collection and Dynamic Reordering in a Lisp System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Llames, Rene Lim

    1991-01-01

    Generation-based garbage collection and dynamic reordering of objects are two techniques for improving the efficiency of memory management in Lisp and similar dynamic language systems. An analysis of the effect of generation configuration is presented, focusing on the effect of the number of generations and generation capacities. Analytic timing and survival models are used to represent garbage collection runtime and to derive structural results on its behavior. The survival model provides bounds on the age of objects surviving a garbage collection at a particular level. Empirical results show that execution time is most sensitive to the capacity of the youngest generation. A technique called scanning for transport statistics, for evaluating the effectiveness of reordering independent of main memory size, is presented.

  7. Transformational leadership in the consumer service workgroup: competing models of job satisfaction, change commitment, and cooperative conflict resolution.

    PubMed

    Yang, Yi-Feng

    2014-02-01

    This paper discusses the effects of transformational leadership on cooperative conflict resolution (management) by evaluating several alternative models related to the mediating role of job satisfaction and change commitment. Samples of data from customer service personnel in Taiwan were analyzed. Based on the bootstrap sample technique, an empirical study was carried out to yield the best fitting model. The procedure of hierarchical nested model analysis was used, incorporating the methods of bootstrapping mediation, PRODCLIN2, and structural equation modeling (SEM) comparison. The analysis suggests that leadership that promotes integration (change commitment) and provides inspiration and motivation (job satisfaction), in the proper order, creates the means for cooperative conflict resolution.
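
    The bootstrapped mediation step can be sketched as below: estimate the indirect effect a·b through a mediator and form a percentile bootstrap confidence interval. The variable names and data are synthetic illustrations, not the study's survey measures.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 300
leadership = rng.normal(size=n)
satisfaction = 0.5 * leadership + rng.normal(size=n)               # mediator
resolution = 0.4 * satisfaction + 0.1 * leadership + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                                     # x -> m slope
    # effect of m on y controlling for x (first coefficient of [m, x, 1] regression)
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]), y, rcond=None)[0][0]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(leadership[idx], satisfaction[idx], resolution[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(leadership, satisfaction, resolution):.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```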

  8. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated to the flow around an airfoil and its acoustic footprint. We could obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
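
    The standard single-basis DEIM index selection that the nested variant generalizes can be sketched as below on a POD basis of synthetic nonlinear-term snapshots; the multi-basis extension of the paper is not reproduced.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM interpolation points for a column-orthonormal basis U."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for k in range(1, m):
        # coefficients that match the current basis at the already-selected points
        c = np.linalg.solve(U[idx, :k], U[idx, k])
        r = U[:, k] - U[:, :k] @ c                # residual of the next mode
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

x = np.linspace(0, 1, 400)
snapshots = np.array([np.exp(np.sin(2 * np.pi * (x - t)))
                      for t in np.linspace(0, 1, 60)]).T   # synthetic nonlinear term

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)    # POD basis
m = 10
P = deim_indices(U[:, :m])
print("DEIM interpolation indices:", P)

# approximate a new snapshot from its values at the m selected points only
f = np.exp(np.sin(2 * np.pi * (x - 0.37)))
f_deim = U[:, :m] @ np.linalg.solve(U[P, :m], f[P])
print("max DEIM error:", np.max(np.abs(f - f_deim)))
```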

  9. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY11 Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulisek, Jonathan A.; Anderson, Kevin K.; Bowyer, Sonya M.

    2011-09-30

    Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 PNNL analysis and algorithm development. Progress made by PNNL in FY2011 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model accounts for self-shielding effects using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the true self-shielding functions of the used fuel assembly models. The potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space was demonstrated. Also, in FY2011, PNNL continued to develop an analytical model. Such efforts included the addition of six more non-fissile absorbers in the analytical shielding function and accounting for the non-uniformity of the neutron flux across the LSDS assay chamber. A hybrid analytical-empirical approach was developed to determine the mass of total Pu (the sum of the masses of 239Pu, 240Pu, and 241Pu), which is an important quantity in safeguards. Results using this hybrid method were of approximately the same accuracy as the pure empirical approach. In addition, total Pu was determined with much better accuracy by the hybrid approach than by the pure analytical approach. In FY2012, PNNL will continue efforts to optimize its empirical model and minimize its reliance on calibration data. In addition, PNNL will continue to develop an analytical model, considering effects such as neutron scattering in the fuel and cladding, as well as neutrons streaming through gaps between fuel pins in the fuel assembly.

  10. Testing adaptive toolbox models: a Bayesian hierarchical approach.

    PubMed

    Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan

    2013-01-01

    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.

  11. Defying Intuition: Demonstrating the Importance of the Empirical Technique.

    ERIC Educational Resources Information Center

    Kohn, Art

    1992-01-01

    Describes a classroom activity featuring a simple stay-switch probability game. Contends that the exercise helps students see the importance of empirically validating beliefs. Includes full instructions for conducting and discussing the exercise. (CFR)

  12. UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.

    PubMed

    Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun

    2013-12-01

    Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complexity of the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
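    As background, a bare-bones NMF topic model with standard multiplicative updates is sketched below (plain NumPy on a random placeholder matrix; UTOPIAN's semi-supervised, interactive formulation adds reference matrices and weights that are omitted here):

      import numpy as np

      def nmf(V, k, iters=200, eps=1e-9):
          """Factorize a nonnegative term-document matrix V into W (terms x topics) and H (topics x docs)."""
          rng = np.random.default_rng(0)
          W = rng.random((V.shape[0], k))
          H = rng.random((k, V.shape[1]))
          for _ in range(iters):
              H *= (W.T @ V) / (W.T @ W @ H + eps)     # multiplicative updates for the
              W *= (V @ H.T) / (W @ H @ H.T + eps)     # Frobenius-norm objective ||V - WH||^2
          return W, H

      V = np.random.default_rng(1).random((2000, 100))  # placeholder term-document matrix
      W, H = nmf(V, k=8)
      print("top term indices of topic 0:", np.argsort(W[:, 0])[::-1][:10])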

  13. Multi-Spacecraft 3D differential emission measure tomography of the solar corona: STEREO results.

    NASA Astrophysics Data System (ADS)

    Vásquez, A. M.; Frazin, R. A.

    We have recently developed a novel technique (called DEMT) for the empirical determination of the three-dimensional (3D) distribution of the solar corona differential emission measure through multi-spacecraft solar rotational tomography of extreme-ultraviolet (EUV) image time series (like those provided by EIT/SOHO and EUVI/STEREO). The technique allows, for the first time, the development of global 3D empirical maps of the coronal electron temperature and density, in the height range 1.0 to 1.25 RS. DEMT constitutes a simple and powerful 3D analysis tool that obviates the need for structure-specific modeling.

  14. Integrating Empirical-Modeling Approaches to Improve Understanding of Terrestrial Ecology Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCarthy, Heather; Luo, Yiqi; Wullschleger, Stan D

    Recent decades have seen tremendous increases in the quantity of empirical ecological data collected by individual investigators, as well as through research networks such as FLUXNET (Baldocchi et al., 2001). At the same time, advances in computer technology have facilitated the development and implementation of large and complex land surface and ecological process models. Separately, each of these information streams provides useful, but imperfect information about ecosystems. To develop the best scientific understanding of ecological processes, and most accurately predict how ecosystems may cope with global change, integration of empirical and modeling approaches is necessary. However, true integration - in which models inform empirical research, which in turn informs models (Fig. 1) - is not yet common in ecological research (Luo et al., 2011). The goal of this workshop, sponsored by the Department of Energy, Office of Science, Biological and Environmental Research (BER) program, was to bring together members of the empirical and modeling communities to exchange ideas and discuss scientific practices for increasing empirical - model integration, and to explore infrastructure and/or virtual network needs for institutionalizing empirical - model integration (Yiqi Luo, University of Oklahoma, Norman, OK, USA). The workshop included presentations and small group discussions that covered topics ranging from model-assisted experimental design to data driven modeling (e.g. benchmarking and data assimilation) to infrastructure needs for empirical - model integration. Ultimately, three central questions emerged. How can models be used to inform experiments and observations? How can experimental and observational results be used to inform models? What are effective strategies to promote empirical - model integration?

  15. GPS-Based Reduced Dynamic Orbit Determination Using Accelerometer Data

    NASA Technical Reports Server (NTRS)

    VanHelleputte, Tom; Visser, Pieter

    2007-01-01

    Currently two gravity field satellite missions, CHAMP and GRACE, are equipped with high sensitivity electrostatic accelerometers, measuring the non-conservative forces acting on the spacecraft in three orthogonal directions. During the gravity field recovery these measurements help to separate gravitational and non-gravitational contributions in the observed orbit perturbations. For precise orbit determination purposes all these missions have a dual-frequency GPS receiver on board. The reduced dynamic technique combines the dense and accurate GPS observations with physical models of the forces acting on the spacecraft, complemented by empirical accelerations, which are stochastic parameters adjusted in the orbit determination process. When the spacecraft carries an accelerometer, these measured accelerations can be used to replace the models of the non-conservative forces, such as air drag and solar radiation pressure. This approach is implemented in a batch least-squares estimator of the GPS High Precision Orbit Determination Software Tools (GHOST), developed at DLR/GSOC and DEOS. It is extensively tested with data of the CHAMP and GRACE satellites. As accelerometer observations typically can be affected by an unknown scale factor and bias in each measurement direction, they require calibration during processing. Therefore the estimated state vector is augmented with six parameters: a scale and bias factor for the three axes. In order to converge efficiently to a good solution, reasonable a priori values for the bias factor are necessary. These are calculated by combining the mean value of the accelerometer observations with the mean value of the non-conservative force models and empirical accelerations, estimated when using these models. When replacing the non-conservative force models with accelerometer observations and still estimating empirical accelerations, a good orbit precision is achieved. 100 days of GRACE B data processing results in a mean orbit fit of a few centimeters with respect to high-quality JPL reference orbits. This shows a slightly better consistency compared to the case when using force models. A purely dynamic orbit, estimated without empirical accelerations and thus adjusting only the six state parameters and the bias and scale factors, gives an orbit fit for the GRACE B test case below the decimeter level. The in-orbit calibrated accelerometer observations can be used to validate the modelled accelerations and estimated empirical accelerations computed with the GHOST tools. In the along-track direction they show the best agreement, with a mean correlation coefficient of 93% for the same period. In the radial and normal directions the correlation is smaller. During days of high solar activity the benefit of using accelerometer observations is clearly visible. The observations during these days show fluctuations which the modelled and empirical accelerations cannot follow.
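    The per-axis calibration model described above (measured acceleration approximately equal to scale times modelled acceleration plus bias) can be hedged into a toy least-squares sketch with made-up numbers; the actual GHOST estimator adjusts these factors jointly with the orbit state:

      import numpy as np

      rng = np.random.default_rng(2)
      t = np.linspace(0.0, 86400.0, 1440)                   # one day at 60 s sampling
      a_model = 1e-7 * np.sin(2 * np.pi * t / 5400.0)       # modelled non-conservative acceleration (m/s^2)

      scale_true, bias_true = 0.92, 3e-8                    # unknown instrument scale and bias
      a_meas = scale_true * a_model + bias_true + 5e-9 * rng.standard_normal(t.size)

      A = np.column_stack([a_model, np.ones_like(t)])       # a_meas ~ scale * a_model + bias
      (scale_est, bias_est), *_ = np.linalg.lstsq(A, a_meas, rcond=None)
      print(f"a priori scale = {scale_est:.3f}, bias = {bias_est:.2e} m/s^2")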

  16. Estimation of a super-resolved PSF for the data reduction of undersampled stellar observations. Deriving an accurate model for fitting photometry with Corot space telescope

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, L.; Auvergne, M.; Toublanc, D.; Rowe, J.; Kuschnig, R.; Matthews, J.

    2006-06-01

    Context: Fitting photometry algorithms can be very effective provided that an accurate model of the instrumental point spread function (PSF) is available. When high-precision time-resolved photometry is required, however, the use of point-source star images as empirical PSF models can be unsatisfactory, due to the limits in their spatial resolution. Theoretically-derived models, on the other hand, are limited by the unavoidable assumption of simplifying hypotheses, while the use of analytical approximations is restricted to regularly-shaped PSFs. Aims: This work investigates an innovative technique for space-based fitting photometry, based on the reconstruction of an empirical but properly-resolved PSF. The aim is the exploitation of arbitrary star images, including those produced under intentional defocus. The cases of both MOST and COROT, the first space telescopes dedicated to time-resolved stellar photometry, are considered in the evaluation of the effectiveness and performance of the proposed methodology. Methods: PSF reconstruction is based on a set of star images, periodically acquired and presenting relative subpixel displacements due to motion of the acquisition system, in this case the jitter of the satellite attitude. Higher resolution is achieved through the solution of the inverse problem. The approach can be regarded as a special application of super-resolution techniques, though a specialised procedure is proposed to better address the specificities of the PSF determination problem. The application of such a model to fitting photometry is illustrated by numerical simulations for COROT and on a complete set of observations from MOST. Results: We verify that, in both scenarios, significantly better resolved PSFs can be estimated, leading to corresponding improvements in photometric results. For COROT, indeed, subpixel reconstruction enabled the successful use of fitting algorithms despite its rather complex PSF profile, which could hardly be modeled otherwise. For MOST, whose direct-imaging PSF is more conventional, comparisons to other models and photometry techniques were carried out and confirmed the potential of PSF reconstruction in real observational conditions.

  17. Body Topography Parcellates Human Sensory and Motor Cortex.

    PubMed

    Kuehn, Esther; Dinse, Juliane; Jakobsen, Estrid; Long, Xiangyu; Schäfer, Andreas; Bazin, Pierre-Louis; Villringer, Arno; Sereno, Martin I; Margulies, Daniel S

    2017-07-01

    The cytoarchitectonic map as proposed by Brodmann currently dominates models of human sensorimotor cortical structure, function, and plasticity. According to this model, primary motor cortex, area 4, and primary somatosensory cortex, area 3b, are homogeneous areas, with the major division lying between the two. Accumulating empirical and theoretical evidence, however, has begun to question the validity of the Brodmann map for various cortical areas. Here, we combined in vivo cortical myelin mapping with functional connectivity analyses and topographic mapping techniques to reassess the validity of the Brodmann map in human primary sensorimotor cortex. We provide empirical evidence that area 4 and area 3b are not homogeneous, but are subdivided into distinct cortical fields, each representing a major body part (the hand and the face). Myelin reductions at the hand-face borders are cortical layer-specific, and coincide with intrinsic functional connectivity borders as defined using large-scale resting state analyses. Our data extend the Brodmann model in human sensorimotor cortex and suggest that body parts are an important organizing principle, similar to the distinction between sensory and motor processing. © The Author 2017. Published by Oxford University Press.

  18. Modeling food matrix effects on chemical reactivity: Challenges and perspectives.

    PubMed

    Capuano, Edoardo; Oliviero, Teresa; van Boekel, Martinus A J S

    2017-06-29

    The same chemical reaction may differ in the position of its equilibrium (i.e., thermodynamics) and in its kinetics when studied in different foods. The diversity in the chemical composition of food and in its structural organization at macro-, meso-, and microscopic levels, that is, the food matrix, is responsible for this difference. In this viewpoint paper, the multiple and interconnected ways in which the food matrix can affect chemical reactivity are summarized. Moreover, mechanistic and empirical approaches to explain and predict the effect of the food matrix on chemical reactivity are described. Mechanistic models aim to quantify the effect of the food matrix based on a detailed understanding of the chemical and physical phenomena occurring in food. Their applicability is limited at the moment to very simple food systems. Empirical modeling based on machine learning combined with data-mining techniques may represent an alternative, useful option to predict the effect of the food matrix on chemical reactivity and to identify chemical and physical properties to be further tested. In such a way the mechanistic understanding of the effect of the food matrix on chemical reactions can be improved.

  19. Aircraft High-Lift Aerodynamic Analysis Using a Surface-Vorticity Solver

    NASA Technical Reports Server (NTRS)

    Olson, Erik D.; Albertson, Cindy W.

    2016-01-01

    This study extends an existing semi-empirical approach to high-lift analysis by examining its effectiveness for use with a three-dimensional aerodynamic analysis method. The aircraft high-lift geometry is modeled in Vehicle Sketch Pad (OpenVSP) using a newly-developed set of techniques for building a three-dimensional model of the high-lift geometry, and for controlling flap deflections using scripted parameter linking. Analysis of the low-speed aerodynamics is performed in FlightStream, a novel surface-vorticity solver that is expected to be substantially more robust and stable compared to pressure-based potential-flow solvers and less sensitive to surface perturbations. The calculated lift curve and drag polar are modified by an empirical lift-effectiveness factor that takes into account the effects of viscosity that are not captured in the potential-flow solution. Analysis results are validated against wind-tunnel data for The Energy-Efficient Transport AR12 low-speed wind-tunnel model, a 12-foot, full-span aircraft configuration with a supercritical wing, full-span slats, and part-span double-slotted flaps.

  20. A model of clutter for complex, multivariate geospatial displays.

    PubMed

    Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L

    2009-02-01

    A novel model for measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data to model predictions. The first experiment resulted in a .76 correlation between ratings and C3. The second experiment resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.

  1. Larval Connectivity and the International Management of Fisheries

    PubMed Central

    Kough, Andrew S.; Paris, Claire B.; Butler, Mark J.

    2013-01-01

    Predicting the oceanic dispersal of planktonic larvae that connect scattered marine animal populations is difficult, yet crucial for management of species whose movements transcend international boundaries. Using multi-scale biophysical modeling techniques coupled with empirical estimates of larval behavior and gamete production, we predict and empirically verify spatio-temporal patterns of larval supply and describe the Caribbean-wide pattern of larval connectivity for the Caribbean spiny lobster (Panulirus argus), an iconic coral reef species whose commercial value approaches $1 billion USD annually. Our results provide long sought information needed for international cooperation in the management of marine resources by identifying lobster larval connectivity and dispersal pathways throughout the Caribbean. Moreover, we outline how large-scale fishery management could explicitly recognize metapopulation structure by considering larval transport dynamics and pelagic larval sanctuaries. PMID:23762273

  2. Endogenously determined cycles: empirical evidence from livestock industries.

    PubMed

    McCullough, Michael P; Huffaker, Ray; Marsh, Thomas L

    2012-04-01

    This paper applies the techniques of phase space reconstruction and recurrence quantification analysis to investigate U.S. livestock cycles in relation to recent literature on the business cycle. Results are presented for pork and cattle cycles, providing empirical evidence that the cycles themselves have slowly diminished. By comparing the evolution of production processes for the two livestock cycles we argue that the major cause for this moderation is largely endogenous. The analysis suggests that previous theoretical models relying solely on exogenous shocks to create cyclical patterns do not fully capture changes in system dynamics. Specifically, the biological constraint in livestock dynamics has become less significant while technology and information are relatively more significant. Concurrently, vertical integration of the supply chain may have improved inventory management, all resulting in a small, less deterministic, cyclical effect.
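    For orientation, phase-space reconstruction is usually carried out by time-delay embedding, after which a recurrence matrix can be thresholded for recurrence quantification; a generic sketch with illustrative parameters (not the embedding dimension, lag, or data of the paper):

      import numpy as np

      def delay_embed(x, dim, tau):
          """Time-delay embedding of a scalar series x with dimension dim and lag tau."""
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

      x = np.cumsum(np.random.default_rng(0).standard_normal(600))   # placeholder for a detrended price series
      X = delay_embed(x, dim=3, tau=10)                              # reconstructed trajectory in R^3

      d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)     # pairwise state distances
      R = (d < 0.1 * d.max()).astype(int)                            # recurrence matrix
      print("recurrence rate:", R.mean())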

  3. Using remote sensing and GIS techniques to estimate discharge and recharge fluxes for the Death Valley regional groundwater flow system, USA

    USGS Publications Warehouse

    D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,

    1996-01-01

    The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.

  4. Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects

    NASA Astrophysics Data System (ADS)

    Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca

    2018-02-01

    Metamodeling for the nucleonic equation of state (EOS), inspired by a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling for the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
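    Schematically, such a metamodeling writes the energy per nucleon as a double expansion in density and isospin asymmetry, with the empirical parameters appearing as expansion coefficients; a simplified form (notation and truncation may differ from the paper) is:

      e(n,\delta) \simeq e_{\mathrm{sat}}(x) + e_{\mathrm{sym}}(x)\,\delta^{2},
      \qquad x = \frac{n - n_{\mathrm{sat}}}{3 n_{\mathrm{sat}}}, \qquad \delta = \frac{n_n - n_p}{n},

      e_{\mathrm{sat}}(x) = E_{\mathrm{sat}} + \tfrac{1}{2} K_{\mathrm{sat}} x^{2} + \tfrac{1}{6} Q_{\mathrm{sat}} x^{3} + \dots,
      \qquad e_{\mathrm{sym}}(x) = E_{\mathrm{sym}} + L_{\mathrm{sym}} x + \tfrac{1}{2} K_{\mathrm{sym}} x^{2} + \dots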

  5. A modeling technique for STOVL ejector and volume dynamics

    NASA Technical Reports Server (NTRS)

    Drummond, C. K.; Barankiewicz, W. S.

    1990-01-01

    New models for thrust augmenting ejector performance prediction and feeder duct dynamic analysis are presented and applied to a proposed Short Take Off and Vertical Landing (STOVL) aircraft configuration. Central to the analysis is the nontraditional treatment of the time-dependent volume integrals in the otherwise conventional control-volume approach. In the case of the thrust augmenting ejector, the analysis required a new relationship for transfer of kinetic energy from the primary flow to the secondary flow. Extraction of the required empirical corrections from current steady-state experimental data is discussed; a possible approach for modeling insight through Computational Fluid Dynamics (CFD) is presented.

  6. Diminishing detonator effectiveness through electromagnetic effects

    DOEpatents

    Schill, Jr, Robert A.

    2016-09-20

    An inductively coupled transmission line with distributed electromotive force source and an alternative coupling model based on empirical data and theory were developed to initiate bridge wire melt for a detonator with an open and a short circuit detonator load. In the latter technique, the model was developed to exploit incomplete knowledge of the open circuited detonator using tendencies common to all of the open circuit loads examined. Military, commercial, and improvised detonators were examined and modeled. Nichrome, copper, platinum, and tungsten are the detonator-specific bridge wire materials studied. The improvised detonators were typically made with tungsten wire and copper wire (approximately 40 AWG strands).

  7. Estimating procedure times for surgeries by determining location parameters for the lognormal model.

    PubMed

    Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H

    2004-05-01

    We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
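    A hedged sketch of the general idea - anchoring the location (shift) of a three-parameter lognormal on a low order statistic and fitting the remaining parameters to the shifted data - is given below (scipy assumed; the sample minimum is used for simplicity, whereas the study identifies a better order statistic via regression and data-mining models):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      true_shift = 15.0                                     # e.g. a minimum plausible procedure time (min)
      times = true_shift + stats.lognorm.rvs(s=0.5, scale=30.0, size=200, random_state=rng)

      shift_hat = times.min() - 1e-3                        # anchor the location on the first order statistic
      sigma_hat, _, scale_hat = stats.lognorm.fit(times - shift_hat, floc=0)
      print(f"estimated shift = {shift_hat:.1f}, sigma = {sigma_hat:.2f}, median = {shift_hat + scale_hat:.1f}")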

  8. Readout models for BaFBr0.85I0.15:Eu image plates

    NASA Astrophysics Data System (ADS)

    Stoeckl, M.; Solodov, A. A.

    2018-06-01

    The linearity of the photostimulated luminescence process makes repeated image-plate scanning a viable technique for extracting additional dynamic range. In order to obtain a response estimate, two semi-empirical models for the readout fading of an image plate are introduced; they relate the depth distribution of activated photostimulated luminescence centers within an image plate to the recorded signal. Model parameters are estimated from image-plate scan series with BAS-MS image plates and the Typhoon FLA 7000 scanner for the hard x-ray image-plate diagnostic over a collection of experiments providing x-ray energy spectra whose approximate shape is a double exponential.

  9. Longitudinal Control for Mengshi Autonomous Vehicle via Cloud Model

    NASA Astrophysics Data System (ADS)

    Gao, H. B.; Zhang, X. Y.; Li, D. Y.; Liu, Y. C.

    2018-03-01

    Dynamic robustness and stability control are requirements for the self-driving of autonomous vehicles. The longitudinal control of autonomous vehicles is a key technique which has drawn the attention of industry and academia. In this paper, we present a longitudinal control algorithm based on a cloud model for the Mengshi autonomous vehicle to ensure its dynamic stability and tracking performance. An experiment is conducted to test the implementation of the longitudinal control algorithm. Empirical results show that when the longitudinal control algorithm based on the Gauss cloud model is applied to calculate the acceleration and the vehicle drives at different speeds, a stable longitudinal control effect is achieved.

  10. Optimal non-linear health insurance.

    PubMed

    Blomqvist, A

    1997-06-01

    Most theoretical and empirical work on efficient health insurance has been based on models with linear insurance schedules (a constant co-insurance parameter). In this paper, dynamic optimization techniques are used to analyse the properties of optimal non-linear insurance schedules in a model similar to one originally considered by Spence and Zeckhauser (American Economic Review, 1971, 61, 380-387) and reminiscent of those that have been used in the literature on optimal income taxation. The results of a preliminary numerical example suggest that the welfare losses from the implicit subsidy to employer-financed health insurance under US tax law may be a good deal smaller than previously estimated using linear models.

  11. Atmospheric gradients from GNSS, VLBI, and DORIS analyses and from Numerical Weather Models during CONT14

    NASA Astrophysics Data System (ADS)

    Heinkelmann, Robert; Dick, Galina; Nilsson, Tobias; Soja, Benedikt; Wickert, Jens; Zus, Florian; Schuh, Harald

    2015-04-01

    Observations from space-geodetic techniques are nowadays increasingly used to derive atmospheric information for various commercial and scientific applications. A prominent example is the operational use of GNSS data to improve global and regional weather forecasts, which was started in 2006. Atmosphere gradients describe the azimuthal asymmetry of zenith delays. Estimates of geodetic and other parameters significantly improve when atmosphere gradients are determined in addition. Here we assess the capability of several space geodetic techniques (GNSS, VLBI, DORIS) to determine atmosphere gradients of refractivity. For this purpose we implement and compare various strategies for gradient estimation, such as different values for the temporal resolution and the corresponding parameter constraints. In least-squares estimation, the gradients are usually modelled deterministically as constants or piece-wise linear functions. In our study we compare this approach with a stochastic approach modelling atmosphere gradients as random walk processes and applying a Kalman Filter for parameter estimation. The gradients derived from space-geodetic techniques are verified by comparison with those derived from Numerical Weather Models (NWM). These model data were generated using raytracing calculations based on European Centre for Medium-Range Weather Forecasts (ECMWF) and National Centers for Environmental Prediction (NCEP) analyses with different spatial resolutions. The investigation of the differences between the ECMWF and NCEP gradients additionally allows for an empirical assessment of the quality of the model gradients and of how suitable the NWM data are for verification. CONT14 (2014-05-06 until 2014-05-20) is the most recent two-week continuous VLBI campaign carried out by the IVS (International VLBI Service for Geodesy and Astrometry). It represents state-of-the-art VLBI performance in terms of the number of stations and observations and thus provides an excellent test period for comparisons with other space-geodetic techniques. The CONT14 campaign included the co-located HOBART12 and HOBART26 VLBI antennas (Hobart, Tasmania, Australia). The investigation of the gradient estimate differences from these co-located antennas allows for a valuable empirical quality assessment. Another quality criterion for gradient estimates is the difference of parameters at the borders of adjacent 24-h sessions. Both are investigated in our study.
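    A minimal sketch of the stochastic strategy mentioned above - one gradient component modelled as a random walk and estimated with a scalar Kalman filter - is shown below with synthetic numbers (this is not the actual GNSS/VLBI estimator):

      import numpy as np

      rng = np.random.default_rng(3)
      n_epochs = 288                                  # e.g. 5-minute epochs over one day
      q = (0.3e-3) ** 2 / n_epochs                    # random-walk process noise per epoch
      r = (1.0e-3) ** 2                               # observation noise variance

      truth = np.cumsum(rng.normal(0.0, np.sqrt(q), n_epochs))     # simulated true gradient (m)
      obs = truth + rng.normal(0.0, np.sqrt(r), n_epochs)          # pseudo-observations

      x, P, est = 0.0, 1.0, []
      for z in obs:
          P += q                                      # prediction step: random-walk model
          K = P / (P + r)                             # Kalman gain
          x += K * (z - x)                            # measurement update
          P *= 1.0 - K
          est.append(x)

      print("rms error:", np.sqrt(np.mean((np.array(est) - truth) ** 2)))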

  12. Frontiers of Theoretical Research on Shape Memory Alloys: A General Overview

    NASA Astrophysics Data System (ADS)

    Chowdhury, Piyas

    2018-03-01

    In this concise review, general aspects of modeling shape memory alloys (SMAs) are recounted. Different approaches are discussed under four general categories, namely, (a) macro-phenomenological, (b) micromechanical, (c) molecular dynamics, and (d) first principles models. Macro-phenomenological theories, stemming from empirical formulations depicting continuum elastic, plastic, and phase-transformation behavior, are primarily of engineering interest, whereby the performance of SMA-made components is investigated. Micromechanical endeavors are generally geared towards understanding microstructural phenomena within continuum mechanics such as the accommodation of straining due to phase change as well as the role of precipitates. By contrast, molecular dynamics, being a more recently emerging computational technique, concerns attributes of discrete lattice structures, and thus captures SMA deformation mechanisms by means of empirically reconstructed interatomic bonding forces. Finally, ab initio theories utilize a quantum mechanical framework to probe the atomistic foundation of deformation, and can pave the way for studying the role of solid-state effects. With specific examples, this paper provides concise descriptions of each category along with their relative merits and emphases.

  13. Imaging the topside ionosphere and plasmasphere with ionospheric tomography using COSMIC GPS TEC

    NASA Astrophysics Data System (ADS)

    Pinto Jayawardena, Talini S.; Chartier, Alex T.; Spencer, Paul; Mitchell, Cathryn N.

    2016-01-01

    GPS-based ionospheric tomography is a well-known technique for imaging the total electron content (TEC) between GPS satellites and receivers. However, as an integral measurement of electron concentration, TEC typically encompasses both the ionosphere and plasmasphere, masking signatures from the topside ionosphere-plasmasphere due to the dominant ionosphere. Imaging these regions requires a technique that isolates TEC in the topside ionosphere-plasmasphere. The Multi-Instrument Data Analysis System (MIDAS) employs tomography to image the electron distribution in the ionosphere. Its application to the regions above has yet to be demonstrated due to the different dynamics present above the ionosphere. This paper discusses the extension of MIDAS to image these altitudes using GPS phase-based TEC measurements and follows the work by Spencer and Mitchell (2011). Plasma is constrained to dipole field lines described by Euler potentials, resulting in a distribution symmetrical about the geomagnetic equator. A simulation of an empirical plasmaspheric model by Gallagher et al. (1988) is used to verify the technique by comparing reconstructions of the simulation with the empirical model. The Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) provides the GPS receiver locations. The verification is followed by a validation of the modified MIDAS algorithm, where the regions' TEC is reconstructed from COSMIC GPS phase measurements and qualitatively compared with previous studies using Jason-1 and COSMIC data. Results show that MIDAS can successfully image features/trends of the topside ionosphere-plasmasphere observed in other studies, with deviations in absolute TEC attributed to differences in data set properties and the resolution of the images.

  14. Modelling and multi objective optimization of WEDM of commercially Monel super alloy using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu

    2016-09-01

    In this research work, a multi-response optimization technique has been developed using traditional desirability analysis and non-traditional particle swarm optimization techniques (for different customers' priorities) in wire electrical discharge machining (WEDM). Monel 400 has been selected as the work material for experimentation. The effects of key process parameters such as pulse-on time (TON), pulse-off time (TOFF), peak current (IP) and wire feed (WF) on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the responses such as MRR and SR were modelled empirically through regression analysis. The developed models can be used by machinists to predict the MRR and SR over a wide range of input parameters. The optimization of multiple responses has been done to satisfy the priorities of multiple users by using the Taguchi-desirability function method and the particle swarm optimization technique. The analysis of variance (ANOVA) is also applied to investigate the effect of influential parameters. Finally, confirmation experiments were conducted for the optimal set of machining parameters, and the improvement was verified.
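    To make the desirability step concrete, here is a generic Derringer-Suich-type sketch with made-up response values and weights (not the paper's experimental data or priorities):

      import numpy as np

      mrr = np.array([8.2, 10.5, 12.1, 9.4])     # predicted material removal rate: larger is better
      sr = np.array([2.8, 3.5, 4.2, 2.5])        # predicted surface roughness: smaller is better

      def d_larger(y, lo, hi):                   # individual desirability, larger-the-better
          return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

      def d_smaller(y, lo, hi):                  # individual desirability, smaller-the-better
          return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

      d1 = d_larger(mrr, mrr.min(), mrr.max())
      d2 = d_smaller(sr, sr.min(), sr.max())
      w1, w2 = 0.6, 0.4                          # hypothetical customer priorities
      D = d1 ** w1 * d2 ** w2                    # weighted geometric-mean composite desirability
      print("best candidate setting:", int(np.argmax(D)))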

  15. Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2014-01-01

    This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The proposed IPMs prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfies mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or, when its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation would be within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
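    A hedged toy version of an interval predictor is sketched below: two affine bounds of minimal average spread that contain all observations, posed as a linear program (scipy assumed). The authors' formulation uses a hyper-rectangular parameter set and convex programs with reliability bounds; this sketch only conveys the flavor:

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(4)
      x = np.sort(rng.uniform(0.0, 10.0, 80))
      y = 2.0 + 0.8 * x + rng.normal(0.0, 0.5 + 0.05 * x)      # heteroscedastic placeholder data

      # decision variables theta = [a_lo, b_lo, a_up, b_up] for the bounds a + b * x
      c = np.array([-1.0, -x.mean(), 1.0, x.mean()])           # minimize mean(upper - lower)
      A_lo = np.column_stack([np.ones_like(x), x, np.zeros_like(x), np.zeros_like(x)])   # lower(x_i) <= y_i
      A_up = np.column_stack([np.zeros_like(x), np.zeros_like(x), -np.ones_like(x), -x]) # y_i <= upper(x_i)
      res = linprog(c, A_ub=np.vstack([A_lo, A_up]), b_ub=np.concatenate([y, -y]),
                    bounds=[(None, None)] * 4)

      a_lo, b_lo, a_up, b_up = res.x
      print("predicted interval at x = 5:", (a_lo + 5 * b_lo, a_up + 5 * b_up))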

  16. Thermal Conductivity of Metallic Uranium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hin, Celine

    This project developed modeling and simulation approaches to predict the thermal conductivity of metallic fuels and their alloys. We focus on two methods. The first method has been developed by the team at the University of Wisconsin Madison. They developed a practical and general modeling approach for thermal conductivity of metals and metal alloys that integrates ab-initio and semi-empirical physics-based models to maximize the strengths of both techniques. The second method has been developed by the team at Virginia Tech. This approach consists of determining the thermal conductivity using only ab-initio methods without any fitting parameters. Both methods were complementary. The models incorporated both phonon and electron contributions. Good agreement with experimental data over a wide temperature range was found. The models also provided insight into the different physical factors that govern the thermal conductivity at different temperatures. The models were general enough to incorporate more complex effects like additional alloying species, defects, transmutation products and noble gas bubbles to predict the behavior of complex metallic alloys like U-alloy fuel systems under burnup. Thermal conductivity is an important thermal physical property affecting the performance and efficiency of metallic fuels [1]. Some experimental measurements of thermal conductivity and its correlation with composition and temperature from empirical fitting are available for U, Zr and their alloys with Pu and other minor actinides. However, as reviewed by Kim, Cho and Sohn [2], due to the difficulty of doing experiments on actinide materials, thermal conductivities of metallic fuels have only been measured at limited alloy compositions and temperatures, some of them even being negative and unphysical. Furthermore, the correlations developed so far are empirical in nature and may not be accurate when used for prediction at conditions far from those used in the original fitting. Moreover, as fuels burn up in the reactor and fission products build up, thermal conductivity is also significantly changed [3]. Unfortunately, fundamental understanding of the effect of fission products is also currently lacking. In this project, we probe the thermal conductivity of metallic fuels with ab initio calculations, a theoretical tool with the potential to yield better accuracy and predictive power than empirical fitting. This work will both complement experimental data by determining thermal conductivity in wider composition and temperature ranges than are available experimentally, and also develop mechanistic understanding to guide better design of metallic fuels in the future. So far, we focused on the α-U perfect crystal, the ground-state phase of U metal. Both methods were complementary and very helpful for understanding the physics behind the thermal conductivity in metallic uranium and other materials with similar characteristics.
In Section I, the combined model developed at UWM is explained. In Section II, the ab-initio method developed at VT is described along with the uranium pseudo-potential and its validation. Section III is devoted to the work done by Jianguo Yu at INL. Finally, we will present the performance of the project in terms of milestones, publications, and presentations.

  17. Propeller sheet cavitation noise source modeling and inversion

    NASA Astrophysics Data System (ADS)

    Lee, Keunhwa; Lee, Jaehyuk; Kim, Dongho; Kim, Kyungseop; Seong, Woojae

    2014-02-01

    Propeller sheet cavitation is the main contributor to high levels of noise and vibration in the after body of a ship. Full measurement of the cavitation-induced hull pressure over the entire surface of the affected area is desired but not practical. Therefore, using a few measurements on the outer hull above the propeller in a cavitation tunnel, empirical or semi-empirical techniques based on physical models have been used to predict the hull-induced pressure (or hull-induced force). In this paper, with the analytic source model for sheet cavitation, a multi-parameter inversion scheme to find the positions of noise sources and their strengths is suggested. The inversion is posed as a nonlinear optimization problem, which is solved with an adaptive simplex simulated annealing algorithm. Then, the resulting hull pressure can be modeled with the boundary element method from the inverted cavitation noise sources. The suggested approach is applied to the hull pressure data measured in a cavitation tunnel of Samsung Heavy Industries. Two monopole sources are adequate to model the propeller sheet cavitation noise. The inverted source information is consistent with the cavitation dynamics of the propeller and the modeled hull pressure shows good agreement with cavitation tunnel experimental data.

  18. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.

  19. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922

  20. Modelling land use change with generalized linear models--a multi-model analysis of change between 1860 and 2000 in Gallatin Valley, Montana.

    PubMed

    Aspinall, Richard

    2004-08-01

    This paper develops an approach to modelling land use change that links model selection and multi-model inference with empirical models and GIS. Land use change is frequently studied, and understanding gained, through a process of modelling that is an empirical analysis of documented changes in land cover or land use patterns. The approach here is based on analysis and comparison of multiple models of land use patterns using model selection and multi-model inference. The approach is illustrated with a case study of rural housing as it has developed for part of Gallatin County, Montana, USA. A GIS contains the location of rural housing on a yearly basis from 1860 to 2000. The database also documents a variety of environmental and socio-economic conditions. A general model of settlement development describes the evolution of drivers of land use change and their impacts in the region. This model is used to develop a series of different models reflecting drivers of change at different periods in the history of the study area. These period-specific models represent a series of multiple working hypotheses describing (a) the effects of spatial variables as a representation of social, economic and environmental drivers of land use change, and (b) temporal changes in the effects of the spatial variables as the drivers of change evolve over time. Logistic regression is used to calibrate and interpret these models and the models are then compared and evaluated with model selection techniques. Results show that different models are 'best' for the different periods. The different models for different periods demonstrate that models are not invariant over time, which presents challenges for validation and testing of empirical models. The research demonstrates (i) model selection as a mechanism for choosing among many plausible models that describe land cover or land use patterns, (ii) inference from a set of models rather than from a single model, (iii) that models can be developed from hypothesised relationships grounded in consideration of underlying and proximate causes of change, and (iv) that models are not invariant over time.
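    The calibration-and-selection step can be sketched generically as fitting competing logistic regressions and comparing them by an information criterion (statsmodels assumed; covariates and data are invented for illustration, not taken from the Gallatin Valley database):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 500
      dist_road = rng.uniform(0.0, 5.0, n)            # hypothetical spatial covariates
      elevation = rng.uniform(1200.0, 1800.0, n)
      p = 1.0 / (1.0 + np.exp(-(1.5 - 0.8 * dist_road)))
      developed = rng.binomial(1, p)                  # 1 = cell converted to rural housing

      candidates = {
          "roads only": sm.add_constant(np.column_stack([dist_road])),
          "roads + elevation": sm.add_constant(np.column_stack([dist_road, elevation])),
      }
      for name, X in candidates.items():
          fit = sm.Logit(developed, X).fit(disp=0)    # logistic regression for this period's model
          print(f"{name}: AIC = {fit.aic:.1f}")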

  1. Estimating distribution and connectivity of recolonizing American marten in the northeastern United States using expert elicitation techniques

    USGS Publications Warehouse

    Aylward, C.M.; Murdoch, J.D.; Donovan, Therese M.; Kilpatrick, C.W.; Bernier, C.; Katz, J.

    2018-01-01

    The American marten Martes americana is a species of conservation concern in the northeastern United States due to widespread declines from over‐harvesting and habitat loss. Little information exists on current marten distribution and how landscape characteristics shape patterns of occupancy across the region, which could help develop effective recovery strategies. The rarity of marten and lack of historical distribution records are also problematic for region‐wide conservation planning. Expert opinion can provide a source of information for estimating species–landscape relationships and is especially useful when empirical data are sparse. We created a survey to elicit expert opinion and build a model that describes marten occupancy in the northeastern United States as a function of landscape conditions. We elicited opinions from 18 marten experts that included wildlife managers, trappers and researchers. Each expert estimated occupancy probability at 30 sites in their geographic region of expertise. We, then, fit the response data with a set of 58 models that incorporated the effects of covariates related to forest characteristics, climate, anthropogenic impacts and competition at two spatial scales (1.5 and 5 km radii), and used model selection techniques to determine the best model in the set. Three top models had strong empirical support, which we model averaged based on AIC weights. The final model included effects of five covariates at the 5‐km scale: percent canopy cover (positive), percent spruce‐fir land cover (positive), winter temperature (negative), elevation (positive) and road density (negative). A receiver operating characteristic curve indicated that the model performed well based on recent occurrence records. We mapped distribution across the region and used circuit theory to estimate movement corridors between isolated core populations. The results demonstrate the effectiveness of expert‐opinion data at modeling occupancy for rare species and provide tools for planning marten recovery in the northeastern United States.
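    The AIC-weight averaging of the top models follows the standard Akaike-weight recipe; a generic numerical sketch with hypothetical AIC values and coefficients (not the study's estimates):

      import numpy as np

      aic = np.array([412.3, 413.1, 414.0])          # hypothetical AICs of the three supported models
      beta = np.array([[0.9, 1.2, -0.6],             # hypothetical coefficients (rows = models,
                       [0.8, 1.0, -0.5],             #  columns = shared covariates)
                       [1.1, 1.3, -0.7]])

      delta = aic - aic.min()
      w = np.exp(-0.5 * delta)
      w /= w.sum()                                   # Akaike weights
      beta_avg = w @ beta                            # model-averaged coefficients
      print("weights:", np.round(w, 3), "averaged coefficients:", np.round(beta_avg, 3))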

  2. An Empirical Formula From Ion Exchange Chromatography and Colorimetry.

    ERIC Educational Resources Information Center

    Johnson, Steven D.

    1996-01-01

    Presents a detailed procedure for finding an empirical formula from ion exchange chromatography and colorimetry. Introduces students to more varied techniques including volumetric manipulation, titration, ion-exchange, preparation of a calibration curve, and the use of colorimetry. (JRH)

  3. The Role of Flow Diagnostic Techniques in Fan and Open Rotor Noise Modeling

    NASA Technical Reports Server (NTRS)

    Envia, Edmane

    2016-01-01

    A principal source of turbomachinery noise is the interaction of the rotating and stationary blade rows with the perturbations in the airstream through the engine. As such, a lot of research has been devoted to the study of the turbomachinery noise generation mechanisms. This is particularly true of fan and open rotors, both of which are the major contributors to the overall noise output of modern aircraft engines. Much of the research in fan and open rotor noise has been focused on developing theoretical models for predicting their noise characteristics. These models, which run the gamut from semi-empirical to fully computational ones, are, in one form or another, informed by the description of the unsteady flow-field in which the propulsors (i.e., the fan and open rotors) operate. Not surprisingly, the fidelity of the theoretical models is dependent, to a large extent, on capturing the nuances of the unsteady flowfield that have a direct role in the noise generation process. As such, flow diagnostic techniques have proven to be indispensable in identifying the shortcomings of theoretical models and in helping to improve them. This presentation will provide a few examples of the role of flow diagnostic techniques in assessing the fidelity and robustness of the fan and open rotor noise prediction models.

  4. The 'robustness' of vocabulary intervention in the public schools: targets and techniques employed in speech-language therapy.

    PubMed

    Justice, Laura M; Schmitt, Mary Beth; Murphy, Kimberly A; Pratt, Amy; Biancone, Tricia

    2014-01-01

    This study examined vocabulary intervention - in terms of targets and techniques - for children with language impairment receiving speech-language therapy in public schools (i.e., non-fee-paying schools) in the United States. Vocabulary treatments and targets were examined with respect to their alignment with the empirically validated practice of rich vocabulary intervention. Participants were forty-eight 5-7-year-old children participating in kindergarten or the first-grade year of school, all of whom had vocabulary-specific goals on their individualized education programmes. Two therapy sessions per child were coded to determine what vocabulary words were being directly targeted and what techniques were used for each. Study findings showed that the majority of words directly targeted during therapy were lower-level basic vocabulary words (87%) and very few (1%) were academically relevant. On average, three techniques were used per word to promote deep understanding. Interpreting findings against empirical descriptions of rich vocabulary intervention indicates that children were exposed to some but not all aspects of this empirically supported practice. © 2013 Royal College of Speech and Language Therapists.

  5. Exponential model for option prices: Application to the Brazilian market

    NASA Astrophysics Data System (ADS)

    Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.

    2016-03-01

    In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.

  6. Variable diffusion in stock market fluctuations

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2015-02-01

    We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. The five most actively traded stocks each contains two time intervals during the day where the variance of increments can be fit by power law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with linear variable diffusion coefficient as a lowest order approximation to the real dynamics of financial markets, and to test the effects of time averaging techniques typically used for financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble average approaches will yield new insight into the study of financial markets' dynamics. Our proposed model also provides new insight into the modeling of financial markets dynamics in microscopic time scales.
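    As a sketch of the model class described above (an assumed functional form, not the fitted market model), an Euler-Maruyama simulation of a process whose diffusion coefficient grows linearly in the scaling variable u = x/sqrt(t):

      import numpy as np

      rng = np.random.default_rng(6)
      n_paths, n_steps, dt = 20000, 500, 1e-3

      def D(x, t):
          return 1.0 + np.abs(x) / np.sqrt(t)        # assumed linear variable diffusion coefficient

      x, t = np.zeros(n_paths), dt
      for _ in range(n_steps):
          x += np.sqrt(D(x, t) * dt) * rng.standard_normal(n_paths)   # Euler-Maruyama step
          t += dt

      u = x / np.sqrt(t)                             # scaled increments across the ensemble
      kurt = ((u - u.mean()) ** 4).mean() / u.var() ** 2
      print("excess kurtosis of scaled increments:", kurt - 3.0)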

  7. Model-based guiding pattern synthesis for robust assembly of contact layers using directed self-assembly

    NASA Astrophysics Data System (ADS)

    Mitra, Joydeep; Torres, Andres; Ma, Yuansheng; Pan, David Z.

    2018-01-01

    Directed self-assembly (DSA) has emerged as one of the most compelling next-generation patterning techniques for sub-7 nm via or contact layers. A key issue in enabling DSA as a mainstream patterning technique is the generation of grapho-epitaxy-based guiding pattern (GP) shapes to assemble the contact patterns on target with high fidelity and resolution. Current GP generation is mostly empirical, and limited to a very small number of via configurations. We propose the first model-based GP synthesis algorithm and methodology for on-target and robust DSA, on general via pattern configurations. The final post-optical-proximity-correction printed GPs derived from our originally synthesized GPs are resilient to process variations and continue to maintain the same DSA fidelity in terms of placement error and target shape.

  8. Effect of the curvature parameter on least-squares prediction within poor data coverage: case study for Africa

    NASA Astrophysics Data System (ADS)

    Abd-Elmotaal, Hussein; Kühtreiber, Norbert

    2016-04-01

    In the framework of the IAG African Geoid Project, the gravity database contains many large data gaps. These gaps are initially filled using an unequal-weight least-squares prediction technique. This technique uses a generalized Hirvonen covariance function model to replace the empirically determined covariance function. The generalized Hirvonen covariance function model has a sensitive parameter which is related to the curvature parameter of the covariance function at the origin. This paper studies the effect of the curvature parameter on the least-squares prediction results, especially in the large data gaps appearing in the African gravity database. An optimum estimation of the curvature parameter has also been carried out. A wide comparison among the results obtained in this research, along with their accuracies, is given and thoroughly discussed.
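
    A minimal sketch of unequal-weight least-squares prediction (collocation) with a generalized Hirvonen covariance model, C(s) = C0 / (1 + (s/d)^2)^p, where the exponent p stands in for the curvature-related parameter discussed above. The coordinates, anomaly values, noise levels, and covariance parameters are all hypothetical; only the structure of the computation is meant to be illustrative.

    ```python
    import numpy as np

    def hirvonen_cov(dist, C0, d, p):
        """Generalized Hirvonen covariance C(s) = C0 / (1 + (s/d)**2)**p; the exponent p
        plays the role of the curvature-related parameter."""
        return C0 / (1.0 + (dist / d) ** 2) ** p

    def pairwise_dist(a, b):
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

    # Hypothetical scattered gravity anomalies (mGal) with unequal noise levels.
    rng = np.random.default_rng(0)
    obs_xy = rng.uniform(0.0, 100.0, size=(30, 2))      # km
    obs_val = rng.normal(0.0, 20.0, size=30)
    obs_sig = rng.uniform(1.0, 5.0, size=30)            # per-point standard deviations

    C0, d, p = 400.0, 50.0, 1.0                          # hypothetical covariance parameters
    Cpp = hirvonen_cov(pairwise_dist(obs_xy, obs_xy), C0, d, p) + np.diag(obs_sig**2)

    # Predict on a coarse grid spanning the gap: s_hat = C_gp (C_pp + D)^-1 l
    grid = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)], float)
    Cgp = hirvonen_cov(pairwise_dist(grid, obs_xy), C0, d, p)
    predicted = Cgp @ np.linalg.solve(Cpp, obs_val)
    print("predicted anomalies at", len(grid), "grid nodes; range:",
          predicted.min().round(2), "to", predicted.max().round(2))
    ```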

  9. Optical properties and progressive sterical hindering in pyridinium phenoxides

    NASA Astrophysics Data System (ADS)

    Boeglin, A.; Barsella, A.; Fort, A.; Mançois, F.; Rodriguez, V.; Diemer, V.; Chaumeil, H.; Defoin, A.; Jacques, P.; Carré, C.

    2007-07-01

    Pyridinium phenoxides are model compounds associating large dipole moments with high optical nonlinearities. A progression of sterically hindered forms of such zwitterions has been synthesized in order to investigate their structure/property relationships. Their UV-vis absorption in acetonitrile has been analyzed as a function of concentration in order to assess the presence of aggregates and the level of protonation. The quadratic optical properties have been measured by the EFISH and hyper-Rayleigh techniques and are interpreted via semi-empirical calculations. The solvation model used leads to results that agree with our experimental findings indicating an increased response for intermediate twist angles.

  10. Interference in astronomical speckle patterns

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.

    1976-01-01

    Astronomical speckle patterns are examined in an atmospheric-optics context in order to determine what kind of image quality is to be expected from several different imaging techniques. The model used to describe the instantaneous complex field distribution across the pupil of a large telescope regards the pupil as a deep phase grating with a periodicity given by the size of the cell of uniform phase or the refractive index structure function. This model is used along with an empirical formula derived purely from the physical appearance of the speckle patterns to discuss the orders of interference in astronomical speckle patterns.

  11. An Application of Epidemiological Modeling to Information Diffusion

    NASA Astrophysics Data System (ADS)

    McCormack, Robert; Salter, William

    Messages often spread within a population through unofficial - particularly web-based - media. Such ideas have been termed "memes." To impede the flow of terrorist messages and to promote counter messages within a population, intelligence analysts must understand how messages spread. We used statistical language processing technologies to operationalize "memes" as latent topics in electronic text and applied epidemiological techniques to describe and analyze patterns of message propagation. We developed our methods and applied them to English-language newspapers and blogs in the Arab world. We found that a relatively simple epidemiological model can reproduce some dynamics of observed empirical relationships.
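
    The authors' exact model is not specified in the abstract; a standard starting point for this kind of analysis is the classic SIR compartment model, with "infection" read as adopting a meme and "recovery" as losing interest. A minimal sketch with hypothetical rate constants:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def sir(t, y, beta, gamma):
        """Classic SIR rates: s -> i at rate beta*s*i, i -> r at rate gamma*i."""
        s, i, r = y
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    beta, gamma = 0.4, 0.1      # hypothetical adoption and loss-of-interest rates (per day)
    sol = solve_ivp(sir, (0.0, 120.0), [0.999, 0.001, 0.0],
                    args=(beta, gamma), dense_output=True)
    t = np.linspace(0.0, 120.0, 121)
    s, i, r = sol.sol(t)
    print("peak fraction of the population actively spreading the meme:", i.max().round(3))
    print("fraction ever exposed by day 120:", (i[-1] + r[-1]).round(3))
    ```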

  12. Crystal growth of calcite from calcium bicarbonate solutions at constant PCO2 and 25°C: a test of a calcite dissolution model

    USGS Publications Warehouse

    Reddy, Michael M.; Plummer, Niel; Busenberg, E.

    1981-01-01

    A highly reproducible seeded growth technique was used to study calcite crystallization from calcium bicarbonate solutions at 25°C and fixed carbon dioxide partial pressures between 0.03 and 0.3 atm. The results are not consistent with empirical crystallization models that have successfully described calcite growth at low PCO2 (<10⁻³ atm). Good agreement was found between observed crystallization rates and those calculated from the calcite dissolution rate law and mechanism proposed by Plummer et al. (1978).

  13. Introduction to the special section on mixture modeling in personality assessment.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.

  14. Structural Equation Model Trees

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2015-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree structures that separate a data set recursively into subsets with significantly different parameter estimates in a SEM. SEM Trees provide means for finding covariates and covariate interactions that predict differences in structural parameters in observed as well as in latent space and facilitate theory-guided exploration of empirical data. We describe the methodology, discuss theoretical and practical implications, and demonstrate applications to a factor model and a linear growth curve model. PMID:22984789

  15. Implementation of model predictive control for resistive wall mode stabilization on EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2015-10-01

    A model predictive control (MPC) method for stabilization of the resistive wall mode (RWM) in the EXTRAP T2R reversed-field pinch is presented. The system identification technique is used to obtain a linearized empirical model of EXTRAP T2R. MPC employs the model for prediction and computes optimal control inputs that satisfy performance criterion. The use of a linearized form of the model allows for compact formulation of MPC, implemented on a millisecond timescale, that can be used for real-time control. The design allows the user to arbitrarily suppress any selected Fourier mode. The experimental results from EXTRAP T2R show that the designed and implemented MPC successfully stabilizes the RWM.
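
    A minimal sketch of the kind of computation involved: an unconstrained, receding-horizon MPC for a generic identified linear model, solved as a least-squares problem at each step. The two-state model, weights, and horizon are hypothetical placeholders, not the EXTRAP T2R model, and the real controller additionally handles Fourier-mode selection and millisecond timing.

    ```python
    import numpy as np

    # Hypothetical identified 2-state, 1-input discrete-time model (not the EXTRAP T2R model).
    A = np.array([[1.02, 0.10],
                  [0.00, 0.95]])
    B = np.array([[0.00],
                  [0.12]])
    Q = np.diag([10.0, 1.0])        # state weight
    R = np.array([[0.1]])           # input weight
    N = 20                          # prediction horizon
    n, m = A.shape[0], B.shape[1]

    # Stack the predictions X = F x0 + G U over the horizon, U = [u_0, ..., u_{N-1}].
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i * n:(i + 1) * n, j * m:(j + 1) * m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    H = G.T @ Qbar @ G + Rbar       # constant, so it could be pre-factorized offline

    def mpc_step(x0):
        """Solve the unconstrained finite-horizon problem and return only the first input."""
        U = np.linalg.solve(H, -G.T @ Qbar @ F @ x0)
        return U[:m]

    x = np.array([1.0, 0.0])        # initial (hypothetical) mode amplitude
    for _ in range(50):             # receding-horizon loop
        x = A @ x + B @ mpc_step(x)
    print("state norm after 50 steps:", np.linalg.norm(x))
    ```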

  16. Bias-dependent hybrid PKI empirical-neural model of microwave FETs

    NASA Astrophysics Data System (ADS)

    Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera

    2011-10-01

    Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.
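
    A toy illustration of the prior-knowledge-input idea: the single-bias-point empirical model's output is fed to the network as an extra input alongside the bias voltages and frequency, and the network learns the bias-dependent correction. The empirical model, the synthetic "measured" data, and the network size below are all invented for the sketch; they do not describe the authors' pHEMT data.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def empirical_s21(freq_ghz):
        """Hypothetical stand-in for the single-bias-point empirical model: an |S21|
        magnitude that depends on frequency only and ignores the bias."""
        return 10.0 / (1.0 + freq_ghz / 10.0)

    rng = np.random.default_rng(0)
    vgs = rng.uniform(-1.0, 0.0, 500)
    vds = rng.uniform(0.5, 3.0, 500)
    freq = rng.uniform(1.0, 20.0, 500)
    # Synthetic "measured" |S21| with a bias dependence the empirical model misses.
    measured = empirical_s21(freq) * (1.0 + 0.3 * vds) * (1.0 + 0.5 * (vgs + 0.5))

    # PKI idea: the empirical-model output enters as an extra, prior-knowledge input.
    X = np.column_stack([vgs, vds, freq, empirical_s21(freq)])
    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
    net.fit(X, measured)
    test = np.array([[-0.3, 2.0, 10.0, empirical_s21(10.0)]])
    print("empirical model alone:", empirical_s21(10.0),
          " PKI network:", net.predict(test)[0])
    ```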

  17. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1993-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  18. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1992-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.

  19. The Use of EPI-Splines to Model Empirical Semivariograms for Optimal Spatial Estimation

    DTIC Science & Technology

    2016-09-01

    proliferation of unmanned systems in military and civilian sectors has occurred at lightning speed. In the case of Autonomous Underwater Vehicles or...SLAM is a method of position estimation that relies on map data [3]. In this process, the creation of the map occurs as the vehicle is navigating the...that ensures minimal errors. This technique is accomplished in two steps. The first step is creation of the semivariogram. The semivariogram is a

  20. Thermographic imaging of the space shuttle during re-entry using a near-infrared sensor

    NASA Astrophysics Data System (ADS)

    Zalameda, Joseph N.; Horvath, Thomas J.; Kerns, Robbie V.; Burke, Eric R.; Taylor, Jeff C.; Spisz, Tom; Gibson, David M.; Shea, Edward J.; Mercer, C. David; Schwartz, Richard J.; Tack, Steve; Bush, Brett C.; Dantowitz, Ronald F.; Kozubal, Marek J.

    2012-06-01

    High resolution calibrated near infrared (NIR) imagery of the Space Shuttle Orbiter was obtained during hypervelocity atmospheric re-entry of the STS-119, STS-125, STS-128, STS-131, STS-132, STS-133, and STS-134 missions. This data has provided information on the distribution of surface temperature and the state of the airflow over the windward surface of the Orbiter during descent. The thermal imagery complemented data collected with onboard surface thermocouple instrumentation. The spatially resolved global thermal measurements made during the Orbiter's hypersonic re-entry will provide critical flight data for reducing the uncertainty associated with present day ground-to-flight extrapolation techniques and current state-of-the-art empirical boundary-layer transition or turbulent heating prediction methods. Laminar and turbulent flight data is critical for the validation of physics-based, semi-empirical boundary-layer transition prediction methods as well as stimulating the validation of laminar numerical chemistry models and the development of turbulence models supporting NASA's next-generation spacecraft. In this paper we provide details of the NIR imaging system used on both air and land-based imaging assets. The paper will discuss calibrations performed on the NIR imaging systems that permitted conversion of captured radiant intensity (counts) to temperature values. Image processing techniques are presented to analyze the NIR data for vignetting distortion, best resolution, and image sharpness.

  1. Remote and In Situ Observations of an Unusual Earth-Directed Coronal Mass Ejection from Multiple Viewpoints

    NASA Technical Reports Server (NTRS)

    Nieves-Chinchilla, T.; Colaninno, R.; Vourlidas, A.; Szabo, A.; Lepping, R. P.; Boardsen, S. A.; Anderson, B. J.; Korth, H.

    2012-01-01

    During June 16-21, 2010, an Earth-directed Coronal Mass Ejection (CME) event was observed by instruments onboard STEREO, SOHO, MESSENGER and Wind. This event was the first direct detection of a rotating CME in the middle and outer corona. Here, we carry out a comprehensive analysis of the evolution of the CME in the interplanetary medium comparing in-situ and remote observations, with analytical models and three-dimensional reconstructions. In particular, we investigate the parallel and perpendicular cross section expansion of the CME from the corona through the heliosphere up to 1 AU. We use height-time measurements and the Graduated Cylindrical Shell (GCS) technique to model the imaging observations, remove the projection effects, and derive the 3-dimensional extent of the event. Then, we compare the results with in-situ analytical Magnetic Cloud (MC) models, and with geometrical predictions from past works. We find that the parallel (along the propagation plane) cross section expansion agrees well with the in-situ model and with the Bothmer & Schwenn [1998] empirical relationship based on in-situ observations between 0.3 and 1 AU. Our results effectively extend this empirical relationship to about 5 solar radii. The expansion of the perpendicular diameter agrees very well with the in-situ results at MESSENGER (~0.5 AU) but not at 1 AU. We also find an empirical relationship for the perpendicular expansion that differs slightly from that of Bothmer & Schwenn [1998]. More importantly, we find no evidence that the CME undergoes a significant latitudinal over-expansion, as is commonly assumed.

  2. Global modeling of soil evaporation efficiency for a chosen soil type

    NASA Astrophysics Data System (ADS)

    Georgiana Stefan, Vivien; Mangiarotti, Sylvain; Merlin, Olivier; Chanzy, André

    2016-04-01

    One way of reproducing the dynamics of a system is by deriving a set of differential, difference or discrete equations directly from observational time series. A method for obtaining such a system is the global modeling technique [1]. The approach is here applied to the dynamics of soil evaporative efficiency (SEE), defined as the ratio of actual to potential evaporation. SEE is an interesting variable to study since it is directly linked to soil evaporation (LE), which plays an important role in the water cycle, and since it can be easily derived from satellite measurements. One goal of the present work is to get a semi-empirical parameter that could account for the variety of the SEE dynamical behaviors resulting from different soil properties. Before trying to obtain such a semi-empirical parameter with the global modeling technique, it is first necessary to prove that this technique can be applied to the dynamics of SEE without any a priori information. The global modeling technique is thus applied here to a synthetic series of SEE, reconstructed from the TEC (Transfert Eau Chaleur) model [2]. It is found that an autonomous chaotic model can be retrieved for the dynamics of SEE. The obtained model is four-dimensional and exhibits a complex behavior. The comparison of the original and the model phase portraits shows very good consistency, proving that the original dynamical behavior is well described by the model. To evaluate the model accuracy, the forecasting error growth is estimated. To get a robust estimate of this error growth, the forecasting error is computed for prediction horizons of 0 to 9 hours, starting from different initial conditions, and statistics of the error growth are then computed. Results show that, for a maximum error level of 40% of the signal variance, the horizon of predictability is close to 3 hours, approximately one third of the diurnal part of the day. These results are interesting for various reasons. To the best of our knowledge, this is the first time that a chaotic model has been obtained for the SEE. It also shows that the SEE dynamics can be approximated by a low-dimensional autonomous model. From a theoretical point of view, it is also interesting to note that only very few low-dimensional models could be directly obtained for environmental dynamics, and that four-dimensional models are even rarer. Since a model could be obtained for the SEE, the global modeling technique can now be adapted and applied to a range of different soil conditions in order to get a global model that would account for the variability of soil properties. [1] MANGIAROTTI S., COUDRET R., DRAPEAU L., JARLAN L. Polynomial search and global modeling: two algorithms for modeling chaos. Physical Review E, 86(4), 046205, 2012. [2] CHANZY A., MUMEN M., RICHARD G. Accuracy of the top soil moisture simulation using a mechanistic model with limited soil characterization. Water Resources Research, 44, W03432, 2008.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mou, J.I.; King, C.

    The focus of this study is to develop a sensor fused process modeling and control methodology to model, assess, and then enhance the performance of a hexapod machine for precision product realization. A deterministic modeling technique was used to derive models for machine performance assessment and enhancement. Sensor fusion methodology was adopted to identify the parameters of the derived models. Empirical models and computational algorithms were also derived and implemented to model, assess, and then enhance the machine performance. The developed sensor fusion algorithms can be implemented on a PC-based open architecture controller to receive information from various sensors, assess the status of the process, determine the proper action, and deliver the command to actuators for task execution. This will enhance a hexapod machine's capability to produce workpieces within the imposed dimensional tolerances.

  4. Testing simulation and structural models with applications to energy demand

    NASA Astrophysics Data System (ADS)

    Wolff, Hendrik

    2007-12-01

    This dissertation deals with energy demand and consists of two parts. Part one proposes a unified econometric framework for modeling energy demand and examples illustrate the benefits of the technique by estimating the elasticity of substitution between energy and capital. Part two assesses the energy conservation policy of Daylight Saving Time and empirically tests the performance of electricity simulation. In particular, the chapter "Imposing Monotonicity and Curvature on Flexible Functional Forms" proposes an estimator for inference using structural models derived from economic theory. This is motivated by the fact that in many areas of economic analysis theory restricts the shape as well as other characteristics of functions used to represent economic constructs. Specific contributions are (a) to increase the computational speed and tractability of imposing regularity conditions, (b) to provide regularity preserving point estimates, (c) to avoid biases existent in previous applications, and (d) to illustrate the benefits of our approach via numerical simulation results. The chapter "Can We Close the Gap between the Empirical Model and Economic Theory" discusses the more fundamental question of whether the imposition of a particular theory to a dataset is justified. I propose a hypothesis test to examine whether the estimated empirical model is consistent with the assumed economic theory. Although the proposed methodology could be applied to a wide set of economic models, this is particularly relevant for estimating policy parameters that affect energy markets. This is demonstrated by estimating the Slutsky matrix and the elasticity of substitution between energy and capital, which are crucial parameters used in computable general equilibrium models analyzing energy demand and the impacts of environmental regulations. Using the Berndt and Wood dataset, I find that capital and energy are complements and that the data are significantly consistent with duality theory. Both results would not necessarily be achieved using standard econometric methods. The final chapter "Daylight Time and Energy" uses a quasi-experiment to evaluate a popular energy conservation policy: we challenge the conventional wisdom that extending Daylight Saving Time (DST) reduces energy demand. Using detailed panel data on half-hourly electricity consumption, prices, and weather conditions from four Australian states we employ a novel 'triple-difference' technique to test the electricity-saving hypothesis. We show that the extension failed to reduce electricity demand and instead increased electricity prices. We also apply the most sophisticated electricity simulation model available in the literature to the Australian data. We find that prior simulation models significantly overstate electricity savings. Our results suggest that extending DST will fail as an instrument to save energy resources.

  5. Volatility in financial markets: stochastic models and empirical results

    NASA Astrophysics Data System (ADS)

    Miccichè, Salvatore; Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.

    2002-11-01

    We investigate the historical volatility of the 100 most capitalized stocks traded in US equity markets. An empirical probability density function (pdf) of volatility is obtained and compared with the theoretical predictions of a lognormal model and of the Hull and White model. The lognormal model well describes the pdf in the region of low values of volatility whereas the Hull and White model better approximates the empirical pdf for large values of volatility. Both models fail in describing the empirical pdf over a moderately large volatility range.
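
    A small sketch of the kind of comparison described for the low-volatility region, assuming a synthetic volatility sample in place of the equity data; the Hull and White (stochastic-volatility) side of the comparison is omitted for brevity. The sample parameters are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical daily historical-volatility sample standing in for the empirical data.
    rng = np.random.default_rng(0)
    vol = rng.lognormal(mean=np.log(0.02), sigma=0.4, size=5000)

    # Fit a lognormal pdf with the location fixed at zero and compare empirical and
    # fitted cumulative probabilities in the low- and high-volatility regions.
    shape, loc, scale = stats.lognorm.fit(vol, floc=0.0)
    for label, v in zip(("low-volatility quartile", "high-volatility tail"),
                        np.quantile(vol, [0.25, 0.99])):
        print(label,
              " empirical cdf:", round(float((vol <= v).mean()), 4),
              " lognormal cdf:", round(float(stats.lognorm.cdf(v, shape, loc, scale)), 4))
    ```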

  6. A steady state model of agricultural waste pyrolysis: A mini review.

    PubMed

    Trninić, M; Jovović, A; Stojiljković, D

    2016-09-01

    Agricultural waste is one of the main renewable energy resources available, especially in an agricultural country such as Serbia. Pyrolysis has already been considered as an attractive alternative for disposal of agricultural waste, since the technique can convert this special biomass resource into granular charcoal, non-condensable gases and pyrolysis oils, which could furnish profitable energy and chemical products owing to their high calorific value. In this regard, the development of thermochemical processes requires a good understanding of pyrolysis mechanisms. Experimental and some literature data on the pyrolysis characteristics of corn cob and several other agricultural residues under inert atmosphere were structured and analysed in order to obtain conversion behaviour patterns of agricultural residues during pyrolysis within the temperature range from 300 °C to 1000 °C. Based on experimental and literature data analysis, empirical relationships were derived, including relations between the temperature of the process and yields of charcoal, tar and gas (CO2, CO, H2 and CH4). An analytical semi-empirical model was then used as a tool to analyse the general trends of biomass pyrolysis. Although this semi-empirical model needs further refinement before application to all types of biomass, its prediction capability was in good agreement with results obtained by the literature review. The compact representation could be used in other applications, to conveniently extrapolate and interpolate these results to other temperatures and biomass types. © The Author(s) 2016.

  7. Comparison of safety effect estimates obtained from empirical Bayes before-after study, propensity scores-potential outcomes framework, and regression model with cross-sectional data.

    PubMed

    Wood, Jonathan S; Donnell, Eric T; Porter, Richard J

    2015-02-01

    A variety of different study designs and analysis methods have been used to evaluate the performance of traffic safety countermeasures. The most common study designs and methods include observational before-after studies using the empirical Bayes method and cross-sectional studies using regression models. The propensity scores-potential outcomes framework has recently been proposed as an alternative traffic safety countermeasure evaluation method to address the challenges associated with selection biases that can be part of cross-sectional studies. Crash modification factors derived from the application of all three methods have not yet been compared. This paper compares the results of retrospective, observational evaluations of a traffic safety countermeasure using both before-after and cross-sectional study designs. The paper describes the strengths and limitations of each method, focusing primarily on how each addresses site selection bias, which is a common issue in observational safety studies. The Safety Edge paving technique, which seeks to mitigate crashes related to roadway departure events, is the countermeasure used in the present study to compare the alternative evaluation methods. The results indicated that all three methods yielded results that were consistent with each other and with previous research. The empirical Bayes results had the smallest standard errors. It is concluded that the propensity scores with potential outcomes framework is a viable alternative analysis method to the empirical Bayes before-after study. It should be considered whenever a before-after study is not possible or practical. Copyright © 2014 Elsevier Ltd. All rights reserved.
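
    For the empirical Bayes before-after element, a minimal sketch of the standard Hauer-style computation, not the authors' exact implementation: the safety performance function (SPF) prediction and the observed before-period count are mixed with a weight derived from the negative-binomial dispersion parameter, and the result is projected to the after period. All numbers are hypothetical, and the variance correction needed for an unbiased crash modification factor (CMF) is omitted.

    ```python
    def empirical_bayes_expected(mu_spf, k, observed):
        """Hauer-style EB estimate of expected crashes: a weighted mix of the SPF
        prediction mu and the observed count, with weight w = k / (k + mu), where k is
        the negative-binomial dispersion parameter of the SPF (Var = mu + mu**2 / k)."""
        w = k / (k + mu_spf)
        return w * mu_spf + (1.0 - w) * observed

    # Hypothetical numbers for a single treated site.
    mu_before = 6.2          # SPF-predicted crashes in the before period
    k = 2.5                  # SPF dispersion parameter
    obs_before = 9           # observed crashes in the before period
    ratio_after = 1.1        # SPF-based adjustment for exposure changes in the after period
    obs_after = 7            # observed crashes in the after period

    expected_before = empirical_bayes_expected(mu_before, k, obs_before)
    expected_after_no_treatment = expected_before * ratio_after
    cmf_point = obs_after / expected_after_no_treatment   # point estimate only
    print("EB expected (before):", round(expected_before, 2),
          " CMF point estimate:", round(cmf_point, 2))
    ```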

  8. Geoid undulations and gravity anomalies over the Aral Sea, the Black Sea and the Caspian Sea from a combined GEOS-3/SEASAT/GEOSAT altimeter data set

    NASA Technical Reports Server (NTRS)

    Au, Andrew Y.; Brown, Richard D.; Welker, Jean E.

    1991-01-01

    Satellite-based altimetric data taken by GEOS-3, SEASAT, and GEOSAT over the Aral Sea, the Black Sea, and the Caspian Sea are analyzed and a least squares collocation technique is used to predict the geoid undulations on a 0.25x0.25 deg. grid and to transform these geoid undulations to free-air gravity anomalies. Rapp's 180x180 geopotential model is used as the reference surface for the collocation procedure. The result of geoid to gravity transformation is, however, sensitive to the information content of the reference geopotential model used. For example, considerable detailed surface gravity data were incorporated into the reference model over the Black Sea, resulting in a reference model with significant information content at short wavelengths. Thus, estimation of short wavelength gravity anomalies from gridded geoid heights is generally reliable over regions such as the Black Sea, using the conventional collocation technique with local empirical covariance functions. Over regions such as the Caspian Sea, where detailed surface data are generally not incorporated into the reference model, unconventional techniques are needed to obtain reliable gravity anomalies. Based on the predicted gravity anomalies over these inland seas, speculative tectonic structures are identified and geophysical processes are inferred.

  9. Transition mixing study empirical model report

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.; White, C.

    1988-01-01

    The empirical model developed in the NASA Dilution Jet Mixing Program has been extended to include the curvature effects of transition liners. This extension is based on the results of a 3-D numerical model generated under this contract. The empirical model results agree well with the numerical model results for all test cases evaluated. The empirical model shows faster mixing rates compared to the numerical model. Both models show drift of jets toward the inner wall of a turning duct. The jets from the inner wall do not exhibit the familiar kidney-shaped structures observed for the outer wall jets or for jets injected in rectangular ducts.

  10. Reflective equilibrium and empirical data: third person moral experiences in empirical medical ethics.

    PubMed

    De Vries, Martine; Van Leeuwen, Evert

    2010-11-01

    In ethics, the use of empirical data has become more and more popular, leading to a distinct form of applied ethics, namely empirical ethics. This 'empirical turn' is especially visible in bioethics. There are various ways of combining empirical research and ethical reflection. In this paper we discuss the use of empirical data in a special form of Reflective Equilibrium (RE), namely the Network Model with Third Person Moral Experiences. In this model, the empirical data consist of the moral experiences of people in a practice. Although inclusion of these moral experiences in this specific model of RE can be well defended, their use in the application of the model still raises important questions. What precisely are moral experiences? How to determine relevance of experiences, in other words: should there be a selection of the moral experiences that are eventually used in the RE? How much weight should the empirical data have in the RE? And the key question: can the use of RE by empirical ethicists really produce answers to practical moral questions? In this paper we start to answer the above questions by giving examples taken from our research project on understanding the norm of informed consent in the field of pediatric oncology. We especially emphasize that incorporation of empirical data in a network model can reduce the risk of self-justification and bias and can increase the credibility of the RE reached. © 2009 Blackwell Publishing Ltd.

  11. Computerized modeling techniques predict the 3D structure of H₄R: facts and fiction.

    PubMed

    Zaid, Hilal; Ismael-Shanak, Siba; Michaeli, Amit; Rayan, Anwar

    2012-01-01

    The functional characterization of proteins presents a daily challenge for biochemical, medical and computational sciences, especially when the structures are undetermined empirically, as in the case of the Histamine H4 Receptor (H₄R). H₄R is a member of the GPCR superfamily that plays a vital role in immune and inflammatory responses. To date, the concept of GPCR modeling is highlighted in textbooks and pharmaceutical pamphlets, and this group of proteins has been the subject of almost 3500 publications in the scientific literature. The dynamic nature of determining the GPCR structure was elucidated through elegant and creative modeling methodologies, implemented by many groups around the world. H₄R, which belongs to the GPCR family, was cloned in 2000; understandably, its biological activity was reported only 65 times in PubMed. Here we attempt to cover the fundamental concepts of H₄R structure modeling and its implementation in drug discovery, especially those that have been experimentally tested, and to highlight some ideas that are currently being discussed on the dynamic nature of H₄R and on GPCR computerized techniques for 3D structure modeling.

  12. Morphological evolution of an ephemeral tidal inlet from opening to closure: The Albufeira inlet, Portugal

    NASA Astrophysics Data System (ADS)

    Fortunato, André B.; Nahon, Alphonse; Dodet, Guillaume; Rita Pires, Ana; Conceição Freitas, Maria; Bruneau, Nicolas; Azevedo, Alberto; Bertin, Xavier; Benevides, Pedro; Andrade, César; Oliveira, Anabela

    2014-02-01

    Like other similar coastal systems, the Albufeira lagoon is artificially opened every year to promote water renewal and closes naturally within a few months. The evolution of the Albufeira Lagoon Inlet from its opening in April 2010 to its closure 8 months later is qualitatively and quantitatively analyzed through a combination of monthly field surveys and the application of a process-based morphodynamic model. Field data alone would not cover the whole space-time domain of the morphology of the inlet during its lifetime, whereas the morphodynamic model alone cannot reliably simulate the morphological development. Using a nudging technique introduced herein, this problem is overcome and a reliable and complete data set is generated for describing the morphological development of the tidal inlet. The new technique is shown to be a good alternative to extensive model calibration, as it can drastically improve the model performance. Results reveal that the lagoon imported sediments during its lifespan. However, the whole system (lagoon plus littoral barrier) actually lost sediments to the sea. This behavior is partly attributed to the modulation of tidal asymmetry by the spring-neap cycle, which reduces flood dominance on spring tides. Results also allowed the assessment of the relationship between the spring tidal prism and the cross-section of tidal inlets (the PA relationship). While this relationship is well established from empirical, theoretical and numerical evidence, its validity in inlets that are small or away from equilibrium was unclear. Results for the Albufeira lagoon reveal an excellent match between the new data and the empirical PA relationship derived for larger inlets and equilibrium conditions, supporting the validity of the relationship beyond its original scope.

  13. VLBI-derived troposphere parameters during CONT08

    NASA Astrophysics Data System (ADS)

    Heinkelmann, R.; Böhm, J.; Bolotin, S.; Engelhardt, G.; Haas, R.; Lanotte, R.; MacMillan, D. S.; Negusini, M.; Skurikhina, E.; Titov, O.; Schuh, H.

    2011-07-01

    Time-series of zenith wet and total troposphere delays as well as north and east gradients are compared, and zenith total delays ( ZTD) are combined on the level of parameter estimates. Input data sets are provided by ten Analysis Centers (ACs) of the International VLBI Service for Geodesy and Astrometry (IVS) for the CONT08 campaign (12-26 August 2008). The inconsistent usage of meteorological data and models, such as mapping functions, causes systematics among the ACs, and differing parameterizations and constraints add noise to the troposphere parameter estimates. The empirical standard deviation of ZTD among the ACs with regard to an unweighted mean is 4.6 mm. The ratio of the analysis noise to the observation noise assessed by the operator/software impact (OSI) model is about 2.5. These and other effects have to be accounted for to improve the intra-technique combination of VLBI-derived troposphere parameters. While the largest systematics caused by inconsistent usage of meteorological data can be avoided and the application of different mapping functions can be considered by applying empirical corrections, the noise has to be modeled in the stochastic model of intra-technique combination. The application of different stochastic models shows no significant effects on the combined parameters but results in different mean formal errors: the mean formal errors of the combined ZTD are 2.3 mm (unweighted), 4.4 mm (diagonal), 8.6 mm [variance component (VC) estimation], and 8.6 mm (operator/software impact, OSI). On the one hand, the OSI model, i.e. the inclusion of off-diagonal elements in the cofactor-matrix, considers the reapplication of observations yielding a factor of about two for mean formal errors as compared to the diagonal approach. On the other hand, the combination based on VC estimation shows large differences among the VCs and exhibits a comparable scaling of formal errors. Thus, for the combination of troposphere parameters a combination of the two extensions of the stochastic model is recommended.

  14. Use of a Monte Carlo technique to complete a fragmented set of H2S emission rates from a wastewater treatment plant.

    PubMed

    Schauberger, Günther; Piringer, Martin; Baumann-Stanzer, Kathrin; Knauder, Werner; Petz, Erwin

    2013-12-15

    The impact of ambient concentrations in the vicinity of a plant can only be assessed if the emission rate is known. In this study, based on measurements of ambient H2S concentrations and meteorological parameters, the a priori unknown emission rates of a tannery wastewater treatment plant are calculated by an inverse dispersion technique. The calculations are determined using the Gaussian Austrian regulatory dispersion model. Following this method, emission data can be obtained, though only for a measurement station that is positioned such that the wind direction at the measurement station is leeward of the plant. Using the inverse transform sampling, which is a Monte Carlo technique, the dataset can also be completed for those wind directions for which no ambient concentration measurements are available. For the model validation, the measured ambient concentrations are compared with the calculated ambient concentrations obtained from the synthetic emission data of the Monte Carlo model. The cumulative frequency distribution of this new dataset agrees well with the empirical data. This inverse transform sampling method is thus a useful supplement for calculating emission rates using the inverse dispersion technique. Copyright © 2013 Elsevier B.V. All rights reserved.
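
    A minimal sketch of the inverse transform sampling step, assuming a synthetic gamma-distributed stand-in for the leeward-wind emission rates recovered by the inverse dispersion calculation; the real study draws from the empirical distribution of the reconstructed H2S emission rates.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical H2S emission rates (kg/h) recovered for leeward-wind hours only.
    observed_emissions = rng.gamma(shape=2.0, scale=1.5, size=300)

    # Inverse transform sampling: draw u ~ U(0,1) and map it through the empirical
    # quantile function to synthesize emission rates for the uncovered wind directions.
    sorted_e = np.sort(observed_emissions)
    cdf = np.arange(1, sorted_e.size + 1) / sorted_e.size
    u = rng.uniform(size=1000)
    synthetic_emissions = np.interp(u, cdf, sorted_e)

    print("observed mean:", observed_emissions.mean().round(2),
          " synthetic mean:", synthetic_emissions.mean().round(2))
    ```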

  15. Cognitive hypnotherapy: a new vision and strategy for research and practice.

    PubMed

    Alladin, Assen

    2012-04-01

    This article describes cognitive hypnotherapy (CH), a visionary model of adjunctive hypnotherapy that advances the role of clinical hypnosis to a recognized integrative model of psychotherapy. As hypnosis lacks a coherent theory of psychotherapy and behavior change, hypnotherapy has embodied a mixed bag of techniques and has thus been hindered from developing into a mainstream school of psychotherapy. One way of promoting the therapeutic standing of hypnotherapy as an adjunctive therapy is to systematically integrate it with a well-established psychotherapy. By blending hypnotherapy with cognitive behavior therapy, CH offers a unified version of clinical practice that fits the assimilative model of integrated psychotherapy, which represents the best integrative psychotherapy approach for merging both theory and empirical findings.

  16. Passive Super-Low Frequency electromagnetic prospecting technique

    NASA Astrophysics Data System (ADS)

    Wang, Nan; Zhao, Shanshan; Hui, Jian; Qin, Qiming

    2017-03-01

    The Super-Low Frequency (SLF) electromagnetic prospecting technique, adopted as a non-imaging remote sensing tool for depth sounding, is systematically proposed for subsurface geological survey. In this paper, we propose and theoretically illustrate natural source magnetic amplitudes as SLF responses for the first step. In order to directly calculate multi-dimensional theoretical SLF responses, modeling algorithms were developed and evaluated using the finite difference method. The theoretical results of three-dimensional (3-D) models show that the average normalized SLF magnetic amplitude responses were numerically stable and appropriate for practical interpretation. To explore the depth resolution, three-layer models were configured. The modeling results prove that the SLF technique is more sensitive to conductive objective layers than high resistive ones, with the SLF responses of conductive objective layers showing clearly rising amplitudes in the low frequency range. Afterwards, we proposed an improved Frequency-Depth transformation based on Bostick inversion to realize the depth sounding by empirically adjusting two parameters. The SLF technique has already been successfully applied in geothermal exploration and coalbed methane (CBM) reservoir interpretation, which demonstrates that the proposed methodology is effective in revealing low-resistivity distributions. Furthermore, it significantly contributes to reservoir identification through electromagnetic radiation anomaly extraction. Meanwhile, the SLF interpretation results are in accordance with dynamic production status of CBM reservoirs, which means it could provide an economical, convenient and promising method for exploring and monitoring subsurface geo-objects.
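
    For reference, a sketch of the standard Niblett-Bostick frequency-to-depth transform on which the improved transformation is based; the paper's two empirically adjusted parameters and its use of magnetic amplitude responses are not reproduced here, and the apparent-resistivity curve below is synthetic.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi

    def bostick_transform(period, rho_app):
        """Niblett-Bostick transform of an apparent-resistivity curve rho_a(T):
        depth h = sqrt(rho_a * T / (2*pi*mu0)) and rho_B = rho_a * (1 + m) / (1 - m),
        with m = d ln(rho_a) / d ln(T)."""
        m = np.gradient(np.log(rho_app), np.log(period))
        depth = np.sqrt(rho_app * period / (2.0 * np.pi * MU0))
        rho_b = rho_app * (1.0 + m) / (1.0 - m)
        return depth, rho_b

    # Synthetic smooth apparent-resistivity curve (ohm*m) over SLF periods (s),
    # mimicking a conductive layer at depth.
    period = np.logspace(-1, 2, 40)
    rho_app = 100.0 / (1.0 + period / 10.0) + 10.0
    depth, rho_b = bostick_transform(period, rho_app)
    print("depth range (km):", (depth.min() / 1e3).round(1), "to", (depth.max() / 1e3).round(1))
    ```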

  17. School Climate Research

    ERIC Educational Resources Information Center

    Thapa, Amrit

    2013-01-01

    School climate research is clearly evolving. The field demands rigorous and empirically sound research that focuses on relating specific aspects and activities of interventions to changes in specific components of school climate. We also need empirical evidence based on sound research techniques on how both interventions and climate affect…

  18. Predictive Modeling and Optimization of Vibration-assisted AFM Tip-based Nanomachining

    NASA Astrophysics Data System (ADS)

    Kong, Xiangcheng

    The tip-based vibration-assisted nanomachining process offers a low-cost, low-effort technique in fabricating nanometer scale 2D/3D structures in sub-100 nm regime. To understand its mechanism, as well as provide the guidelines for process planning and optimization, we have systematically studied this nanomachining technique in this work. To understand the mechanism of this nanomachining technique, we firstly analyzed the interaction between the AFM tip and the workpiece surface during the machining process. A 3D voxel-based numerical algorithm has been developed to calculate the material removal rate as well as the contact area between the AFM tip and the workpiece surface. As a critical factor to understand the mechanism of this nanomachining process, the cutting force has been analyzed and modeled. A semi-empirical model has been proposed by correlating the cutting force with the material removal rate, which was validated using experimental data from different machining conditions. With the understanding of its mechanism, we have developed guidelines for process planning of this nanomachining technique. To provide the guideline for parameter selection, the effect of machining parameters on the feature dimensions (depth and width) has been analyzed. Based on ANOVA test results, the feature width is only controlled by the XY vibration amplitude, while the feature depth is affected by several machining parameters such as setpoint force and feed rate. A semi-empirical model was first proposed to predict the machined feature depth under given machining condition. Then, to reduce the computation intensity, linear and nonlinear regression models were also proposed and validated using experimental data. Given the desired feature dimensions, feasible machining parameters could be provided using these predictive feature dimension models. As the tip wear is unavoidable during the machining process, the machining precision will gradually decrease. To maintain the machining quality, the guideline for when to change the tip should be provided. In this study, we have developed several metrics to detect tip wear, such as tip radius and the pull-off force. The effect of machining parameters on the tip wear rate has been studied using these metrics, and the machining distance before a tip must be changed has been modeled using these machining parameters. Finally, the optimization functions have been built for unit production time and unit production cost subject to realistic constraints, and the optimal machining parameters can be found by solving these functions.

  19. Evaluation of a 40 to 1 scale model of a low pressure engine

    NASA Technical Reports Server (NTRS)

    Cooper, C. E., Jr.; Thoenes, J.

    1972-01-01

    An evaluation of a scale model of a low pressure rocket engine which is used for secondary injection studies was conducted. Specific objectives of the evaluation were to: (1) assess the test conditions required for full scale simulations; (2) recommend fluids to be used for both primary and secondary flows; and (3) recommend possible modifications to be made to the scale model and its test facility to achieve the highest possible degree of simulation. A discussion of the theoretical and empirical scaling laws which must be observed to apply scale model test data to full scale systems is included. A technique by which the side forces due to secondary injection can be analytically estimated is presented.

  20. Factors influencing the consumption of alcohol and tobacco: the use and abuse of economic models.

    PubMed

    Godfrey, C

    1989-10-01

    This paper is concerned with the use of economic models in the debate about the role that tax increases and restrictions on advertising should play in reducing the health problems that arise from the consumption of alcohol and tobacco. It is argued that properly specified demand models that take account of all the important factors that influence consumption are required, otherwise inadequate modelling may lead to misleading estimates of the effects of policy changes. The ability of economics to deal with goods such as alcohol and tobacco that have addictive characteristics receives special attention. Recent advances in economic theory, estimation techniques and statistical testing are discussed, as is the problem of identifying policy recommendations from empirical results.

  1. Development of Specialization Scales for the MSPI: A Comparison of Empirical and Inductive Strategies

    ERIC Educational Resources Information Center

    Porfeli, Erik J.; Richard, George V.; Savickas, Mark L.

    2010-01-01

    An empirical measurement model for interest inventory construction uses internal criteria whereas an inductive measurement model uses external criteria. The empirical and inductive measurement models are compared and contrasted and then two models are assessed through tests of the effectiveness and economy of scales for the Medical Specialty…

  2. An Empirical Study of Re-sampling Techniques as a Method for Improving Error Estimates in Split-plot Designs

    DTIC Science & Technology

    2010-03-01

    sufficient replications often lead to models that lack precision in error estimation and thus imprecision in corresponding conclusions. This work develops...develop, examine and test methodologies for analyzing test results from split-plot designs. In particular, this work determines the applicability

  3. Zinc ascorbate: a combined experimental and computational study for structure elucidation

    NASA Astrophysics Data System (ADS)

    Ünaleroǧlu, C.; Zümreoǧlu-Karan, B.; Mert, Y.

    2002-03-01

    The structure of Zn(HA)2·4H2O (HA=ascorbate) has been examined by a number of techniques (13C NMR, 1H NMR, IR, EI/MS and TGA) and also modeled by the semi-empirical PM3 method. The experimental and computational results agreed on a five-fold coordination around Zn(II) where one ascorbate binds monodentately, the other bidentately and two water molecules occupy the remaining sites of a distorted square pyramid.

  4. New approaches in agent-based modeling of complex financial systems

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2017-12-01

    Agent-based modeling is a powerful simulation technique to understand the collective behavior and microscopic interaction in complex financial systems. Recently, the concept for determining the key parameters of agent-based models from empirical data instead of setting them artificially was suggested. We first review several agent-based models and the new approaches to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origination of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.

  5. Model robustness as a confirmatory virtue: The case of climate science.

    PubMed

    Lloyd, Elisabeth A

    2015-02-01

    I propose a distinct type of robustness, which I suggest can support a confirmatory role in scientific reasoning, contrary to the usual philosophical claims. In model robustness, repeated production of the empirically successful model prediction or retrodiction against a background of independently-supported and varying model constructions, within a group of models containing a shared causal factor, may suggest how confident we can be in the causal factor and predictions/retrodictions, especially once supported by a variety of evidence framework. I present climate models of greenhouse gas global warming of the 20th Century as an example, and emphasize climate scientists' discussions of robust models and causal aspects. The account is intended as applicable to a broad array of sciences that use complex modeling techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Bridging process-based and empirical approaches to modeling tree growth

    Treesearch

    Harry T. Valentine; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  7. Quantum optimization for training support vector machines.

    PubMed

    Anguita, Davide; Ridella, Sandro; Rivieccio, Fabio; Zunino, Rodolfo

    2003-01-01

    Refined concepts, such as Rademacher estimates of model complexity and nonlinear criteria for weighting empirical classification errors, represent recent and promising approaches to characterize the generalization ability of Support Vector Machines (SVMs). The advantages of those techniques lie in both improving the SVM representation ability and yielding tighter generalization bounds. On the other hand, they often make Quadratic-Programming algorithms no longer applicable, and SVM training cannot benefit from efficient, specialized optimization techniques. The paper considers the application of Quantum Computing to solve the problem of effective SVM training, especially in the case of digital implementations. The presented research compares the behavioral aspects of conventional and enhanced SVMs; experiments on both synthetic and real-world problems support the theoretical analysis. At the same time, the related differences between Quadratic-Programming and Quantum-based optimization techniques are considered.

  8. Comparison of Bayesian clustering and edge detection methods for inferring boundaries in landscape genetics

    USGS Publications Warehouse

    Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.

    2011-01-01

    Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness to detect genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.

  9. Feasibility of Active Machine Learning for Multiclass Compound Classification.

    PubMed

    Lang, Tobias; Flachsenberg, Florian; von Luxburg, Ulrike; Rarey, Matthias

    2016-01-25

    A common task in the hit-to-lead process is classifying sets of compounds into multiple, usually structural classes, which build the groundwork for subsequent SAR studies. Machine learning techniques can be used to automate this process by learning classification models from training compounds of each class. Gathering class information for compounds can be cost-intensive as the required data needs to be provided by human experts or experiments. This paper studies whether active machine learning can be used to reduce the required number of training compounds. Active learning is a machine learning method which processes class label data in an iterative fashion. It has gained much attention in a broad range of application areas. In this paper, an active learning method for multiclass compound classification is proposed. This method selects informative training compounds so as to optimally support the learning progress. The combination with human feedback leads to a semiautomated interactive multiclass classification procedure. This method was investigated empirically on 15 compound classification tasks containing 86-2870 compounds in 3-38 classes. The empirical results show that active learning can solve these classification tasks using 10-80% of the data which would be necessary for standard learning techniques.
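
    A compact sketch of the uncertainty-sampling (smallest-margin) flavor of active learning for a multiclass problem, using a synthetic descriptor matrix and a logistic-regression learner as stand-ins for the compound fingerprints and models in the study; the paper's own selection strategy may differ.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic multiclass "compound" data: 2000 samples, 30 descriptors, 4 classes.
    X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                               n_classes=4, n_clusters_per_class=1, random_state=0)
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X), size=20, replace=False))
    pool = [i for i in range(len(X)) if i not in set(labeled)]

    clf = LogisticRegression(max_iter=2000)
    for _ in range(20):                                   # 20 query rounds, 10 compounds each
        clf.fit(X[labeled], y[labeled])
        proba = np.sort(clf.predict_proba(X[pool]), axis=1)
        margin = proba[:, -1] - proba[:, -2]              # small margin = most informative
        query = np.argsort(margin)[:10]
        for q in sorted(query, reverse=True):             # pop from the back to keep indices valid
            labeled.append(pool.pop(q))                   # "oracle" supplies the class label
    print("training-set size:", len(labeled),
          " accuracy on remaining pool:", clf.score(X[pool], y[pool]))
    ```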

  10. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty. Also, in its most straight forward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. 
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
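    The sketch below is not taken from the report above; it is a minimal numpy illustration, under simplified assumptions (a linear toy problem and a single scale factor built from the average weighted residual variance), of how actual residuals can be folded into a weighted least squares covariance. All data, dimensions and the normalization are invented.
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear measurement model: y = H x + noise, with weights W = 1/sigma^2.
m, n = 50, 3                      # number of observations, state dimension
H = rng.normal(size=(m, n))       # design (partials) matrix
x_true = np.array([1.0, -2.0, 0.5])
sigma_true = 0.3                  # actual measurement noise level
y = H @ x_true + rng.normal(scale=sigma_true, size=m)

# Weighted least squares with an *assumed* (here deliberately wrong) noise level.
sigma_assumed = 0.1
W = np.eye(m) / sigma_assumed**2
N = H.T @ W @ H                   # normal matrix
x_hat = np.linalg.solve(N, H.T @ W @ y)

# Traditional (theoretical) covariance: maps only the assumed observation errors.
P_theory = np.linalg.inv(N)

# Empirical covariance: let the actual residuals set the scale via an averaged
# weighted residual variance (divided by degrees of freedom here; the report's
# exact normalization may differ).
r = y - H @ x_hat
s2 = (r @ W @ r) / (m - n)
P_empirical = s2 * P_theory

print("assumed-noise covariance diag :", np.diag(P_theory))
print("residual-informed covariance  :", np.diag(P_empirical))
```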

  11. Combining DSMC Simulations and ROSINA/COPS Data of Comet 67P/Churyumov-Gerasimenko to Develop a Realistic Empirical Coma Model and to Determine Accurate Production Rates

    NASA Astrophysics Data System (ADS)

    Hansen, K. C.; Fougere, N.; Bieler, A. M.; Altwegg, K.; Combi, M. R.; Gombosi, T. I.; Huang, Z.; Rubin, M.; Tenishev, V.; Toth, G.; Tzou, C. Y.

    2015-12-01

    We have previously published results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model and its characterization of the neutral coma of comet 67P/Churyumov-Gerasimenko through detailed comparison with data collected by the ROSINA/COPS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/COmet Pressure Sensor) instrument aboard the Rosetta spacecraft [Bieler, 2015]. Results from these DSMC models have been used to create an empirical model of the near comet coma (<200 km) of comet 67P. The empirical model characterizes the neutral coma in a comet centered, sun fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. The model is a significant improvement over simpler empirical models, such as the Haser model. While the DSMC results are a more accurate representation of the coma at any given time, the advantage of a mean state, empirical model is the ease and speed of use. One use of such an empirical model is in the calculation of a total cometary coma production rate from the ROSINA/COPS data. The COPS data are in situ measurements of gas density and velocity along the ROSETTA spacecraft track. Converting the measured neutral density into a production rate requires knowledge of the neutral gas distribution in the coma. Our empirical model provides this information and therefore allows us to correct for the spacecraft location to calculate a production rate as a function of heliocentric distance. We will present the full empirical model as well as the calculated neutral production rate for the period of August 2014 - August 2015 (perihelion).

  12. Empirical evaluation of the market price of risk using the CIR model

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Torosantucci, L.; Uboldi, A.

    2007-03-01

    We describe a simple but effective method for the estimation of the market price of risk. The basic idea is to compare the results obtained by following two different approaches in the application of the Cox-Ingersoll-Ross (CIR) model. In the first case, we apply the non-linear least squares method to cross-sectional data (i.e., all rates of a single day). In the second case, we consider the short rate obtained by means of the first procedure as a proxy of the real market short rate. Starting from this new proxy, we evaluate the parameters of the CIR model by means of martingale estimation techniques. The estimate of the market price of risk is provided by comparing results obtained with these two techniques, since this approach makes it possible to isolate the market price of risk and to evaluate, under the Local Expectations Hypothesis, the risk premium given by the market for different maturities. As a test case, we apply the method to data of the European Fixed Income Market.
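    As a hedged illustration of the first (cross-sectional) step only, the sketch below fits the standard CIR zero-coupon yield formula to one day of synthetic yields by non-linear least squares; the martingale estimation step and the risk-premium comparison are not shown, and all data and starting values are invented.
```python
import numpy as np
from scipy.optimize import least_squares

def cir_yield(tau, kappa, theta, sigma, r0):
    """Zero-coupon yield implied by the CIR model (standard closed form)."""
    gamma = np.sqrt(kappa**2 + 2.0 * sigma**2)
    e = np.exp(gamma * tau) - 1.0
    denom = (kappa + gamma) * e + 2.0 * gamma
    B = 2.0 * e / denom
    A = (2.0 * gamma * np.exp((kappa + gamma) * tau / 2.0) / denom) ** (2.0 * kappa * theta / sigma**2)
    return (B * r0 - np.log(A)) / tau

# Illustrative cross-section: maturities (years) and observed yields for one day.
taus = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10], dtype=float)
obs_yields = np.array([0.021, 0.022, 0.024, 0.027, 0.029, 0.032, 0.034, 0.035])

def residuals(p):
    kappa, theta, sigma, r0 = p
    return cir_yield(taus, kappa, theta, sigma, r0) - obs_yields

fit = least_squares(residuals, x0=[0.5, 0.03, 0.05, 0.02],
                    bounds=([1e-4, 1e-4, 1e-4, 1e-4], [5.0, 0.2, 1.0, 0.2]))
kappa, theta, sigma, r0 = fit.x
print("fitted CIR parameters (kappa, theta, sigma, r0):", np.round(fit.x, 4))
# r0 would then serve as the proxy short rate for the second (martingale) step.
```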

  13. Learning by Peers: An Alternative Learning Model for Digital Inclusion of Elderly People

    NASA Astrophysics Data System (ADS)

    de Sales, Márcia Barros; Silveira, Ricardo Azambuja; de Sales, André Barros; de Cássia Guarezi, Rita

    This paper presents a model of digital inclusion for elderly people using a learning-by-peers methodology. The model’s goal was to value and promote the potential capabilities of elderly people by preparing some of them to instruct other elderly people in using computers, several software tools and internet services. The project involved 66 elderly volunteers; 19 of them acted as multipliers and the others as students. The process was observed through the empirical technique of interaction workshops. This technique was chosen because it demands direct participation of the people involved in real interactions. We worked with peer learning to facilitate the communication between elderly-learners and elderly-multipliers, due to the similarity in language, rhythm and life history, and because they felt more secure developing the activities with people in their own age group. This multiplying model can be used in centers, organizations and other entities that work with elderly people for their digital inclusion.

  14. A Fast Smoothing Algorithm for Post-Processing of Surface Reflectance Spectra Retrieved from Airborne Imaging Spectrometer Data

    PubMed Central

    Gao, Bo-Cai; Liu, Ming

    2013-01-01

    Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
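    A rough sketch of the gain-curve idea under stated assumptions: synthetic spectra carrying a shared multiplicative ripple, and scipy's UnivariateSpline standing in for the paper's cubic spline smoothing filters.
```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# Synthetic scene: n_pix reflectance spectra over n_band wavelengths, with a
# shared multiplicative artifact (e.g. residual atmospheric/calibration ripple).
n_pix, n_band = 200, 120
wl = np.linspace(400, 2500, n_band)                 # nm
smooth_truth = 0.3 + 0.1 * np.sin(wl / 400.0)
artifact = 1.0 + 0.02 * np.sin(wl / 15.0)           # common high-frequency ripple
spectra = np.outer(rng.uniform(0.5, 1.5, n_pix), smooth_truth) * artifact

# Fit a smoothing cubic spline to each spectrum and accumulate the ratio
# raw/smooth; its scene average is the common gain curve.
ratios = np.empty_like(spectra)
for i, spec in enumerate(spectra):
    spl = UnivariateSpline(wl, spec, k=3, s=n_band * (0.02 * spec.mean()) ** 2)
    ratios[i] = spec / spl(wl)
gain = ratios.mean(axis=0)

# Apply the common gain to every spectrum in the scene.
smoothed = spectra / gain
print("max deviation captured by the gain curve:", np.abs(ratios - 1).max().round(4))
```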

  15. Conference on the Ionosphere and Radio Wave Propagation, 3rd, University of Sydney, Australia, February 11-15, 1985, Proceedings

    NASA Astrophysics Data System (ADS)

    Cole, D. G.; McNamara, L. F.

    1985-12-01

    Various papers on the ionosphere and radio wave propagation are presented. The subjects discussed include: day-to-day variability in foF2 at low latitudes over a solar cycle; semiempirical, low-latitude ionospheric model; remote sensing with the Jindalee skywave radar; photographic approach to irregularities in the 80-100 km region; interference of radio waves in a CW system; study of the F-region characteristics at Waltair; recent developments in the international reference ionosphere; research-oriented ionosonde with directional capabilities; and ionospheric forecasting for specific applications. Also addressed are: experimental and theoretical techniques for the equatorial F region; empirical models of ionospheric electron concentration; the Jindalee ionospheric sounding system; a semiempirical midlatitude ionospheric model; Es structure using an HF radar; short-term variations in f0F2 and IEC; nonreciprocity in Omega propagation observed at middle latitudes; propagation management for no acknowledge HF links; new techniques in ionospheric sounding and studies; and lunar effects in the ionospheric F region.

  16. Empirical modelling to predict the refractive index of human blood.

    PubMed

    Yahya, M; Saghir, M Z

    2016-02-21

    Optical techniques used for the measurement of the optical properties of blood are of great interest in clinical diagnostics. Blood analysis is a routine procedure used in medical diagnostics to confirm a patient's condition. Measuring the optical properties of blood is difficult due to the non-homogenous nature of the blood itself. In addition, there is a lot of variation in the refractive indices reported in the literature. These considerations motivated the researchers to develop a mathematical model that can be used to predict the refractive index of human blood as a function of concentration, temperature and wavelength. The experimental measurements were conducted on hemoglobin phantom samples (mimicking blood) using the Abbemat refractometer. Analysis of the results revealed a linear relationship between the refractive index and concentration as well as temperature, and a non-linear relationship between refractive index and wavelength. These results are in agreement with those found in the literature. In addition, a new formula was developed based on empirical modelling which suggests that temperature and wavelength coefficients be added to the Barer formula. The verification of this correlation confirmed its ability to determine refractive index and/or blood hematocrit values with appropriate clinical accuracy.
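    The published coefficients are not reproduced here; the sketch below only fits the stated functional form (linear in concentration and temperature, a Cauchy-type 1/lambda^2 wavelength term) to synthetic data by ordinary least squares, and shows how such a fit could be inverted for concentration. All coefficient values are illustrative.
```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measurements": refractive index vs concentration C (g/dL),
# temperature T (deg C) and wavelength lam (nm). Coefficients are invented.
C = rng.uniform(0, 15, 300)
T = rng.uniform(20, 40, 300)
lam = rng.uniform(450, 700, 300)
n_obs = 1.333 + 0.0019 * C - 1.0e-4 * (T - 20) + 3000.0 / lam**2
n_obs += rng.normal(scale=2e-4, size=n_obs.size)

# Empirical model: n = a0 + a1*C + a2*T + a3/lam^2 (linear in the coefficients).
X = np.column_stack([np.ones_like(C), C, T, 1.0 / lam**2])
coef, *_ = np.linalg.lstsq(X, n_obs, rcond=None)
print("fitted coefficients [a0, a1, a2, a3]:", coef)

# Inverting the fit gives a concentration/hematocrit proxy from a measured n:
def concentration(n, T, lam, a=coef):
    return (n - a[0] - a[2] * T - a[3] / lam**2) / a[1]

print("recovered C at n=1.35, T=25, lam=589:", round(concentration(1.35, 25.0, 589.0), 2))
```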

  17. Transcranial direct current stimulation in obsessive-compulsive disorder: emerging clinical evidence and considerations for optimal montage of electrodes.

    PubMed

    Senço, Natasha M; Huang, Yu; D'Urso, Giordano; Parra, Lucas C; Bikson, Marom; Mantovani, Antonio; Shavitt, Roseli G; Hoexter, Marcelo Q; Miguel, Eurípedes C; Brunoni, André R

    2015-07-01

    Neuromodulation techniques for obsessive-compulsive disorder (OCD) treatment have expanded with greater understanding of the brain circuits involved. Transcranial direct current stimulation (tDCS) might be a potential new treatment for OCD, although the optimal montage is unclear. To perform a systematic review on meta-analyses of repetitive transcranial magnetic stimulation (rTMS) and deep brain stimulation (DBS) trials for OCD, aiming to identify brain stimulation targets for future tDCS trials and to support the empirical evidence with computer head modeling analysis. Systematic reviews of rTMS and DBS trials on OCD in PubMed/MEDLINE were searched. For the tDCS computational analysis, we employed head models with the goal of optimally targeting current delivery to structures of interest. Only three references matched our eligibility criteria. We simulated four different electrode montages and analyzed current direction and intensity. Although DBS, rTMS and tDCS are not directly comparable and our theoretical model, based on DBS and rTMS targets, needs empirical validation, we found that the tDCS montage with the cathode over the pre-supplementary motor area and extra-cephalic anode seems to activate most of the areas related to OCD.

  18. Road vehicle emission factors development: A review

    NASA Astrophysics Data System (ADS)

    Franco, Vicente; Kousoulidou, Marina; Muntean, Marilena; Ntziachristos, Leonidas; Hausberger, Stefan; Dilara, Panagiota

    2013-05-01

    Pollutant emissions need to be accurately estimated to ensure that air quality plans are designed and implemented appropriately. Emission factors (EFs) are empirical functional relations between pollutant emissions and the activity that causes them. In this review article, the techniques used to measure road vehicle emissions are examined in relation to the development of EFs found in emission models used to produce emission inventories. The emission measurement techniques covered include those most widely used for road vehicle emissions data collection, namely chassis and engine dynamometer measurements, remote sensing, road tunnel studies and portable emission measurement systems (PEMS). The main advantages and disadvantages of each method with regard to emissions modelling are presented. A review of the ways in which EFs may be derived from test data is also performed, with a clear distinction between data obtained under controlled conditions (engine and chassis dynamometer measurements using standard driving cycles) and measurements under real-world operation.

  19. OPEC behavior

    NASA Astrophysics Data System (ADS)

    Yang, Bo

    This thesis aims to contribute to a further understanding of the real dynamics of OPEC production behavior and its impacts on the world oil market. A literature review in this area shows that the existing studies on OPEC still have some major deficiencies in theoretical interpretation and empirical estimation technique. After a brief background review in chapter 1, chapter 2 tests Griffin's market-sharing cartel model on the post-Griffin time horizon with a simultaneous system of equations, and an innovative hypothesis of OPEC's behavior (Saudi Arabia in particular) is then proposed based on the estimation results. Chapter 3 first provides a conceptual analysis of OPEC behavior under the framework of non-cooperative collusion with imperfect information. An empirical model is then constructed and estimated. The results of the empirical studies in this thesis strongly support the hypothesis that OPEC has operated as a market-sharing cartel since the early 1980s. In addition, the results also provide some support of the theory of non-cooperative collusion under imperfect information. OPEC members collude under normal circumstances and behave competitively at times in response to imperfect market signals of cartel compliance and some internal attributes. Periodic joint competition conduct plays an important role in sustaining the collusion in the long run. Saudi Arabia acts as the leader of the cartel, accommodating intermediate unfavorable market development and punishing others with a tit-for-tat strategy in extreme circumstances.

  20. Paradox as a Therapeutic Technique: A Review

    ERIC Educational Resources Information Center

    Soper, Patricia H.; L'Abate, Luciano

    1977-01-01

    The increasing use of paradoxical messages and injunctions in marital and familial therapies is reviewed. The theoretical, empirical, and clinical grounds for this practice, on the basis of this review, are still incomplete and questionable. The need for empirical research in this area is still great. (Author)

  1. Real time closed loop control of an Ar and Ar/O2 plasma in an ICP

    NASA Astrophysics Data System (ADS)

    Faulkner, R.; Soberón, F.; McCarter, A.; Gahan, D.; Karkari, S.; Milosavljevic, V.; Hayden, C.; Islyaikin, A.; Law, V. J.; Hopkins, M. B.; Keville, B.; Iordanov, P.; Doherty, S.; Ringwood, J. V.

    2006-10-01

    Real time closed loop control for plasma assisted semiconductor manufacturing has been the subject of academic research for over a decade. However, due to process complexity and the lack of suitable real time metrology, progress has been elusive and genuine real time, multi-input, multi-output (MIMO) control of a plasma assisted process has yet to be successfully implemented in an industrial setting. A 'plasma parameter control' strategy needs to be adopted, whereby process recipes are defined in terms of plasma properties, such as critical species densities, rather than input variables, such as rf power and gas flow rates, so that recipes may be transferable between different chamber types. While PIC simulations and multidimensional fluid models have contributed considerably to the basic understanding of plasmas and the design of process equipment, such models require a large amount of processing time and are hence unsuitable for testing control algorithms. In contrast, linear dynamical empirical models, obtained through system identification techniques, are ideal in some respects for control design since their computational requirements are comparatively small and their structure facilitates the application of classical control design techniques. However, such models provide little process insight and are specific to an operating point of a particular machine. An ideal first-principles-based, control-oriented model would exhibit the simplicity and computational requirements of an empirical model and, in addition, despite sacrificing first principles detail, capture enough of the essential physics and chemistry of the process in order to provide reasonably accurate qualitative predictions. This paper will discuss the development of such a first-principles based, control-oriented model of a laboratory inductively coupled plasma chamber. The model consists of a global model of the chemical kinetics coupled to an analytical model of power deposition. Dynamics of actuators including mass flow controllers and exhaust throttle are included and sensor characteristics are also modelled. The application of this control-oriented model to achieve multivariable closed loop control of specific species (e.g. atomic oxygen) and ion density using the actuators rf power, oxygen and argon flow rates, and pressure/exhaust flow rate in an Ar/O2 ICP plasma will be presented.
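    As a hedged aside, the snippet below illustrates the kind of "linear dynamical empirical model obtained through system identification" that the abstract contrasts with first-principles models: a first-order ARX fit by least squares to synthetic input/output data. Signals, model order and gains are invented.
```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: input u (e.g. rf power step changes) and a measured output y
# (e.g. a species density proxy) produced by an unknown plant.
N = 500
u = np.repeat(rng.uniform(0.5, 1.5, N // 50), 50)
y = np.zeros(N)
for k in range(1, N):                       # hidden first-order plant plus noise
    y[k] = 0.9 * y[k - 1] + 0.2 * u[k - 1] + 0.005 * rng.normal()

# First-order ARX model  y[k] = a*y[k-1] + b*u[k-1]  fitted by least squares.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a, b = theta
print(f"identified model: y[k] = {a:.3f}*y[k-1] + {b:.3f}*u[k-1]")

# Such a model is cheap to simulate, which is what makes it attractive for
# control design, but it only holds near the operating point it was fitted at.
```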

  2. Shape modeling with family of Pearson distributions: Langmuir waves

    NASA Astrophysics Data System (ADS)

    Vidojevic, Sonja

    2014-10-01

    Two major effects of Langmuir wave electric field influence on spectral line shapes are the appearance of depressions shifted from the unperturbed line and an additional dynamical line broadening. More realistic and accurate models of Langmuir waves are needed to study these effects with more confidence. In this article we present distribution shapes of a high-quality data set of Langmuir wave electric fields observed by the WIND satellite. Using well-developed numerical techniques, the distributions of the empirical measurements are modeled by the family of Pearson distributions. The results suggest that the energy conversion between an electron beam and the surrounding plasma is more complex than existing theoretical models assume. If the processes of Langmuir wave generation were better understood, the influence of Langmuir waves on spectral line shapes could be modeled more accurately.

  3. Effects of Material Degradation on the Structural Integrity of Composite Materials: Experimental Investigation and Modeling of High Temperature Degradation Mechanisms

    NASA Technical Reports Server (NTRS)

    Cunningham, Ronan A.; McManus, Hugh L.

    1996-01-01

    It has previously been demonstrated that simple coupled reaction-diffusion models can approximate the aging behavior of PMR-15 resin subjected to different oxidative environments. Based on empirically observed phenomena, a model coupling chemical reactions, both thermal and oxidative, with diffusion of oxygen into the material bulk should allow simulation of the aging process. Through preliminary modeling techniques such as this it has become apparent that accurate analytical models cannot be created until the phenomena which cause the aging of these materials are quantified. An experimental program is currently underway to quantify all of the reaction/diffusion related mechanisms involved. The following contains a summary of the experimental data which has been collected through thermogravimetric analyses of neat PMR-15 resin, along with analytical predictions from models based on the empirical data. Thermogravimetric analyses were carried out in a number of different environments - nitrogen, air and oxygen. The nitrogen provides data for the purely thermal degradation mechanisms while those in air provide data for the coupled oxidative-thermal process. The intent here is to effectively subtract the nitrogen atmosphere data (assumed to represent only thermal reactions) from the air and oxygen atmosphere data to back-figure the purely oxidative reactions. Once purely oxidative (concentration dependent) reactions have been quantified it should then be possible to quantify the diffusion of oxygen into the material bulk.
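    A minimal sketch, on hypothetical mass-loss curves, of the "subtract the nitrogen (thermal-only) data from the air data" step described above; real TGA data would need baseline and drift handling that is omitted here.
```python
import numpy as np

# Hypothetical thermogravimetric mass-loss curves (fraction of initial mass)
# recorded at slightly different time stamps in nitrogen and in air.
t_n2 = np.linspace(0, 100, 101)
m_n2 = 1.0 - 0.0008 * t_n2                                  # purely thermal loss
t_air = np.linspace(0, 100, 81)
m_air = 1.0 - 0.0008 * t_air - 2e-5 * np.clip(t_air - 20, 0, None) ** 1.5
# (air curve = the same thermal loss plus an extra oxidative term; both synthetic)

# Interpolate the nitrogen (thermal-only) curve onto the air time grid and
# subtract, leaving an estimate of the purely oxidative mass loss.
m_n2_on_air = np.interp(t_air, t_n2, m_n2)
oxidative_loss = m_n2_on_air - m_air
print("estimated oxidative mass loss at end of run:", oxidative_loss[-1].round(4))
```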

  4. Solving the problem of building models of crosslinked polymers: an example focussing on validation of the properties of crosslinked epoxy resins.

    PubMed

    Hall, Stephen A; Howlin, Brendan J; Hamerton, Ian; Baidak, Alex; Billaud, Claude; Ward, Steven

    2012-01-01

    The construction of molecular models of crosslinked polymers is an area of some difficulty and considerable interest. We report here a new method of constructing these models and validate the method by modelling three epoxy systems based on the epoxy monomers bisphenol F diglycidyl ether (BFDGE) and triglycidyl-p-amino phenol (TGAP) with the curing agent diamino diphenyl sulphone (DDS). The main emphasis of the work concerns the improvement of the techniques for the molecular simulation of these epoxies and specific attention is paid towards model construction techniques, including automated model building and prediction of glass transition temperatures (T(g)). Typical models comprise some 4200-4600 atoms (ca. 120-130 monomers). In a parallel empirical study, these systems have been cast, cured and analysed by dynamic mechanical thermal analysis (DMTA) to measure T(g). Results for the three epoxy systems yield good agreement with experimental T(g) ranges of 200-220°C, 270-285°C and 285-290°C with corresponding simulated ranges of 210-230°C, 250-300°C, and 250-300°C respectively.

  5. Solving the Problem of Building Models of Crosslinked Polymers: An Example Focussing on Validation of the Properties of Crosslinked Epoxy Resins

    PubMed Central

    Hall, Stephen A.; Howlin, Brendan J; Hamerton, Ian; Baidak, Alex; Billaud, Claude; Ward, Steven

    2012-01-01

    The construction of molecular models of crosslinked polymers is an area of some difficulty and considerable interest. We report here a new method of constructing these models and validate the method by modelling three epoxy systems based on the epoxy monomers bisphenol F diglycidyl ether (BFDGE) and triglycidyl-p-amino phenol (TGAP) with the curing agent diamino diphenyl sulphone (DDS). The main emphasis of the work concerns the improvement of the techniques for the molecular simulation of these epoxies and specific attention is paid towards model construction techniques, including automated model building and prediction of glass transition temperatures (Tg). Typical models comprise some 4200–4600 atoms (ca. 120–130 monomers). In a parallel empirical study, these systems have been cast, cured and analysed by dynamic mechanical thermal analysis (DMTA) to measure Tg. Results for the three epoxy systems yield good agreement with experimental Tg ranges of 200–220°C, 270–285°C and 285–290°C with corresponding simulated ranges of 210–230°C, 250–300°C, and 250–300°C respectively. PMID:22916182

  6. Generic Sensor Failure Modeling for Cooperative Systems.

    PubMed

    Jäger, Georg; Zug, Sebastian; Casimiro, António

    2018-03-20

    The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application's fault tolerance and thereby promises maintainability of such a system's safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques.
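    This is not the paper's mathematically defined generic failure model; it is only a minimal illustration of extracting an empirical error model (per-range bias, spread and an outlier rate) from paired sensor/reference data, with all numbers synthetic.
```python
import numpy as np

rng = np.random.default_rng(4)

# Paired samples: ground-truth distance and what a hypothetical IR distance
# sensor reports, including a distance-dependent bias and occasional dropouts.
truth = rng.uniform(0.2, 1.2, 2000)                         # metres
measured = truth + 0.02 * truth + rng.normal(0, 0.01, truth.size)
dropouts = rng.random(truth.size) < 0.02
measured[dropouts] = 1.5                                    # saturated/failed readings

errors = measured - truth

# A simple empirical failure model: per-bin error statistics plus a robustly
# estimated failure (outlier) rate, which a consuming application could check
# against its own fault-tolerance assumptions.
bins = np.linspace(0.2, 1.2, 6)
idx = np.digitize(truth, bins)
for b in range(1, len(bins)):
    e = errors[idx == b]
    print(f"range bin {b}: bias={e.mean():+.3f} m, std={e.std():.3f} m")
mad = np.median(np.abs(errors - np.median(errors)))
outlier_rate = np.mean(np.abs(errors - np.median(errors)) > 10 * mad)
print("empirical failure (outlier) rate:", round(outlier_rate, 4))
```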

  7. Generic Sensor Failure Modeling for Cooperative Systems

    PubMed Central

    Jäger, Georg; Zug, Sebastian

    2018-01-01

    The advent of cooperative systems entails a dynamic composition of their components. As this contrasts with current, statically composed systems, new approaches for maintaining their safety are required. In that endeavor, we propose an integration step that evaluates the failure model of shared information in relation to an application’s fault tolerance and thereby promises maintainability of such a system’s safety. However, it also poses new requirements on failure models, which are not fulfilled by state-of-the-art approaches. Consequently, this work presents a mathematically defined generic failure model as well as a processing chain for automatically extracting such failure models from empirical data. By examining data from a Sharp GP2D12 distance sensor, we show that the generic failure model not only fulfills the predefined requirements, but also models failure characteristics appropriately when compared to traditional techniques. PMID:29558435

  8. Non-randomized response model for sensitive survey with noncompliance.

    PubMed

    Wu, Qin; Tang, Man-Lai

    2016-12-01

    Collecting representative data on sensitive issues has long been problematic and challenging in public health prevalence investigation (e.g. non-suicidal self-injury), medical research (e.g. drug habits), social issue studies (e.g. history of child abuse), and their interdisciplinary studies (e.g. premarital sexual intercourse). Alternative data collection techniques that can be adopted to study sensitive questions validly become more important and necessary. As an alternative to the famous Warner randomized response model, the non-randomized response triangular model has recently been developed to encourage participants to provide truthful responses in surveys involving sensitive questions. Unfortunately, both randomized and non-randomized response models could underestimate the proportion of subjects with the sensitive characteristic, as some respondents do not believe that these techniques can protect their anonymity. As a result, some authors hypothesized that lack of trust and noncompliance should be highest among those who have the most to lose and the least use for the anonymity provided by these techniques. Some researchers noticed the existence of noncompliance and proposed new models to measure noncompliance in order to get reliable information. However, all previously proposed methods were based on randomized response models, which require randomizing devices, restrict the survey to face-to-face interviews, and lack reproducibility. Taking noncompliance into consideration, we introduce new non-randomized response techniques in which no covariate is required. Asymptotic properties of the proposed estimators for the sensitive characteristic as well as the noncompliance probabilities are developed. Our proposed techniques are empirically shown to yield accurate estimates for both sensitive and noncompliance probabilities. A real example about premarital sex among university students is used to demonstrate our methodologies. © The Author(s) 2014.
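    For orientation only: the sketch below simulates one common form of the triangular design and applies the basic moment estimator; the paper's noncompliance extension is not reproduced, and the design details assumed here are a simplification.
```python
import numpy as np

rng = np.random.default_rng(14)

# Assumed triangular design: W is a non-sensitive binary item with known
# p = P(W = 1). Respondents with the sensitive attribute always tick the
# triangle; the rest tick the triangle only if W = 1, otherwise the circle.
n, p, pi_true = 2000, 0.25, 0.10
Y = rng.random(n) < pi_true                 # sensitive attribute (never observed)
W = rng.random(n) < p                       # non-sensitive item
triangle = Y | W                            # the only thing the interviewer records

# Moment estimator: P(triangle) = pi + (1 - pi)*p  =>  pi = (lambda - p)/(1 - p)
lam = triangle.mean()
pi_hat = (lam - p) / (1 - p)
se = np.sqrt(lam * (1 - lam) / n) / (1 - p)
print(f"estimated prevalence: {pi_hat:.3f} +/- {1.96 * se:.3f} (true {pi_true})")
```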

  9. A novel method for tracing the movement of multiple individual soil particles under rainfall conditions using fluorescent videography.

    NASA Astrophysics Data System (ADS)

    Hardy, Robert; Pates, Jackie; Quinton, John

    2016-04-01

    The importance of developing new techniques to study soil movement cannot be overstated, especially for those that integrate new technology. Currently there are limited empirical data available about the movement of individual soil particles, particularly high quality time-resolved data. Here we present a new technique which allows multiple individual soil particles to be traced in real time under simulated rainfall conditions. The technique utilises fluorescent videography in combination with a fluorescent soil tracer, which is based on natural particles. The system has been successfully used on particles greater than ~130 micrometres in diameter. The technique uses HD video shot at 50 frames per second, providing extremely high temporal (0.02 s) and spatial resolution (sub-millimetre) of a particle's location without the need to perturb the system. Once the tracer has been filmed, the images are processed and analysed using a particle analysis and visualisation toolkit written in Python. The toolkit enables the creation of 2- and 3-D time-resolved graphs showing the location of one or more particles. Quantitative numerical analysis of a pathway (or collection of pathways) is also possible, allowing parameters such as particle speed and displacement to be assessed. Filming the particles removes the need to destructively sample material and has many side-benefits, reducing the time, money and effort expended in the collection, transport and laboratory analysis of soils, while delivering data in a digital form which is perfect for modern computer-driven analysis techniques. There are many potential applications for the technique. High resolution empirical data on how soil particles move could be used to create, parameterise and evaluate soil movement models, particularly those that use the movement of individual particles. As data can be collected while rainfall is occurring, the technique may offer the ability to study systems under dynamic conditions (rather than rainfall of constant intensity), which are more realistic; this was one of the motivations behind the development of this technique.
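    The authors' toolkit is not reproduced here; the sketch below shows, on synthetic frames, the generic ingredients such a toolkit needs: thresholded centroid detection per frame (scipy.ndimage) and nearest-neighbour linking into tracks from which speeds follow.
```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)

def detect_centroids(frame, threshold):
    """Label bright (fluorescent) blobs and return their centroids (row, col)."""
    labels, n = ndimage.label(frame > threshold)
    return np.array(ndimage.center_of_mass(frame, labels, np.arange(1, n + 1)))

# Synthetic 50 fps sequence: two bright particles drifting across a dark field.
frames = []
pos = np.array([[10.0, 5.0], [40.0, 8.0]])
for _ in range(30):
    img = rng.normal(0.0, 0.02, (64, 64))
    for r, c in pos:
        img[int(r), int(c)] += 1.0
    frames.append(img)
    pos += np.array([[0.4, 1.1], [-0.2, 0.9]])      # per-frame displacement

# Track: detect per frame, then link each track to the nearest detection in the
# next frame (adequate for this toy case; real data needs a proper linker).
tracks = [[c] for c in detect_centroids(frames[0], 0.5)]
for frame in frames[1:]:
    cents = detect_centroids(frame, 0.5)
    for tr in tracks:
        d = np.linalg.norm(cents - tr[-1], axis=1)
        tr.append(cents[d.argmin()])

speeds = [np.linalg.norm(np.diff(np.array(tr), axis=0), axis=1).mean() * 50
          for tr in tracks]                          # pixels per second at 50 fps
print("mean particle speeds (px/s):", np.round(speeds, 1))
```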

  10. Molecular Modeling of Nucleic Acid Structure: Electrostatics and Solvation

    PubMed Central

    Bergonzo, Christina; Galindo-Murillo, Rodrigo; Cheatham, Thomas E.

    2014-01-01

    This unit presents an overview of computer simulation techniques as applied to nucleic acid systems, ranging from simple in vacuo molecular modeling techniques to more complete all-atom molecular dynamics treatments that include an explicit representation of the environment. The third in a series of four units, this unit focuses on critical issues in solvation and the treatment of electrostatics. UNITS 7.5 & 7.8 introduced the modeling of nucleic acid structure at the molecular level. This included a discussion of how to generate an initial model, how to evaluate the utility or reliability of a given model, and ultimately how to manipulate this model to better understand the structure, dynamics, and interactions. Subject to an appropriate representation of the energy, such as a specifically parameterized empirical force field, the techniques of minimization and Monte Carlo simulation, as well as molecular dynamics (MD) methods, were introduced as means to sample conformational space for a better understanding of the relevance of a given model. From this discussion, the major limitations with modeling, in general, were highlighted. These are the difficult issues in sampling conformational space effectively—the multiple minima or conformational sampling problems—and accurately representing the underlying energy of interaction. In order to provide a realistic model of the underlying energetics for nucleic acids in their native environments, it is crucial to include some representation of solvation (by water) and also to properly treat the electrostatic interactions. These are discussed in detail in this unit. PMID:18428877

  11. Molecular modeling of nucleic Acid structure: electrostatics and solvation.

    PubMed

    Bergonzo, Christina; Galindo-Murillo, Rodrigo; Cheatham, Thomas E

    2014-12-19

    This unit presents an overview of computer simulation techniques as applied to nucleic acid systems, ranging from simple in vacuo molecular modeling techniques to more complete all-atom molecular dynamics treatments that include an explicit representation of the environment. The third in a series of four units, this unit focuses on critical issues in solvation and the treatment of electrostatics. UNITS 7.5 & 7.8 introduced the modeling of nucleic acid structure at the molecular level. This included a discussion of how to generate an initial model, how to evaluate the utility or reliability of a given model, and ultimately how to manipulate this model to better understand its structure, dynamics, and interactions. Subject to an appropriate representation of the energy, such as a specifically parameterized empirical force field, the techniques of minimization and Monte Carlo simulation, as well as molecular dynamics (MD) methods, were introduced as a way of sampling conformational space for a better understanding of the relevance of a given model. This discussion highlighted the major limitations with modeling in general. When sampling conformational space effectively, difficult issues are encountered, such as multiple minima or conformational sampling problems, and accurately representing the underlying energy of interaction. In order to provide a realistic model of the underlying energetics for nucleic acids in their native environments, it is crucial to include some representation of solvation (by water) and also to properly treat the electrostatic interactions. These subjects are discussed in detail in this unit. Copyright © 2014 John Wiley & Sons, Inc.

  12. Testing an empirically derived mental health training model featuring small groups, distributed practice and patient discussion.

    PubMed

    Murrihy, Rachael C; Byrne, Mitchell K; Gonsalvez, Craig J

    2009-02-01

    Internationally, family doctors seeking to enhance their skills in evidence-based mental health treatment are attending brief training workshops, despite clear evidence in the literature that short-term, massed formats are not likely to improve skills in this complex area. Reviews of the educational literature suggest that an optimal model of training would incorporate distributed practice techniques; repeated practice over a lengthy time period, small-group interactive learning, mentoring relationships, skills-based training and an ongoing discussion of actual patients. This study investigates the potential role of group-based training incorporating multiple aspects of good pedagogy for training doctors in basic competencies in brief cognitive behaviour therapy (BCBT). Six groups of family doctors (n = 32) completed eight 2-hour sessions of BCBT group training over a 6-month period. A baseline control design was utilised with pre- and post-training measures of doctors' BCBT skills, knowledge and engagement in BCBT treatment. Family doctors' knowledge, skills in and actual use of BCBT with patients improved significantly over the course of training compared with the control period. This research demonstrates preliminary support for the efficacy of an empirically derived group training model for family doctors. Brief CBT group-based training could prove to be an effective and viable model for future doctor training.

  13. High and low frequency unfolded partial least squares regression based on empirical mode decomposition for quantitative analysis of fuel oil samples.

    PubMed

    Bian, Xihui; Li, Shujuan; Lin, Ligang; Tan, Xiaoyao; Fan, Qingjie; Li, Ming

    2016-06-21

    Accurate model prediction is fundamental to the successful analysis of complex samples. To utilize the abundant information embedded in the frequency and time domains, a novel regression model is presented for quantitative analysis of hydrocarbon contents in fuel oil samples. The proposed method, named high and low frequency unfolded PLSR (HLUPLSR), integrates empirical mode decomposition (EMD) and an unfolding strategy with partial least squares regression (PLSR). In the proposed method, the original signals are firstly decomposed into a finite number of intrinsic mode functions (IMFs) and a residue by EMD. Secondly, the former high frequency IMFs are summed as a high frequency matrix and the latter IMFs and residue are summed as a low frequency matrix. Finally, the two matrices are unfolded into an extended matrix in the variable dimension, and then the PLSR model is built between the extended matrix and the target values. Coupled with ultraviolet (UV) spectroscopy, HLUPLSR has been applied to determine hydrocarbon contents of light gas oil and diesel fuel samples. Compared with single PLSR and other signal processing techniques, the proposed method shows superior prediction ability and better model interpretation. Therefore, the HLUPLSR method provides a promising tool for quantitative analysis of complex samples. Copyright © 2016 Elsevier B.V. All rights reserved.
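    A rough sketch of the HLUPLSR idea under stated assumptions: the PyEMD package is assumed for the EMD step, the high/low split is taken as a fixed number of leading IMFs, and the spectra and target values are synthetic.
```python
import numpy as np
from PyEMD import EMD                      # assumes the PyEMD package is installed
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)

# Synthetic "UV spectra": 60 samples x 200 variables, target = hydrocarbon content.
n_samples, n_vars = 60, 200
x_axis = np.linspace(0, 1, n_vars)
y = rng.uniform(10, 90, n_samples)
X = (np.outer(y, np.exp(-((x_axis - 0.5) ** 2) / 0.02))      # analyte band
     + 5 * np.sin(40 * np.pi * x_axis)                       # high-freq interference
     + rng.normal(0, 0.5, (n_samples, n_vars)))

def split_high_low(signal, n_high=2):
    """EMD split: sum of the first IMFs (high frequency) and of the rest (low)."""
    imfs = EMD()(signal)
    return imfs[:n_high].sum(axis=0), imfs[n_high:].sum(axis=0)

high = np.empty_like(X)
low = np.empty_like(X)
for i, s in enumerate(X):
    high[i], low[i] = split_high_low(s)

# Unfold: concatenate the high- and low-frequency matrices along the variable axis.
X_unfolded = np.hstack([high, low])

pls = PLSRegression(n_components=5).fit(X_unfolded, y)
print("calibration R^2:", round(pls.score(X_unfolded, y), 3))
```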

  14. Classifying Volcanic Activity Using an Empirical Decision Making Algorithm

    NASA Astrophysics Data System (ADS)

    Junek, W. N.; Jones, W. L.; Woods, M. T.

    2012-12-01

    Detection and classification of developing volcanic activity is vital to eruption forecasting. Timely information regarding an impending eruption would aid civil authorities in determining the proper response to a developing crisis. In this presentation, volcanic activity is characterized using an event tree classifier and a suite of empirical statistical models derived through logistic regression. Forecasts are reported in terms of the United States Geological Survey (USGS) volcano alert level system. The algorithm employs multidisciplinary data (e.g., seismic, GPS, InSAR) acquired by various volcano monitoring systems and source modeling information to forecast the likelihood that an eruption, with a volcanic explosivity index (VEI) > 1, will occur within a quantitatively constrained area. Logistic models are constructed from a sparse and geographically diverse dataset assembled from a collection of historic volcanic unrest episodes. Bootstrapping techniques are applied to the training data to allow for the estimation of robust logistic model coefficients. Cross validation produced a series of receiver operating characteristic (ROC) curves with areas ranging between 0.78 and 0.81, which indicates the algorithm has good predictive capabilities. The ROC curves also allowed for the determination of a false positive rate and optimum detection for each stage of the algorithm. Forecasts for historic volcanic unrest episodes in North America and Iceland were computed and are consistent with the actual outcome of the events.
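    A minimal, generic illustration (not the authors' event-tree code) of the two statistical ingredients named above, bootstrapped logistic regression coefficients and an ROC curve, using scikit-learn on synthetic monitoring features:
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.utils import resample

rng = np.random.default_rng(7)

# Synthetic multidisciplinary features (e.g. seismicity rate, deformation rate)
# and a binary outcome: eruption with VEI > 1 within the forecast window.
n = 300
X = rng.normal(size=(n, 2))
p_true = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.5)))
y = rng.random(n) < p_true

# Bootstrap the training data to obtain robust coefficient estimates.
coefs = []
for _ in range(200):
    Xb, yb = resample(X, y)
    coefs.append(LogisticRegression().fit(Xb, yb).coef_[0])
coefs = np.array(coefs)
print("bootstrap coefficient means :", coefs.mean(axis=0))
print("bootstrap coefficient 95% CI:", np.percentile(coefs, [2.5, 97.5], axis=0))

# ROC curve for the model fitted to the full dataset (in-sample, for brevity).
model = LogisticRegression().fit(X, y)
fpr, tpr, thresholds = roc_curve(y, model.predict_proba(X)[:, 1])
print("area under ROC curve:", round(auc(fpr, tpr), 3))
# A chosen operating point on this curve trades the false positive rate
# against the detection rate, as in the event-tree stages described above.
```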

  15. A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.

    PubMed

    Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W

    2005-01-01

    We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
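    A hedged sketch of the two ingredients described above, level-wise wavelet shrinkage and heavy-tailed modelling of the wavelet coefficients, applied to a synthetic 1-D signal with PyWavelets and scipy.stats; the atlas-based tessellation and fMRI specifics are omitted.
```python
import numpy as np
import pywt                          # assumes the PyWavelets package is installed
from scipy import stats

rng = np.random.default_rng(8)

# A synthetic 1-D profile standing in for imaging data along one dimension.
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(2 * np.pi * 5 * t) + (t > 0.5) * 0.8
data = signal + rng.standard_t(df=3, size=n) * 0.2     # heavy-tailed noise

# Discrete wavelet decomposition and frequency-adaptive soft shrinkage:
# each detail level gets its own threshold from a robust noise estimate.
coeffs = pywt.wavedec(data, 'db4', level=5)
shrunk = [coeffs[0]]                                    # keep the approximation
for d in coeffs[1:]:
    sigma = np.median(np.abs(d)) / 0.6745               # robust scale per level
    thr = sigma * np.sqrt(2 * np.log(d.size))
    shrunk.append(pywt.threshold(d, thr, mode='soft'))
denoised = pywt.waverec(shrunk, 'db4')[:n]

# Empirical distribution of the detail coefficients: compare a Gaussian fit
# with a heavy-tailed (Cauchy) fit, as suggested by the slow tail decay.
d_all = np.concatenate(coeffs[1:])
loc_c, scale_c = stats.cauchy.fit(d_all)
mu_g, sd_g = stats.norm.fit(d_all)
print("Cauchy log-likelihood:", stats.cauchy.logpdf(d_all, loc_c, scale_c).sum().round(1))
print("Normal log-likelihood:", stats.norm.logpdf(d_all, mu_g, sd_g).sum().round(1))
print("RMS error after shrinkage:", np.sqrt(np.mean((denoised - signal) ** 2)).round(3))
```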

  16. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.

  17. Numerical analysis of the effect of the kind of activating agent and the impregnation ratio on the parameters of the microporous structure of the active carbons

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Mirosław

    2015-09-01

    The paper presents the results of the research on the application of the LBET class adsorption models with the fast multivariant identification procedure as a tool for analysing the microporous structure of the active carbons obtained by chemical activation using potassium and sodium hydroxides as an activator. The proposed technique of the fast multivariant fitting of the LBET class models to the empirical adsorption data was employed particularly to evaluate the impact of the used activator and the impregnation ratio on the obtained microporous structure of the carbonaceous adsorbents.

  18. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    NASA Astrophysics Data System (ADS)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses for sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function (EOF) filtering techniques is capable of preventing overfitting problems, although the best performance is achieved when correlation is added to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least squares algorithm being filtered a posteriori.
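    A rough sketch of the super-ensemble regression with EOF filtering, using PCA as the EOF step on a synthetic multi-member SST dataset; the a posteriori spatial filter mentioned above is not included, and all member biases and noise levels are invented.
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)

# Synthetic MMSE dataset: n_members SST analyses on n_points grid cells over
# n_days training days, plus "observed" satellite SST for the same days.
n_members, n_points, n_days = 6, 400, 15
biases = rng.normal(0, 0.8, n_members)                 # fixed member biases
truth = rng.normal(20, 2, (n_days, n_points))
members = np.stack([truth + rng.normal(b, 0.5, truth.shape) for b in biases], axis=-1)
obs = truth + rng.normal(0, 0.2, truth.shape)

# Flatten (day, grid point) pairs into rows; ensemble members are predictors.
X = members.reshape(-1, n_members)
y = obs.reshape(-1)

# EOF filtering: project the predictors onto their leading EOFs (here via PCA)
# before the multi-linear regression, which limits overfitting to noisy modes.
pca = PCA(n_components=3).fit(X)
reg = LinearRegression().fit(pca.transform(X), y)

# Super-ensemble analysis for a new (synthetic) day, compared with the ensemble mean.
truth_new = rng.normal(20, 2, n_points)
members_new = np.stack([truth_new + rng.normal(b, 0.5, n_points) for b in biases], axis=-1)
sse = reg.predict(pca.transform(members_new))
print("ensemble-mean RMSE :", np.sqrt(np.mean((members_new.mean(-1) - truth_new) ** 2)).round(3))
print("super-ensemble RMSE:", np.sqrt(np.mean((sse - truth_new) ** 2)).round(3))
```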

  19. The Impact of Collaboration, Empowerment, and Choice: An Empirical Examination of the Collaborative Course Development Method

    ERIC Educational Resources Information Center

    Aiken, K. Damon; Heinze, Timothy C.; Meuter, Matthew L.; Chapman, Kenneth J.

    2017-01-01

    This research empirically tests collaborative course development (CCD)-a pedagogy presented in the 2016 "Marketing Education Review Special Issue on Teaching Innovations". A team of researchers taught experimental courses using CCD methods (employing various techniques including syllabus building, "flex-tures," free-choice…

  20. University Students' Understanding of the Concepts Empirical, Theoretical, Qualitative and Quantitative Research

    ERIC Educational Resources Information Center

    Murtonen, Mari

    2015-01-01

    University research education in many disciplines is frequently confronted by problems with students' weak level of understanding of research concepts. A mind map technique was used to investigate how students understand central methodological concepts of empirical, theoretical, qualitative and quantitative. The main hypothesis was that some…

  1. Clusters of Colleges and Universities: An Empirically Determined System.

    ERIC Educational Resources Information Center

    Korb, Roslyn

    A technique for classifying higher education institutions was developed in order to identify homogenous subsets of institutions and to compare an institution with its empirically determined peers. The majority of the data were obtained from a 4-year longitudinal file that merged the finance, faculty, enrollment, and institutional characteristics…

  2. An Empirical Model of Titan's Magnetic Environment During the Cassini Era: Evidence for Seasonal Variability

    NASA Astrophysics Data System (ADS)

    Simon, S.; Kabanovic, S.; Meeks, Z. C.; Neubauer, F. M.

    2017-12-01

    Based on the magnetic field data collected during the Cassini era, we construct an empirical model of the ambient magnetospheric field conditions along the orbit of Saturn's largest moon Titan. Observations from Cassini's close Titan flybys as well as 191 non-targeted crossings of Titan's orbit are taken into account. For each of these events we apply the classification technique of Simon et al. (2010) to categorize the ambient magnetospheric field as current sheet, lobe-like, magnetosheath, or an admixture of these regimes. Independent of Saturnian season, Titan's magnetic environment around noon local time is dominated by the perturbed fields of Saturn's broad magnetodisk current sheet. Only observations from the nightside magnetosphere reveal a slow, but steady change of the background field from southern lobe-type to northern lobe-type on a time scale of several years. This behavior is consistent with a continuous change in the curvature of the bowl-shaped magnetodisk current sheet over the course of the Saturnian year. We determine the occurrence rate of each magnetic environment category along Titan's orbit as a function of Saturnian season and local time.

  3. Empirical transfer functions: Application to determination of outermost core velocity structure using SmKS phases

    NASA Astrophysics Data System (ADS)

    Alexandrakis, Catherine; Eaton, David W.

    2007-11-01

    SmKS waves provide good resolution of outer-core velocity structure, but are affected by heterogeneity in the D'' region. We have developed an Empirical Transfer Function (ETF) technique that transforms a reference pulse (here, SmKS) into a target waveform (SKKS) by: (1) time-windowing the respective pulses, (2) applying Wiener deconvolution, and (3) convolving the output with a Gaussian waveform. Common source and path effects are implicitly removed by this process. We combine ETFs from 446 broadband seismograms to produce a global stack, from which S3KS-SKKS differential time can be measured accurately. As a result of stacking, the scatter in our measurements (0.43 s) is much less than the 1.29 s scatter in previous compilations. Although our data do not uniquely constrain outermost core velocities, we show that the fit of most standard models can be improved by perturbing the outermost core velocity. Our best-fitting model is formed using IASP91 with PREM-like velocity at the top of the core.
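    A minimal numerical sketch of the ETF recipe as described (window, Wiener deconvolution, Gaussian convolution), applied to a synthetic reference/target pulse pair with a known delay; the stacking over hundreds of seismograms is only noted in a comment, and the regularisation constant is an assumption.
```python
import numpy as np

rng = np.random.default_rng(10)

def gaussian_kernel(n, width):
    t = np.arange(n) - n // 2
    g = np.exp(-0.5 * (t / width) ** 2)
    return g / g.sum()

# Synthetic reference (SmKS-like) pulse and a target (SKKS-like) pulse that is
# simply a delayed, noisier copy; real data would also differ by path effects.
n = 512
ref = np.convolve(rng.normal(size=n), gaussian_kernel(65, 4), mode='same')
delay = 6                                   # "differential time" in samples
target = np.roll(ref, delay) + 0.01 * rng.normal(size=n)

# Empirical transfer function via Wiener deconvolution,
# ETF = conj(R)*S / (|R|^2 + eps), then convolution with a Gaussian.
R, S = np.fft.fft(ref), np.fft.fft(target)
eps = 0.01 * np.abs(R).max() ** 2
etf = np.real(np.fft.ifft(np.conj(R) * S / (np.abs(R) ** 2 + eps)))
etf_smooth = np.convolve(etf, gaussian_kernel(9, 1.5), mode='same')

# Stacking many such ETFs (not shown) sharpens the differential-time estimate.
print("recovered differential delay (samples):", int(np.argmax(etf_smooth)))
```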

  4. Hindcast of extreme sea states in North Atlantic extratropical storms

    NASA Astrophysics Data System (ADS)

    Ponce de León, Sonia; Guedes Soares, Carlos

    2015-02-01

    This study examines the variability of freak wave parameters around the eye of northern hemisphere extratropical cyclones. The data was obtained from a hindcast performed with the WAve Model (WAM) model forced by the wind fields of the Climate Forecast System Reanalysis (CFSR). The hindcast results were validated against the wave buoys and satellite altimetry data showing a good correlation. The variability of different wave parameters was assessed by applying the empirical orthogonal functions (EOF) technique on the hindcast data. From the EOF analysis, it can be concluded that the first empirical orthogonal function (V1) accounts for greater share of variability of significant wave height (Hs), peak period (Tp), directional spreading (SPR) and Benjamin-Feir index (BFI). The share of variance in V1 varies for cyclone and variable: for the 2nd storm and Hs V1 contains 96 % of variance while for the 3rd storm and BFI V1 accounts only for 26 % of variance. The spatial patterns of V1 show that the variables are distributed around the cyclones centres mainly in a lobular fashion.
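    For readers unfamiliar with the EOF variance shares quoted above, the sketch below computes them for a synthetic space-time field via an SVD of the anomaly matrix; the field and its spatial patterns are invented.
```python
import numpy as np

rng = np.random.default_rng(15)

# Synthetic field: a wave parameter (e.g. Hs) on a small grid over many time
# steps, built from two spatial patterns plus noise (all values illustrative).
nt, nx = 240, 300
pattern1 = np.sin(np.linspace(0, np.pi, nx))
pattern2 = np.cos(np.linspace(0, 3 * np.pi, nx))
field = (np.outer(rng.normal(0, 3.0, nt), pattern1)
         + np.outer(rng.normal(0, 1.0, nt), pattern2)
         + rng.normal(0, 0.3, (nt, nx)))

# EOF analysis via SVD of the anomaly matrix: squared singular values give the
# share of variance explained by each mode (V1, V2, ...).
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
var_share = s**2 / np.sum(s**2)
print("variance explained by V1, V2, V3:", np.round(var_share[:3], 3))
# vt[0] is the spatial pattern of V1; u[:, 0] * s[0] is its time amplitude.
```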

  5. Improved estimation of subject-level functional connectivity using full and partial correlation with empirical Bayes shrinkage.

    PubMed

    Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A

    2018-05-15

    Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.
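    The following is not the paper's measurement error model; it is a generic normal-normal empirical Bayes shrinkage of Fisher-transformed correlations toward a group mean, with a simple method-of-moments split of within- and between-subject variance, on synthetic data for a single connection.
```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic subject-level correlations for one connection: each subject has a
# true correlation (between-subject spread) observed with sampling noise that
# depends on the number of time points T in the scan.
n_sub, T = 50, 200
true_r = np.clip(rng.normal(0.3, 0.15, n_sub), -0.9, 0.9)
z_true = np.arctanh(true_r)
z_obs = z_true + rng.normal(0, 1 / np.sqrt(T - 3), n_sub)   # Fisher-z noise model

# Empirical Bayes (normal-normal) shrinkage of each subject's Fisher-z estimate
# toward the group mean, weighted by within- vs between-subject variance.
var_within = 1.0 / (T - 3)                                  # known sampling variance
var_total = z_obs.var(ddof=1)
var_between = max(var_total - var_within, 1e-6)             # method-of-moments split
lam = var_between / (var_between + var_within)              # weight on the subject
z_shrunk = lam * z_obs + (1 - lam) * z_obs.mean()

print("shrinkage weight toward group mean:", round(1 - lam, 3))
print("RMSE raw   :", np.sqrt(np.mean((np.tanh(z_obs) - true_r) ** 2)).round(4))
print("RMSE shrunk:", np.sqrt(np.mean((np.tanh(z_shrunk) - true_r) ** 2)).round(4))
```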

  6. Machine learning approaches for estimation of prediction interval for the model output.

    PubMed

    Shrestha, Durga L; Solomatine, Dimitri P

    2006-03-01

    A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of empirical distributions of the errors associated with all instances belonging to the cluster under consideration and propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods estimating the prediction interval. A new method for evaluating performance for estimating prediction interval is proposed as well.
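    A simplified sketch of the clustering-plus-empirical-quantiles idea: k-means stands in for the paper's fuzzy c-means, and the final step of regressing the prediction limits on the inputs is omitted; the data are synthetic and deliberately heteroscedastic.
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(12)

# Synthetic hydrologic-style data: error magnitude depends on the input value.
n = 1000
X = rng.uniform(0, 10, (n, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.2 + 0.3 * X[:, 0], n)

model = LinearRegression().fit(X, y)
errors = y - model.predict(X)

# Partition the input space (k-means here; the original method uses fuzzy
# c-means) and take empirical error quantiles per cluster as interval limits.
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
limits = {c: np.percentile(errors[km.labels_ == c], [5, 95]) for c in range(k)}

# Prediction interval for new inputs: point prediction plus the error limits
# of the cluster each new input falls into.
X_new = np.array([[1.0], [5.0], [9.0]])
for x, c in zip(X_new, km.predict(X_new)):
    lo, hi = model.predict(x.reshape(1, -1))[0] + limits[c]
    print(f"x={x[0]:.1f}: 90% prediction interval [{lo:.2f}, {hi:.2f}]")
```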

  7. Recent Progress in Treating Protein-Ligand Interactions with Quantum-Mechanical Methods.

    PubMed

    Yilmazer, Nusret Duygu; Korth, Martin

    2016-05-16

    We review the first successes and failures of a "new wave" of quantum chemistry-based approaches to the treatment of protein/ligand interactions. These approaches share the use of "enhanced", dispersion (D), and/or hydrogen-bond (H) corrected density functional theory (DFT) or semi-empirical quantum mechanical (SQM) methods, in combination with ensemble weighting techniques of some form to capture entropic effects. Benchmark and model system calculations in comparison to high-level theoretical as well as experimental references have shown that both DFT-D (dispersion-corrected density functional theory) and SQM-DH (dispersion and hydrogen bond-corrected semi-empirical quantum mechanical) perform much more accurately than older DFT and SQM approaches and also standard docking methods. In addition, DFT-D might soon become and SQM-DH already is fast enough to compute a large number of binding modes of comparably large protein/ligand complexes, thus allowing for a more accurate assessment of entropic effects.

  8. Qgui: A high-throughput interface for automated setup and analysis of free energy calculations and empirical valence bond simulations in biological systems.

    PubMed

    Isaksen, Geir Villy; Andberg, Tor Arne Heim; Åqvist, Johan; Brandsdal, Bjørn Olav

    2015-07-01

    Structural information and activity data have increased rapidly for many protein targets during the last decades. In this paper, we present a high-throughput interface (Qgui) for automated free energy and empirical valence bond (EVB) calculations that use molecular dynamics (MD) simulations for conformational sampling. Applications to ligand binding using both the linear interaction energy (LIE) method and the free energy perturbation (FEP) technique are given using the estrogen receptor (ERα) as a model system. Examples of free energy profiles obtained using the EVB method for the rate-limiting step of the enzymatic reaction catalyzed by trypsin are also shown. In addition, we present the calculation of high-precision Arrhenius plots to obtain the thermodynamic activation enthalpy and entropy with Qgui from running a large number of EVB simulations. Copyright © 2015 Elsevier Inc. All rights reserved.
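    As a hedged illustration of the activation enthalpy/entropy step mentioned above (not Qgui's own code), the snippet below builds an Eyring plot, ln(k/T) versus 1/T, from hypothetical rate constants and reads the activation enthalpy and entropy from the slope and intercept.
```python
import numpy as np

# Hypothetical EVB-derived rate constants k (s^-1) at several temperatures (K),
# standing in for the output of a large number of simulations.
R = 8.314462618               # gas constant, J mol^-1 K^-1
kB_over_h = 2.083661912e10    # Boltzmann/Planck ratio, s^-1 K^-1
T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])
dH_true, dS_true = 60e3, -40.0                       # J/mol and J/(mol K), invented
k = kB_over_h * T * np.exp(dS_true / R) * np.exp(-dH_true / (R * T))

# Eyring plot: ln(k/T) vs 1/T is linear; slope and intercept give dH* and dS*.
x = 1.0 / T
yv = np.log(k / T)
slope, intercept = np.polyfit(x, yv, 1)
dH = -slope * R
dS = (intercept - np.log(kB_over_h)) * R
print(f"activation enthalpy: {dH/1e3:.1f} kJ/mol, activation entropy: {dS:.1f} J/(mol K)")
```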

  9. On fitting the Pareto Levy distribution to stock market index data: Selecting a suitable cutoff value

    NASA Astrophysics Data System (ADS)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.

    2005-08-01

    The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated to extreme variations of stock markets indexes worldwide. The selection of the threshold parameter from empirical data and consequently, the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
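    A rough sketch of the cutoff-selection idea, assuming a Kolmogorov-Smirnov distance as the discrepancy measure (the paper's exact statistic may differ) and synthetic return data with a known Pareto tail:
```python
import numpy as np

rng = np.random.default_rng(13)

# Synthetic absolute index variations: a log-normal bulk plus a Pareto
# (power-law) tail above x = 0.02 with density exponent alpha = 3.
bulk = rng.lognormal(mean=-5.5, sigma=0.5, size=9000)
tail_part = 0.02 * (1 - rng.random(1000)) ** (-1 / 2.0)
data = np.concatenate([bulk, tail_part])

def ks_for_cutoff(x, xmin):
    """MLE tail exponent above xmin and the KS distance to the fitted Pareto."""
    tail = np.sort(x[x >= xmin])
    n = tail.size
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))            # continuous MLE
    emp_cdf = np.arange(1, n + 1) / n
    model_cdf = 1.0 - (xmin / tail) ** (alpha - 1.0)
    return alpha, np.max(np.abs(emp_cdf - model_cdf))

# Scan candidate cutoffs and keep the one with the smallest discrepancy.
candidates = np.quantile(data, np.linspace(0.80, 0.99, 40))
results = [(xmin, *ks_for_cutoff(data, xmin)) for xmin in candidates]
xmin_best, alpha_best, ks_best = min(results, key=lambda r: r[2])
print(f"selected cutoff: {xmin_best:.4f}, tail exponent: {alpha_best:.2f}, KS: {ks_best:.3f}")
```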

  10. Estimating the sources and transport of nutrients in the Waikato River Basin, New Zealand

    USGS Publications Warehouse

    Alexander, Richard B.; Elliott, Alexander H.; Shankar, Ude; McBride, Graham B.

    2002-01-01

    We calibrated SPARROW (Spatially Referenced Regression on Watershed Attributes) surface water‐quality models using measurements of total nitrogen and total phosphorus from 37 sites in the 13,900‐km2 Waikato River Basin, the largest watershed on the North Island of New Zealand. This first application of SPARROW outside of the United States included watersheds representative of a wide range of natural and cultural conditions and water‐resources data that were well suited for calibrating and validating the models. We applied the spatially distributed model to a drainage network of nearly 5000 stream reaches and 75 lakes and reservoirs to empirically estimate the rates of nutrient delivery (and their levels of uncertainty) from point and diffuse sources to streams, lakes, and watershed outlets. The resulting models displayed relatively small errors; predictions of stream yield (kg ha−1 yr−1) were typically within 30% or less of the observed values at the monitoring sites. There was strong evidence of the accuracy of the model estimates of nutrient sources and the natural rates of nutrient attenuation in surface waters. Estimated loss rates for streams, lakes, and reservoirs agreed closely with experimental measurements and empirical models from New Zealand, North America, and Europe as well as with previous U.S. SPARROW models. The results indicate that the SPARROW modeling technique provides a reliable method for relating experimental data and observations from small catchments to the transport of nutrients in the surface waters of large river basins.

  11. Pre- and Post-equinox ROSINA production rates calculated using a realistic empirical coma model derived from AMPS-DSMC simulations of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Hansen, Kenneth; Altwegg, Kathrin; Berthelier, Jean-Jacques; Bieler, Andre; Calmonte, Ursina; Combi, Michael; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, Tamas; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Lena; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu

    2016-04-01

    We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near-comet coma (<400 km) of comet 67P for the pre-equinox orbit of comet 67P/Churyumov-Gerasimenko. In this work we extend the empirical model to the post-equinox, post-perihelion time period. In addition, we extend the coma model to significantly larger distances from the comet (~100,000-1,000,000 km). The empirical model characterizes the neutral coma in a comet-centered, sun-fixed reference frame as a function of heliocentric distance, radial distance from the comet, local time and declination. Furthermore, we have generalized the model beyond application to 67P by replacing the heliocentric distance parameterizations and mapping them to production rates. Using this method, the model becomes significantly more general and can be applied to any comet. The model is a significant improvement over simpler empirical models, such as the Haser model. For 67P, the DSMC results are, of course, a more accurate representation of the coma at any given time, but the advantage of a mean-state empirical model is the ease and speed of use. One application of the empirical model is to de-trend the spacecraft motion from the ROSINA COPS and DFMS data (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Comet Pressure Sensor, Double Focusing Mass Spectrometer). The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on the single point measurement. In this presentation we will present the coma production rate as a function of heliocentric distance both pre- and post-equinox and perihelion.

  12. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.
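
    As a concrete illustration of partial pooling of regional coefficients, the sketch below fits a toy hierarchical regression (a regional intercept drawn from a common hyperprior plus one slope) with PyMC; the package choice, variable names, and synthetic data are assumptions, and the model is far simpler than the hybrid SPARROW formulation described above.

```python
import numpy as np
import pymc as pm   # PyMC v5 assumed; the study's actual software is not stated

rng = np.random.default_rng(0)

# Synthetic regional data: log streamflow vs. log drainage area with a
# region-specific intercept (a hypothetical stand-in for SPARROW-style terms)
n_regions, n_per_region = 6, 40
region = np.repeat(np.arange(n_regions), n_per_region)
log_area = rng.normal(5.0, 1.0, size=region.size)
true_intercepts = rng.normal(1.0, 0.4, size=n_regions)
log_flow = (true_intercepts[region] + 0.9 * log_area
            + rng.normal(0.0, 0.3, size=region.size))

with pm.Model() as hierarchical_model:
    # Hyperpriors: regional intercepts share a common distribution (partial pooling)
    mu_a = pm.Normal("mu_a", 0.0, 5.0)
    sigma_a = pm.HalfNormal("sigma_a", 2.0)
    a = pm.Normal("a", mu_a, sigma_a, shape=n_regions)
    b = pm.Normal("b", 0.0, 5.0)
    sigma = pm.HalfNormal("sigma", 2.0)
    mu = a[region] + b * log_area
    pm.Normal("obs", mu, sigma, observed=log_flow)
    idata = pm.sample(1000, tune=1000, chains=2, target_accept=0.9)

# Posterior means of the regional intercepts
print(idata.posterior["a"].mean(dim=("chain", "draw")).values)
```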

  13. A Comparison of Combustor-Noise Models

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.

    2012-01-01

    The present status of combustor-noise prediction in the NASA Aircraft Noise Prediction Program (ANOPP) [1] for current-generation (N) turbofan engines is summarized. Several semi-empirical models for turbofan combustor noise are discussed, including best methods for near-term updates to ANOPP. An alternate turbine-transmission factor [2] will appear as a user selectable option in the combustor-noise module GECOR in the next release. The three-spectrum model proposed by Stone et al. [3] for GE turbofan-engine combustor noise is discussed and compared with ANOPP predictions for several relevant cases. Based on the results presented herein and in their report [3], it is recommended that the application of this fully empirical combustor-noise prediction method be limited to situations involving only General-Electric turbofan engines. Long-term needs and challenges for the N+1 through N+3 time frame are discussed. Because the impact of other propulsion-noise sources continues to be reduced due to turbofan design trends, advances in noise-mitigation techniques, and expected aircraft configuration changes, the relative importance of core noise is expected to greatly increase in the future. The noise-source structure in the combustor, including the indirect one, and the effects of the propagation path through the engine and exhaust nozzle need to be better understood. In particular, the acoustic consequences of the expected trends toward smaller, highly efficient gas-generator cores and low-emission fuel-flexible combustors need to be fully investigated since future designs are quite likely to fall outside of the parameter space of existing (semi-empirical) prediction tools.

  14. Empirical models for the prediction of ground motion duration for intraplate earthquakes

    NASA Astrophysics Data System (ADS)

    Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.

    2017-07-01

    Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled recorded interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships of earthquake ground motion duration (i.e., significant and bracketed) with earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites) using data compiled from the intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with both magnitude and hypocentral distance. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The relationship developed for bracketed duration predicts lower durations for rock and soil sites. However, the relationship developed for significant duration predicts lower durations up to a certain distance and higher durations thereafter, compared to the existing relationships.
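
    A minimal sketch of the regression step, assuming a linear mixed-effects approximation (statsmodels MixedLM with a random intercept per event) rather than the paper's non-linear mixed-effects and logistic formulation; all column names and the synthetic records are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in records: magnitude, hypocentral distance (km), a soil flag,
# and log significant duration with an event-specific random term
n_events, n_rec = 40, 8
event_id = np.repeat(np.arange(n_events), n_rec)
mw = np.repeat(rng.uniform(3.0, 6.5, n_events), n_rec)
rhyp = rng.uniform(4.0, 1000.0, n_events * n_rec)
soil = rng.integers(0, 2, n_events * n_rec)
event_term = np.repeat(rng.normal(0.0, 0.3, n_events), n_rec)
log_dur = (-2.0 + 0.7 * mw + 0.6 * np.log(rhyp) + 0.2 * soil
           + event_term + rng.normal(0.0, 0.4, n_events * n_rec))

df = pd.DataFrame(dict(event_id=event_id, mw=mw, log_r=np.log(rhyp),
                       soil=soil, log_dur=log_dur))

# Random intercept per earthquake captures between-event variability
model = smf.mixedlm("log_dur ~ mw + log_r + soil", data=df, groups=df["event_id"])
result = model.fit()
print(result.params)
```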

  15. A Comparison of Full and Empirical Bayes Techniques for Inferring Sea Level Changes from Tide Gauge Records

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Tingley, M.

    2016-12-01

    Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
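
    The contrast between the two treatments can be illustrated with a toy conjugate example (not the hierarchical tide-gauge model of the study): fixing the noise scale at a plug-in estimate, as an empirical Bayes analysis does, yields a narrower interval for the mean than integrating the scale out, which is what a full Bayes analysis with a Jeffreys prior gives.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=3.0, size=15)        # synthetic sea level anomalies
n, ybar, s = len(y), y.mean(), y.std(ddof=1)

# Empirical Bayes flavour: noise scale treated as known (plug-in point estimate)
eb_lo, eb_hi = stats.norm.interval(0.95, loc=ybar, scale=s / np.sqrt(n))

# Full Bayes flavour (Jeffreys prior): scale uncertainty integrated out, giving
# a Student-t posterior for the mean
fb_lo, fb_hi = stats.t.interval(0.95, df=n - 1, loc=ybar, scale=s / np.sqrt(n))

print(f"plug-in (empirical Bayes) 95% interval width: {eb_hi - eb_lo:.2f}")
print(f"full Bayes 95% interval width:                {fb_hi - fb_lo:.2f}")
```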

  16. MIMO model of an interacting series process for Robust MPC via System Identification.

    PubMed

    Wibowo, Tri Chandra S; Saad, Nordin

    2010-07-01

    This paper discusses empirical modeling using the system identification technique, with a focus on an interacting series process. The study is carried out experimentally using a gaseous pilot plant as the process, whose dynamics exhibit the typical behaviour of an interacting series process. Three practical approaches are investigated and their performances are evaluated. The models developed are also examined in a real-time implementation of linear model predictive control. The selected model is able to reproduce the main dynamic characteristics of the plant in open loop and produces zero steady-state errors in the closed-loop control system. Several issues concerning the identification process and the construction of a MIMO state space model for an interacting series process are also discussed. 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Learning temporal rules to forecast instability in continuously monitored patients.

    PubMed

    Guillame-Bert, Mathieu; Dubrawski, Artur; Wang, Donghan; Hravnak, Marilyn; Clermont, Gilles; Pinsky, Michael R

    2017-01-01

    Inductive machine learning, and in particular extraction of association rules from data, has been successfully used in multiple application domains, such as market basket analysis, disease prognosis, fraud detection, and protein sequencing. The appeal of rule extraction techniques stems from their ability to handle intricate problems yet produce models based on rules that can be comprehended by humans, and are therefore more transparent. Human comprehension is a factor that may improve adoption and use of data-driven decision support systems clinically via face validity. In this work, we explore whether we can reliably and informatively forecast cardiorespiratory instability (CRI) in step-down unit (SDU) patients utilizing data from continuous monitoring of physiologic vital sign (VS) measurements. We use a temporal association rule extraction technique in conjunction with a rule fusion protocol to learn how to forecast CRI in continuously monitored patients. We detail our approach and present and discuss encouraging empirical results obtained using continuous multivariate VS data from the bedside monitors of 297 SDU patients spanning 29 346 hours (3.35 patient-years) of observation. We present example rules that have been learned from data to illustrate potential benefits of comprehensibility of the extracted models, and we analyze the empirical utility of each VS as a potential leading indicator of an impending CRI event. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Thermographic Imaging of the Space Shuttle During Re-Entry Using a Near Infrared Sensor

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Horvath, Thomas J.; Kerns, Robbie V.; Burke, Eric R.; Taylor, Jeff C.; Spisz, Tom; Gibson, David M.; Shea, Edward J.; Mercer, C. David; Schwartz, Richard J.

    2012-01-01

    High resolution calibrated near infrared (NIR) imagery of the Space Shuttle Orbiter was obtained during hypervelocity atmospheric re-entry of the STS-119, STS-125, STS-128, STS-131, STS-132, STS-133, and STS-134 missions. These data have provided information on the distribution of surface temperature and the state of the airflow over the windward surface of the Orbiter during descent. The thermal imagery complemented data collected with onboard surface thermocouple instrumentation. The spatially resolved global thermal measurements made during the Orbiter's hypersonic re-entry will provide critical flight data for reducing the uncertainty associated with present day ground-to-flight extrapolation techniques and current state-of-the-art empirical boundary-layer transition or turbulent heating prediction methods. Laminar and turbulent flight data are critical for the validation of physics-based, semi-empirical boundary-layer transition prediction methods, as well as for stimulating the validation of laminar numerical chemistry models and the development of turbulence models supporting NASA's next-generation spacecraft. In this paper we provide details of the NIR imaging system used on both air- and land-based imaging assets. The paper will discuss calibrations performed on the NIR imaging systems that permitted conversion of captured radiant intensity (counts) to temperature values. Image processing techniques are presented to analyze the NIR data for vignetting distortion, best resolution, and image sharpness. Keywords: HYTHIRM, Space Shuttle thermography, hypersonic imaging, near infrared imaging, histogram analysis, singular value decomposition, eigenvalue image sharpness

  19. Identification of sudden stiffness changes in the acceleration response of a bridge to moving loads using ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Aied, H.; González, A.; Cantero, D.

    2016-01-01

    The growth of heavy traffic together with aggressive environmental loads poses a threat to the safety of an aging bridge stock. Often, damage is only detected via visual inspection at a point when repair costs can be quite significant. Ideally, bridge managers would want to identify a stiffness change as soon as possible, i.e., as it is occurring, to plan for prompt measures before reaching a prohibitive cost. Recent developments in signal processing techniques such as wavelet analysis and empirical mode decomposition (EMD) have aimed to address this need by identifying a stiffness change from a localised feature in the structural response to traffic. However, the effectiveness of these techniques is limited by the roughness of the road profile, the vehicle speed and the noise level. In this paper, ensemble empirical mode decomposition (EEMD) is applied for the first time to the acceleration response of a bridge model to a moving load with the purpose of capturing sudden stiffness changes. EEMD is more adaptive and appears to be better suited to non-linear signals than wavelets, and it reduces the mode mixing problem present in EMD. EEMD is tested in a variety of theoretical 3D vehicle-bridge interaction scenarios. Stiffness changes are successfully identified, even for small affected regions, relatively poor profiles, high vehicle speeds and significant noise. The latter is due to the ability of EEMD to separate high frequency components associated with sudden stiffness changes from other frequency components associated with the vehicle-bridge interaction system.
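
    A minimal sketch of the decomposition step using the third-party PyEMD package (pip name EMD-signal; this package choice and the synthetic signal are assumptions, not the authors' implementation). A sudden stiffness change tends to appear as a localized burst in the highest-frequency IMF, which a simple amplitude threshold can flag.

```python
import numpy as np
from PyEMD import EEMD    # third-party package (pip install EMD-signal) -- an assumption

rng = np.random.default_rng(1)

# Synthetic mid-span acceleration: a smooth vehicle-bridge response plus a small
# localized disturbance at t = 0.6 s standing in for a sudden stiffness change
fs = 1000.0
t = np.arange(0.0, 1.2, 1.0 / fs)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 12 * t)
signal += 0.2 * np.exp(-((t - 0.6) * 200.0) ** 2)       # localized feature
signal += 0.05 * rng.normal(size=t.size)                # measurement noise

eemd = EEMD(trials=100, noise_width=0.05)   # ensemble size and added-noise level
imfs = eemd.eemd(signal, t)

# The highest-frequency IMF tends to concentrate the sudden change; a simple
# threshold on its magnitude flags candidate damage instants
imf1 = imfs[0]
flagged = np.abs(imf1) > 4.0 * np.median(np.abs(imf1))
print("candidate damage times (s):", np.round(t[flagged], 3))
```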

  20. Comparison of analysis and flight test data for a drone aircraft with active flutter suppression

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Pototzky, A. S.

    1981-01-01

    A drone aircraft equipped with an active flutter suppression system is considered with emphasis on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are given for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. The mathematical models are included and existing analytical techniques are described as well as an alternative analytical technique for obtaining closed-loop results.

  1. Reconstruction of the erythemal UV radiation data in Novi Sad (Serbia) using the NEOPLANTA parametric model

    NASA Astrophysics Data System (ADS)

    Malinovic-Milicevic, S.; Mihailovic, D. T.; Radovanovic, M. M.

    2015-07-01

    This paper focuses on the development and application of a technique for filling gaps in daily erythemal UV dose data and for reconstructing past daily erythemal UV doses in Novi Sad, Serbia. The technique involves developing an empirical equation for estimating daily erythemal UV doses from relative daily sunshine duration under all-sky conditions. Good agreement was found between modeled and measured values of erythemal UV doses. This technique was used for filling short gaps in the erythemal UV dose measurement series (2003-2009) as well as for reconstructing past time-series values (1981-2002). A statistically significant positive erythemal UV dose trend of 6.9 J m-2 per year was found for the period 1981-2009. In relation to the reference period 1981-1989, an increase in the erythemal UV dose of 6.92 % is visible in the period 1990-1999, and an increase of 9.67 % can be seen in the period 2000-2009. The strongest increase in erythemal UV doses was found for the winter and spring seasons.
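
    The paper's exact empirical equation is not reproduced here; the sketch below fits a generic quadratic transfer function between relative sunshine duration and daily dose on synthetic data, purely to illustrate the gap-filling idea (in practice the fit would be done per season or with the dose normalized by a clear-sky value).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: relative daily sunshine duration s = S/S0 and daily
# erythemal dose (J m^-2); the real fit would use the 2003-2009 measurements
rel_sun = rng.uniform(0.0, 1.0, size=365)
dose = 1800.0 * (0.25 + 0.75 * rel_sun ** 1.3) * (1 + 0.05 * rng.normal(size=365))

# A simple empirical transfer function: quadratic in relative sunshine duration
coeffs = np.polyfit(rel_sun, dose, deg=2)
fitted = np.polyval(coeffs, rel_sun)
rmse = np.sqrt(np.mean((fitted - dose) ** 2))
print("coefficients:", np.round(coeffs, 1), " RMSE (J m^-2):", round(rmse, 1))

# Reconstructing a missing day from its sunshine record alone
print("estimated dose for s = 0.65:", round(np.polyval(coeffs, 0.65), 1), "J m^-2")
```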

  2. Improved determination of vector lithospheric magnetic anomalies from MAGSAT data

    NASA Technical Reports Server (NTRS)

    Ravat, Dhananjay

    1993-01-01

    Scientific contributions made in developing new methods to isolate and map vector magnetic anomalies from measurements made by Magsat are described. In addition to the objective of the proposal, the isolation and mapping of equatorial vector lithospheric Magsat anomalies, the isolation of polar ionospheric fields during the period was also studied. Significant progress was also made in the isolation of polar delta(Z) component and scalar anomalies, as well as in the integration and synthesis of various techniques for removing equatorial and polar ionospheric effects. The significant contributions of this research are: (1) development of empirical/analytical techniques for modeling ionospheric fields in Magsat data and their removal from uncorrected anomalies to obtain better estimates of lithospheric anomalies (this task was accomplished for equatorial delta(X), delta(Z), and delta(B) component and polar delta(Z) and delta(B) component measurements); (2) integration of important processing techniques developed during the last decade with the newly developed technologies of ionospheric field modeling into an optimum processing scheme; and (3) implementation of the above processing scheme to map the most robust magnetic anomalies of the lithosphere (components as well as scalar).

  3. Mammalian cell culture process for monoclonal antibody production: nonlinear modelling and parameter estimation.

    PubMed

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest growing products of pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAbs production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. By using a dynamical model of such kind of processes, an optimization-based technique for estimation of kinetic parameters in the model of mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.

  4. Mammalian Cell Culture Process for Monoclonal Antibody Production: Nonlinear Modelling and Parameter Estimation

    PubMed Central

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest growing products of pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAbs production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. By using a dynamical model of such kind of processes, an optimization-based technique for estimation of kinetic parameters in the model of mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies. PMID:25685797
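
    A minimal sketch of the estimation idea, assuming a toy Monod-type growth model in place of the paper's mammalian cell culture kinetics: a plain particle swarm optimizer minimizes the sum-of-squares error between simulated and (synthetic) measured biomass. All parameter ranges and PSO settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def growth_model(params, t, x0=0.1, s0=5.0):
    """Toy Monod-type batch growth model (a stand-in for the paper's
    mammalian cell-culture kinetics, not the actual model)."""
    mu_max, ks = params
    x, s = x0, s0
    dt = t[1] - t[0]
    xs = []
    for _ in t:
        mu = mu_max * s / (ks + s)
        x = x + dt * mu * x
        s = max(s - dt * mu * x / 0.5, 0.0)   # fixed yield coefficient of 0.5
        xs.append(x)
    return np.array(xs)

t = np.linspace(0.0, 48.0, 97)
true_params = np.array([0.06, 1.2])                    # (mu_max, Ks)
data = growth_model(true_params, t) * (1.0 + 0.02 * rng.normal(size=t.size))

def cost(params):
    return np.sum((growth_model(params, t) - data) ** 2)

# Minimal particle swarm optimization of the error function
n_particles, n_iter = 30, 200
lb, ub = np.array([0.0, 0.1]), np.array([0.2, 5.0])    # search bounds
pos = lb + (ub - lb) * rng.random((n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                              # inertia / acceleration weights
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("estimated (mu_max, Ks):", np.round(gbest, 3), " true:", true_params)
```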

  5. High Technology Service Value Maximization through an MCDM-Based Innovative e-Business Model

    NASA Astrophysics Data System (ADS)

    Huang, Chi-Yo; Tzeng, Gwo-Hshiung; Ho, Wen-Rong; Chuang, Hsiu-Tyan; Lue, Yeou-Feng

    The emergence of the Internet has changed high technology marketing channels thoroughly in the past decade, and e-commerce has become one of the most efficient channels, through which high technology firms may skip intermediaries and reach end customers directly. However, defining appropriate e-business models for commercializing new high technology products or services through the Internet is not easy. To overcome the above-mentioned problems, a novel analytic framework based on the concept of expanding high technology customers’ competence sets by leveraging high technology service firms’ capabilities and resources, together with novel multiple criteria decision making (MCDM) techniques, is proposed in order to define an appropriate e-business model. An empirical case study of a silicon intellectual property (SIP) commercialization e-business model based on MCDM techniques is provided to verify the effectiveness of this novel analytic framework. The analysis successfully assisted a Taiwanese IC design service firm in defining an e-business model for maximizing its customer's SIP transactions. In the future, the novel MCDM framework can be applied successfully to new business model definitions in the high technology industry.

  6. Stopping Distances: An Excellent Example of Empirical Modelling.

    ERIC Educational Resources Information Center

    Lawson, D. A.; Tabor, J. H.

    2001-01-01

    Explores the derivation of empirical models for the stopping distance of a car being driven at a range of speeds. Indicates that the calculation of stopping distances makes an excellent example of empirical modeling because it is a situation that is readily understood and particularly relevant to many first-year undergraduates who are learning or…
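
    The classic worked example: thinking distance grows linearly with speed (constant reaction time) and braking distance quadratically (kinetic energy), so an empirical model of the form d = a*v + b*v^2 can be fitted by least squares. The data points below are illustrative values in the spirit of the UK Highway Code tables, not figures taken from the article.

```python
import numpy as np

# Illustrative speed (mph) and total stopping distance (m) pairs
v = np.array([20, 30, 40, 50, 60, 70], dtype=float)
d = np.array([12, 23, 36, 53, 73, 96], dtype=float)

# Empirical model d = a*v + b*v**2: linear term for thinking distance,
# quadratic term for braking distance
A = np.column_stack([v, v ** 2])
(a, b), *_ = np.linalg.lstsq(A, d, rcond=None)

print(f"d(v) = {a:.3f} v + {b:.4f} v^2")
print("predicted stopping distance at 45 mph:", round(a * 45 + b * 45 ** 2, 1), "m")
```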

  7. Does Gene Tree Discordance Explain the Mismatch between Macroevolutionary Models and Empirical Patterns of Tree Shape and Branching Times?

    PubMed Central

    Stadler, Tanja; Degnan, James H.; Rosenberg, Noah A.

    2016-01-01

    Classic null models for speciation and extinction give rise to phylogenies that differ in distribution from empirical phylogenies. In particular, empirical phylogenies are less balanced and have branching times closer to the root compared to phylogenies predicted by common null models. This difference might be due to null models of the speciation and extinction process being too simplistic, or due to the empirical datasets not being representative of random phylogenies. A third possibility arises because phylogenetic reconstruction methods often infer gene trees rather than species trees, producing an incongruity between models that predict species tree patterns and empirical analyses that consider gene trees. We investigate the extent to which the difference between gene trees and species trees under a combined birth–death and multispecies coalescent model can explain the difference in empirical trees and birth–death species trees. We simulate gene trees embedded in simulated species trees and investigate their difference with respect to tree balance and branching times. We observe that the gene trees are less balanced and typically have branching times closer to the root than the species trees. Empirical trees from TreeBase are also less balanced than our simulated species trees, and model gene trees can explain an imbalance increase of up to 8% compared to species trees. However, we see a much larger imbalance increase in empirical trees, about 100%, meaning that additional features must also be causing imbalance in empirical trees. This simulation study highlights the necessity of revisiting the assumptions made in phylogenetic analyses, as these assumptions, such as equating the gene tree with the species tree, might lead to a biased conclusion. PMID:26968785

  8. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty to estimate irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.

  9. Controls of channel morphology and sediment concentration on flow resistance in a large sand-bed river: A case study of the lower Yellow River

    NASA Astrophysics Data System (ADS)

    Ma, Yuanxu; Huang, He Qing

    2016-07-01

    Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for varying types of rivers has remained difficult to be achieved to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures have been used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), percentage of data with a relative error ≤ 50% and 25% (P50, P25), and percentage of data with overestimated error (POE). Three model selection criterions are also employed to assess the model predictability: Akaike information criterion (AIC), Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can only explain a small proportion of variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are involved, the new model (7) achieves a better performance than the previous ones. The MRE of model (7) is generally < 20%, which is apparently better than that reported by previous studies. This model is validated using the data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than the calibrating model. This probably resulted from the temporal shift of dominant controls caused by channel change resulting from varying flow regime. With the advancements of earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate in estimation of flow resistance in the large sand-bed rivers like the lower Yellow River.
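
    Standard definitions of several of the measures listed above are sketched below; the exact formulations used in the paper (for example its MRE, MSE, and MSC) may differ, so this is an illustrative helper rather than a reproduction of the authors' code.

```python
import numpy as np

def fit_statistics(obs, pred, n_params):
    """A few of the goodness-of-fit measures used to compare resistance models."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    n = obs.size
    resid = obs - pred
    rmse = np.sqrt(np.mean(resid ** 2))
    nash = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
    mre = np.mean(np.abs(resid) / obs) * 100.0            # mean relative error, %
    p50 = np.mean(np.abs(resid) / obs <= 0.50) * 100.0    # % of data within 50%
    p25 = np.mean(np.abs(resid) / obs <= 0.25) * 100.0    # % of data within 25%
    poe = np.mean(pred > obs) * 100.0                     # % of data overestimated
    # Gaussian-error AIC/BIC, up to an additive constant
    aic = n * np.log(np.sum(resid ** 2) / n) + 2 * n_params
    bic = n * np.log(np.sum(resid ** 2) / n) + n_params * np.log(n)
    return dict(RMSE=rmse, NA=nash, MRE=mre, P50=p50, P25=p25, POE=poe,
                AIC=aic, BIC=bic)

# Example with synthetic observed/predicted flow-resistance values
rng = np.random.default_rng(0)
obs = rng.uniform(0.01, 0.05, 200)
pred = obs * (1 + 0.15 * rng.normal(size=200))
print(fit_statistics(obs, pred, n_params=4))
```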

  10. An experimental and theoretical study to relate uncommon rock/fluid properties to oil recovery. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, R.

    Waterflooding is the most commonly used secondary oil recovery technique. One of the requirements for understanding waterflood performance is a good knowledge of the basic properties of the reservoir rocks. This study is aimed at correlating rock-pore characteristics to oil recovery from various reservoir rock types and incorporating these properties into empirical models for predicting oil recovery. For that reason, this report deals with the analysis and interpretation of experimental data collected from core floods and correlated against measurements of absolute permeability, porosity, wettability index, mercury porosimetry properties, and irreducible water saturation. The results of the radial-core and linear-core flow investigations and the other associated experimental analyses are presented and incorporated into empirical models to improve the predictions of oil recovery resulting from waterflooding for sandstone and limestone reservoirs. For the radial-core case, the standardized regression model selected, based on a subset of the variables, predicted oil recovery by waterflooding with a standard deviation of 7%. For the linear-core case, separate models are developed using common, uncommon, and a combination of both types of rock properties. It was observed that residual oil saturation and oil recovery are better predicted with the inclusion of both common and uncommon rock/fluid properties in the predictive models.

  11. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    NASA Astrophysics Data System (ADS)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.

  12. Continuing Development of a Hybrid Model (VSH) of the Neutral Thermosphere

    NASA Technical Reports Server (NTRS)

    Burns, Alan

    1996-01-01

    We propose to continue the development of a new operational model of neutral thermospheric density, composition, temperatures and winds to improve current engineering environment definitions of the neutral thermosphere. This model will be based on simulations made with the National Center for Atmospheric Research (NCAR) Thermosphere-Ionosphere-Electrodynamic General Circulation Model (TIEGCM) and on empirical data. It will be capable of using real-time geophysical indices or data from ground-based and satellite inputs and will provide neutral variables at specified locations and times. This "hybrid" model will be based on a Vector Spherical Harmonic (VSH) analysis technique developed (over the last 8 years) at the University of Michigan that permits the incorporation of the TIGCM outputs and data into the model. The VSH model will be a more accurate version of existing models of the neutral thermosphere, and will thus improve density specification for satellites flying in low Earth orbit (LEO).

  13. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.

  14. Measuring the coupled risks: A copula-based CVaR model

    NASA Astrophysics Data System (ADS)

    He, Xubiao; Gong, Pu

    2009-01-01

    Integrated risk management for financial institutions requires an approach for aggregating risk types (such as market and credit) whose distributional shapes vary considerably. Financial institutions often ignore the coupling influence of risks and thus underestimate financial risk. We constructed a copula-based Conditional Value-at-Risk (CVaR) model for market and credit risks. This technique allows us to incorporate realistic marginal distributions that capture essential empirical features of these risks, such as skewness and fat tails, while allowing for a rich dependence structure. Finally, a numerical simulation method is used to implement the model. Our results indicate that the coupled risks for a listed company's stock may be undervalued if credit risk is ignored, especially for a listed company with bad credit quality.
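
    A minimal sketch of the simulation step, assuming a Gaussian copula coupling a fat-tailed market-loss marginal with a heavy-tailed credit-loss marginal (the paper's choice of copula family and marginals is not reproduced here); comparing against independent draws shows how ignoring the coupling understates tail risk.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sim, alpha = 100_000, 0.99

# Marginals chosen to mimic skewness and fat tails (assumption); arbitrary loss units
market_marginal = stats.t(df=4, loc=0.0, scale=1.0)
credit_marginal = stats.lognorm(s=0.8, scale=0.5)

# Gaussian copula with correlation rho couples the two risk types
rho = 0.5
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_sim)
u = stats.norm.cdf(z)                              # coupled uniform marginals
total_loss = market_marginal.ppf(u[:, 0]) + credit_marginal.ppf(u[:, 1])

var = np.quantile(total_loss, alpha)
cvar = total_loss[total_loss >= var].mean()

# Ignoring the coupling (independent draws) typically understates the tail risk
indep = (market_marginal.rvs(n_sim, random_state=rng)
         + credit_marginal.rvs(n_sim, random_state=rng))
var_i = np.quantile(indep, alpha)
cvar_i = indep[indep >= var_i].mean()

print(f"coupled:     VaR={var:.2f}  CVaR={cvar:.2f}")
print(f"independent: VaR={var_i:.2f}  CVaR={cvar_i:.2f}")
```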

  15. An empirical model of ionospheric total electron content (TEC) near the crest of the equatorial ionization anomaly (EIA)

    NASA Astrophysics Data System (ADS)

    Hajra, Rajkumar; Chakraborty, Shyamal Kumar; Tsurutani, Bruce T.; DasGupta, Ashish; Echer, Ezequiel; Brum, Christiano G. M.; Gonzalez, Walter D.; Sobral, José Humberto Andrade

    2016-07-01

    We present a geomagnetic quiet time (Dst > -50 nT) empirical model of ionospheric total electron content (TEC) for the northern equatorial ionization anomaly (EIA) crest over Calcutta, India. The model is based on the 1980-1990 TEC measurements from the geostationary Engineering Test Satellite-2 (ETS-2) at the Haringhata (University of Calcutta, India: 22.58° N, 88.38° E geographic; 12.09° N, 160.46° E geomagnetic) ionospheric field station using the technique of Faraday rotation of plane polarized VHF (136.11 MHz) signals. The ground station is situated virtually underneath the northern EIA crest. The monthly mean TEC increases linearly with F10.7 solar ionizing flux, with a significantly high correlation coefficient (r = 0.89-0.99) between the two. For the same solar flux level, the TEC values are found to be significantly different between the descending and ascending phases of the solar cycle. This ionospheric hysteresis effect depends on the local time as well as on the solar flux level. On an annual scale, TEC exhibits semiannual variations with maximum TEC values occurring during the two equinoxes and minimum at summer solstice. The semiannual variation is strongest during local noon with a summer-to-equinox variability of ~50-100 TEC units. The diurnal pattern of TEC is characterized by a pre-sunrise (0400-0500 LT) minimum and near-noon (1300-1400 LT) maximum. Equatorial electrodynamics is dominated by the equatorial electrojet which in turn controls the daytime TEC variation and its maximum. We combine these long-term analyses to develop an empirical model of monthly mean TEC. The model is validated using both ETS-2 measurements and recent GNSS measurements. It is found that the present model efficiently estimates the TEC values within a 1-σ range from the observed mean values.

  16. Enhancing understanding and improving prediction of severe weather through spatiotemporal relational learning.

    PubMed

    McGovern, Amy; Gagne, David J; Williams, John K; Brown, Rodger A; Basara, Jeffrey B

    Severe weather, including tornadoes, thunderstorms, wind, and hail annually cause significant loss of life and property. We are developing spatiotemporal machine learning techniques that will enable meteorologists to improve the prediction of these events by improving their understanding of the fundamental causes of the phenomena and by building skillful empirical predictive models. In this paper, we present significant enhancements of our Spatiotemporal Relational Probability Trees that enable autonomous discovery of spatiotemporal relationships as well as learning with arbitrary shapes. We focus our evaluation on two real-world case studies using our technique: predicting tornadoes in Oklahoma and predicting aircraft turbulence in the United States. We also discuss how to evaluate success for a machine learning algorithm in the severe weather domain, which will enable new methods such as ours to transfer from research to operations, provide a set of lessons learned for embedded machine learning applications, and discuss how to field our technique.

  17. Instruction-level performance modeling and characterization of multimedia applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effects, and they quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results show promise in code characterization and empirical/analytical modeling.

  18. Progress Implementing a Model-Based Iterative Reconstruction Algorithm for Ultrasound Imaging of Thick Concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  19. Progress implementing a model-based iterative reconstruction algorithm for ultrasound imaging of thick concrete

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2017-02-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
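
    A minimal sketch of the MBIR recipe described above, assuming a generic linear forward model with Gaussian noise and a quadratic smoothness prior, minimized by plain gradient descent; the actual ultrasound forward model and optimizer are much more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = A x + noise standing in for the ultrasound
# measurement model (the real forward model is far more involved)
n_pix, n_meas = 64, 48
A = rng.normal(size=(n_meas, n_pix)) / np.sqrt(n_pix)
x_true = np.zeros(n_pix)
x_true[20:28] = 1.0                      # a simple "defect" profile
y = A @ x_true + 0.01 * rng.normal(size=n_meas)

# Quadratic smoothness prior expressed through a first-difference operator
D = np.eye(n_pix, k=1)[:-1] - np.eye(n_pix)[:-1]
beta = 0.1                               # prior strength (regularization weight)

def cost(x):
    """MBIR objective: data-fit term (forward model) plus prior term."""
    return 0.5 * np.sum((y - A @ x) ** 2) + 0.5 * beta * np.sum((D @ x) ** 2)

def grad(x):
    return A.T @ (A @ x - y) + beta * D.T @ (D @ x)

# Plain gradient descent on the MAP cost function
x = np.zeros(n_pix)
step = 0.2
for _ in range(1000):
    x -= step * grad(x)

print("final cost:", round(cost(x), 4))
print("relative reconstruction error:",
      round(np.linalg.norm(x - x_true) / np.linalg.norm(x_true), 3))
```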

  20. New Elements To Consider When Modeling the Hazards Associated with Botulinum Neurotoxin in Food.

    PubMed

    Ihekwaba, Adaoha E C; Mura, Ivan; Malakar, Pradeep K; Walshaw, John; Peck, Michael W; Barker, G C

    2016-01-15

    Botulinum neurotoxins (BoNTs) produced by the anaerobic bacterium Clostridium botulinum are the most potent biological substances known to mankind. BoNTs are the agents responsible for botulism, a rare condition affecting the neuromuscular junction and causing a spectrum of diseases ranging from mild cranial nerve palsies to acute respiratory failure and death. BoNTs are a potential biowarfare threat and a public health hazard, since outbreaks of foodborne botulism are caused by the ingestion of preformed BoNTs in food. Currently, mathematical models relating to the hazards associated with C. botulinum, which are largely empirical, make major contributions to botulinum risk assessment. Evaluated using statistical techniques, these models simulate the response of the bacterium to environmental conditions. Though empirical models have been successfully incorporated into risk assessments to support food safety decision making, this process includes significant uncertainties so that relevant decision making is frequently conservative and inflexible. Progression involves encoding into the models cellular processes at a molecular level, especially the details of the genetic and molecular machinery. This addition drives the connection between biological mechanisms and botulism risk assessment and hazard management strategies. This review brings together elements currently described in the literature that will be useful in building quantitative models of C. botulinum neurotoxin production. Subsequently, it outlines how the established form of modeling could be extended to include these new elements. Ultimately, this can offer further contributions to risk assessments to support food safety decision making. Copyright © 2015 Ihekwaba et al.

  1. Daily air quality index forecasting with hybrid models: A case in China.

    PubMed

    Zhu, Suling; Lian, Xiuyuan; Liu, Haixia; Hu, Jianming; Wang, Yuanyuan; Che, Jinxing

    2017-12-01

    Air quality is closely related to quality of life. Air pollution forecasting plays a vital role in air pollution warnings and controlling. However, it is difficult to attain accurate forecasts for air pollution indexes because the original data are non-stationary and chaotic. The existing forecasting methods, such as multiple linear models, autoregressive integrated moving average (ARIMA) and support vector regression (SVR), cannot fully capture the information from series of pollution indexes. Therefore, new effective techniques need to be proposed to forecast air pollution indexes. The main purpose of this research is to develop effective forecasting models for regional air quality indexes (AQI) to address the problems above and enhance forecasting accuracy. Therefore, two hybrid models (EMD-SVR-Hybrid and EMD-IMFs-Hybrid) are proposed to forecast AQI data. The main steps of the EMD-SVR-Hybrid model are as follows: the data preprocessing technique EMD (empirical mode decomposition) is utilized to sift the original AQI data to obtain one group of smoother IMFs (intrinsic mode functions) and a noise series, where the IMFs contain the important information (level, fluctuations and others) from the original AQI series. LS-SVR is applied to forecast the sum of the IMFs, and then, S-ARIMA (seasonal ARIMA) is employed to forecast the residual sequence of LS-SVR. In addition, EMD-IMFs-Hybrid first separately forecasts the IMFs via statistical models and sums the forecasting results of the IMFs as EMD-IMFs. Then, S-ARIMA is employed to forecast the residuals of EMD-IMFs. To certify the proposed hybrid model, AQI data from June 2014 to August 2015 collected from Xingtai in China are utilized as a test case to investigate the empirical research. In terms of some of the forecasting assessment measures, the AQI forecasting results of Xingtai show that the two proposed hybrid models are superior to ARIMA, SVR, GRNN, EMD-GRNN, Wavelet-GRNN and Wavelet-SVR. Therefore, the proposed hybrid models can be used as effective and simple tools for air pollution forecasting and warning as well as for management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
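
    A stripped-down sketch of the model-selection step, ignoring the age-dating uncertainty and open time intervals that the study handles explicitly: fit an exponential (Poisson-process) and a lognormal (quasi-periodic) inter-event model by maximum likelihood and compare them with AIC. The deposit ages below are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical deposit ages in ka (oldest to youngest)
ages = np.array([58.0, 44.5, 37.0, 29.8, 21.1, 16.4, 9.9, 3.2])
dt = -np.diff(ages)                       # inter-event times, ka

def aic(loglike, k):
    return 2 * k - 2 * loglike

# Poisson process: exponential inter-event times, MLE mean = sample mean
mu = dt.mean()
ll_exp = np.sum(stats.expon.logpdf(dt, scale=mu))

# Quasi-periodic alternative: lognormal inter-event times
shape, loc, scale = stats.lognorm.fit(dt, floc=0.0)
ll_ln = np.sum(stats.lognorm.logpdf(dt, shape, loc, scale))

print(f"exponential: mean return time {mu:.1f} ka, AIC {aic(ll_exp, 1):.1f}")
print(f"lognormal:   AIC {aic(ll_ln, 2):.1f}")
```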

  3. Resistance Management Techniques of Milton H. Erickson, M.D.: An Application to Nonhypnotic Mental Health Counseling.

    ERIC Educational Resources Information Center

    Otani, Akira

    1989-01-01

    Delineates five selected hypnotically based techniques of client resistance management pioneered by Milton H. Erickson: acceptance; paradoxical encouragement; reframing; displacement; dissociation. Explains how techniques can be applied to nonhypnotic mental health counseling. Discusses relevant clinical, theoretical, and empirical issues related…

  4. Tomography Reconstruction of Ionospheric Electron Density with Empirical Orthonormal Functions Using Korea GNSS Network

    NASA Astrophysics Data System (ADS)

    Hong, Junseok; Kim, Yong Ha; Chung, Jong-Kyun; Ssessanga, Nicholas; Kwak, Young-Sil

    2017-03-01

    In South Korea, there are about 80 Global Positioning System (GPS) monitoring stations providing total electron content (TEC) every 10 min, which can be accessed through Korea Astronomy and Space Science Institute (KASI) for scientific use. We applied the computerized ionospheric tomography (CIT) algorithm to the TEC dataset from this GPS network for monitoring the regional ionosphere over South Korea. The algorithm utilizes multiplicative algebraic reconstruction technique (MART) with an initial condition of the latest International Reference Ionosphere-2016 model (IRI-2016). In order to reduce the number of unknown variables, the vertical profiles of electron density are expressed with a linear combination of empirical orthonormal functions (EOFs) that were derived from the IRI empirical profiles. Although the number of receiver sites is much smaller than that of Japan, the CIT algorithm yielded reasonable structure of the ionosphere over South Korea. We verified the CIT results with NmF2 from ionosondes in Icheon and Jeju and also with GPS TEC at the center of South Korea. In addition, the total time required for CIT calculation was only about 5 min, enabling the exploration of the vertical ionospheric structure in near real time.
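
    The MART update multiplies each unknown by the ratio of measured to modeled slant TEC raised to a relaxation-weighted power, which keeps the solution positive and anchored to the initial (background) state. The sketch below applies it to a generic small linear system; the EOF vertical basis, the real ray geometry, and the IRI-2016 initialization are omitted, and the normalization used here is an assumption.

```python
import numpy as np

def mart(A, y, x0, n_sweeps=200, relax=0.5):
    """Multiplicative ART: cycle through the measurements, updating the unknowns
    multiplicatively so positivity and closeness to the initial state are kept."""
    x = x0.astype(float).copy()
    row_max = A.max(axis=1)                    # per-ray normalization (assumption)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            modeled = A[i] @ x
            if modeled <= 0.0 or y[i] <= 0.0:
                continue
            x *= (y[i] / modeled) ** (relax * A[i] / row_max[i])
    return x

# Toy problem: 6 "rays" through 4 unknowns (think EOF coefficients or voxel
# densities); the initial guess plays the role of the IRI-2016 background state.
rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, size=(6, 4))         # path-length / geometry matrix
x_true = np.array([2.0, 1.0, 3.0, 0.5])
y = A @ x_true                                 # noise-free slant "TEC" data
x0 = np.ones(4)

print("reconstructed:", np.round(mart(A, y, x0), 2), " true:", x_true)
```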

  5. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization

    PubMed Central

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data from January 2005 to December 2013 are first taken as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194

  6. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization.

    PubMed

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data from January 2005 to December 2013 are first taken as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods.
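    A minimal sketch of the EMD-plus-network pipeline is given below (Python). It assumes the PyEMD package for the decomposition and substitutes a gradient-trained scikit-learn MLP for the PSO-optimized back-propagation network of the paper; the synthetic monthly series, lag order, and network size are illustrative assumptions.

      import numpy as np
      from PyEMD import EMD                          # assumes the PyEMD ("EMD-signal") package
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      months = np.arange(108)                        # nine years of synthetic monthly visits
      visits = 5000 + 500 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 100, months.size)

      imfs = EMD().emd(visits)                       # step 2: decompose into IMFs (last row ~ residue)

      def lagged(series, p=6):
          # Build p lagged inputs and the next value as the target.
          X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
          return X, series[p:]

      # Steps 3-4: forecast each IMF separately, then superpose the forecasts.
      # A gradient-trained MLP stands in for the paper's PSO-optimized BP network.
      forecast = 0.0
      for imf in imfs:
          X, y = lagged(imf)
          net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
          forecast += net.predict(imf[-6:].reshape(1, -1))[0]

      print(f"one-step-ahead forecast of outpatient visits: {forecast:.0f}")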

  7. Tracking Expected Improvements of Decadal Prediction in Climate Services

    NASA Astrophysics Data System (ADS)

    Suckling, E.; Thompson, E.; Smith, L. A.

    2013-12-01

    Physics-based simulation models are ultimately expected to provide the best available (decision-relevant) probabilistic climate predictions, as they can capture the dynamics of the Earth System across a range of situations, situations for which observations for the construction of empirical models are scant if not nonexistent. This fact in itself provides neither evidence that predictions from today's Earth System Models will outperform today's empirical models, nor a guide to the space and time scales on which today's model predictions are adequate for a given purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales. The skill of these forecasts is contrasted with that of state-of-the-art climate models, and the challenges faced by each approach are discussed. The focus is on providing decision-relevant probability forecasts for decision support. An empirical model, known as Dynamic Climatology, is shown to be competitive with CMIP5 climate models on decadal-scale probability forecasts. Contrasting the skill of simulation models not only with each other but also with empirical models can reveal the space and time scales on which a generation of simulation models exploits its physical basis effectively. It can also quantify their ability to add information in the formation of operational forecasts. Difficulties (i) of information contamination, (ii) of the interpretation of probabilistic skill, and (iii) of artificial skill complicate each modelling approach, and are discussed. "Physics free" empirical models provide fixed, quantitative benchmarks for the evaluation of ever more complex climate models that are not available from (inter)comparisons restricted to only complex models. At present, empirical models can also provide a background term for blending in the formation of probability forecasts from ensembles of simulation models. In weather forecasting this role is filled by the climatological distribution, which can significantly enhance the value of longer lead-time weather forecasts to those who use them. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast intercomparison and evaluation. This would clarify the extent to which a given generation of state-of-the-art simulation models provides information beyond that available from simpler empirical models. It would also clarify current limitations in using simulation forecasting for decision support. No model-based probability forecast is complete without a quantitative estimate of its own irrelevance; this estimate is likely to increase as a function of lead time. A lack of decision-relevant quantitative skill would not bring the science-based foundation of anthropogenic warming into doubt. Similar levels of skill with empirical models do, however, suggest a clear quantification of limits, as a function of lead time, on the spatial and temporal scales at which decisions based on such model output are expected to prove maladaptive. Failing to clearly state such weaknesses of a given generation of simulation models, while clearly stating their strengths and their foundation, risks the credibility of science in support of policy in the long term.

  8. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
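    Method 1, the classical Monte Carlo p-value evaluation, can be illustrated generically. The sketch below (Python rather than Stata) permutes group labels and counts how often a resampled statistic reaches the observed one; the two samples and the placeholder statistic are illustrative assumptions and do not reproduce the density-based empirical likelihood ratio computed by vxdbel.

      import numpy as np

      rng = np.random.default_rng(2)
      x = rng.normal(0.0, 1.0, 40)                   # two hypothetical samples (K = 2)
      y = rng.normal(0.4, 1.0, 55)

      def statistic(a, b):
          # Placeholder statistic; the package uses a density-based empirical
          # likelihood ratio instead of a mean difference.
          return abs(a.mean() - b.mean())

      observed = statistic(x, y)
      pooled = np.concatenate([x, y])
      n_mc, exceed = 10000, 0
      for _ in range(n_mc):                          # Monte Carlo p-value: permute group labels
          rng.shuffle(pooled)
          exceed += statistic(pooled[:len(x)], pooled[len(x):]) >= observed
      p_value = (exceed + 1) / (n_mc + 1)            # add-one correction keeps p strictly positive
      print(f"Monte Carlo p-value: {p_value:.4f}")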

  9. Support vector regression to predict porosity and permeability: Effect of sample size

    NASA Astrophysics Data System (ADS)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ε-insensitive loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both the kernel function type and the loss function used.
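    The SVR-versus-MLP comparison can be sketched with off-the-shelf tools. The example below (Python, scikit-learn) trains both regressors on a deliberately small synthetic "log" data set; the feature construction, hyperparameters, and train/test split are illustrative assumptions, not the study's reservoir data or tuning.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(3)
      n = 40                                          # deliberately small sample
      X = rng.normal(size=(n, 3))                     # synthetic "log" features
      porosity = 0.20 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(0, 0.01, n)
      X_train, X_test = X[:25], X[25:]
      y_train, y_test = porosity[:25], porosity[25:]

      # epsilon sets the width of the insensitive tube in the SVR loss (SRM-style capacity
      # control); the MLP is trained under the usual empirical risk minimization.
      svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
      mlp = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))

      for name, model in [("SVR", svr), ("MLP", mlp)]:
          model.fit(X_train, y_train)
          print(f"{name}: test MSE = {mean_squared_error(y_test, model.predict(X_test)):.5f}")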

  10. Comparison of the Various Methodologies Used in Studying Runoff and Sediment Load in the Yellow River Basin

    NASA Astrophysics Data System (ADS)

    Xu, M., III; Liu, X.

    2017-12-01

    In the past 60 years, both the runoff and the sediment load in the Yellow River Basin have shown significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g., precipitation, sediment-trapping dams, pasture, terracing) on the runoff and sediment load is among the key issues in guiding the implementation of water and soil conservation measures and in predicting future variation trends. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically based models. The empirical methods, including the hydrological method and the soil and water conservation method, are widely used in Yellow River management engineering. These methods generally apply statistical analyses, such as regression analysis, to build empirical relationships between the main characteristic variables in a river basin. The elasticity method extensively used in hydrological research can also be classified as an empirical method, as it is mathematically equivalent to the hydrological method. Physically based models mainly include conceptual models and distributed models. Conceptual models are usually lumped models (e.g., the SYMHD model) and can be regarded as a transition between empirical and distributed models. The published literature shows that fewer studies have applied distributed models than empirical models, because the runoff and sediment-load simulations from distributed models (e.g., the Digital Yellow Integrated Model and the Geomorphology-Based Hydrological Model) have usually been unsatisfactory owing to the intensive human activities in the Yellow River Basin. Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical methods. In addition, we put forward an assessment framework for methods used to study runoff and sediment-load variations in the Yellow River Basin, considering input data, model structure, and output. The assessment framework was then applied to the Huangfuchuan River.

  11. An empirical comparison of a dynamic software testability metric to static cyclomatic complexity

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffrey E.

    1993-01-01

    This paper compares the dynamic testability prediction technique termed 'sensitivity analysis' to the static testability technique termed cyclomatic complexity. The application that we chose in this empirical study is a CASE generated version of a B-737 autoland system. For the B-737 system we analyzed, we isolated those functions that we predict are more prone to hide errors during system/reliability testing. We also analyzed the code with several other well-known static metrics. This paper compares and contrasts the results of sensitivity analysis to the results of the static metrics.

  12. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    PubMed

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.

  13. FITTING NONLINEAR ORDINARY DIFFERENTIAL EQUATION MODELS WITH RANDOM EFFECTS AND UNKNOWN INITIAL CONDITIONS USING THE STOCHASTIC APPROXIMATION EXPECTATION–MAXIMIZATION (SAEM) ALGORITHM

    PubMed Central

    Chow, Sy- Miin; Lu, Zhaohua; Zhu, Hongtu; Sherwood, Andrew

    2014-01-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation–maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed. PMID:25416456
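    For reference, the benchmark system used in both records above, the Van der Pol oscillator, can be simulated in a few lines; the sketch below (Python, SciPy) integrates the deterministic oscillator only and does not include the random effects or the SAEM estimation step.

      import numpy as np
      from scipy.integrate import solve_ivp

      def van_der_pol(t, state, mu=1.0):
          # x'' - mu*(1 - x**2)*x' + x = 0, written as a first-order system.
          x, v = state
          return [v, mu * (1.0 - x**2) * v - x]

      sol = solve_ivp(van_der_pol, t_span=(0.0, 20.0), y0=[2.0, 0.0],
                      t_eval=np.linspace(0.0, 20.0, 400))
      print("state at t = 20:", sol.y[:, -1])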

  14. Accuracy test for link prediction in terms of similarity index: The case of WS and BA models

    NASA Astrophysics Data System (ADS)

    Ahn, Min-Woo; Jung, Woo-Sung

    2015-07-01

    Link prediction is a technique that uses the topological information in a given network to infer the missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: The Watts-Strogatz (WS) model and Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and the rewiring probability in the WS model) through network models. Six similarity indices are used, with precision and area under the ROC curve (AUC) value as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
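    The basic experiment, hiding a fraction of links in a model network, scoring candidate pairs with a similarity index, and measuring AUC, can be sketched as follows (Python, NetworkX); the WS parameters, probe fraction, and choice of the common-neighbors index are illustrative assumptions covering only one of the six indices used.

      import random
      from itertools import combinations
      import networkx as nx
      from sklearn.metrics import roc_auc_score

      random.seed(4)
      G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=4)     # WS model

      # Hide 10% of the edges as the "missing" links to be predicted.
      edges = list(G.edges())
      random.shuffle(edges)
      probe = {frozenset(e) for e in edges[:len(edges) // 10]}
      G_train = nx.Graph(edges[len(edges) // 10:])
      G_train.add_nodes_from(G.nodes())

      # Common-neighbors similarity, one of the simplest indices; score every
      # node pair that is not an observed (training) edge.
      labels, scores = [], []
      for u, v in combinations(G_train.nodes(), 2):
          if G_train.has_edge(u, v):
              continue
          labels.append(1 if frozenset((u, v)) in probe else 0)
          scores.append(len(list(nx.common_neighbors(G_train, u, v))))

      print("AUC of common-neighbors prediction:", round(roc_auc_score(labels, scores), 3))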

  15. A discrete control model of PLANT

    NASA Technical Reports Server (NTRS)

    Mitchell, C. M.

    1985-01-01

    A model of the PLANT system using the discrete control modeling techniques developed by Miller is described. Discrete control models attempt to represent in a mathematical form how a human operator might decompose a complex system into simpler parts and how the control actions and system configuration are coordinated so that acceptable overall system performance is achieved. Basic questions include knowledge representation, information flow, and decision making in complex systems. The structure of the model is a general hierarchical/heterarchical scheme which structurally accounts for coordination and dynamic focus of attention. Mathematically, the discrete control model is defined in terms of a network of finite state systems. Specifically, the discrete control model accounts for how specific control actions are selected from information about the controlled system, the environment, and the context of the situation. The objective is to provide a plausible and empirically testable accounting and, if possible, explanation of control behavior.

  16. Modeling transport kinetics in clinoptilolite-phosphate rock systems

    NASA Technical Reports Server (NTRS)

    Allen, E. R.; Ming, D. W.; Hossner, L. R.; Henninger, D. L.

    1995-01-01

    Nutrient release in clinoptilolite-phosphate rock (Cp-PR) systems occurs through dissolution and cation-exchange reactions. Investigating the kinetics of these reactions expands our understanding of nutrient release processes. Research was conducted to model transport kinetics of nutrient release in Cp-PR systems. The objectives were to identify empirical models that best describe NH4, K, and P release and define diffusion-controlling processes. Materials included a Texas clinoptilolite (Cp) and North Carolina phosphate rock (PR). A continuous-flow thin-disk technique was used. Models evaluated included zero order, first order, second order, parabolic diffusion, simplified Elovich, Elovich, and power function. The power-function, Elovich, and parabolic-diffusion models adequately described NH4, K, and P release. The power-function model was preferred because of its simplicity. Models indicated nutrient release was diffusion controlled. Primary transport processes controlling nutrient release for the time span observed were probably the result of a combination of several interacting transport mechanisms.
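    The preferred power-function model is simple to fit. The sketch below (Python, SciPy) fits q = a·t^b to hypothetical cumulative-release data; the release values and initial guesses are illustrative assumptions, not the thin-disk measurements of the study.

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical cumulative NH4 release from a thin-disk flow experiment.
      t = np.array([0.5, 1, 2, 4, 8, 16, 24])             # hours (illustrative values)
      q = np.array([0.8, 1.1, 1.6, 2.2, 3.1, 4.4, 5.2])   # mmol/kg (illustrative values)

      def power_function(t, a, b):
          # Power-function kinetic model q = a * t**b; b < 1 is consistent
          # with diffusion-controlled release.
          return a * t**b

      (a, b), _ = curve_fit(power_function, t, q, p0=(1.0, 0.5))
      print(f"fitted model: q = {a:.2f} * t^{b:.2f}")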

  17. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
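    A rough sketch of the two-stage LASSO-plus-classifier workflow is shown below (Python, scikit-learn). It uses an L1-penalized logistic regression as the variable-selection step feeding an SVM, evaluated by fivefold cross-validation; the synthetic sample, penalty strength, and kernel settings are illustrative assumptions, not the TEJ data or the study's exact LASSO implementation.

      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectFromModel
      from sklearn.linear_model import LogisticRegression
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for the 172 firms (48 GCD + 124 NGCD) and their financial ratios.
      X, y = make_classification(n_samples=172, n_features=30, n_informative=8,
                                 weights=[0.72, 0.28], random_state=0)

      # Stage 1: an L1-penalized (LASSO-type) model selects variables;
      # stage 2: an SVM classifies using the selected variables.
      selector = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
      model = make_pipeline(StandardScaler(), selector, SVC(kernel="rbf"))

      scores = cross_val_score(model, X, y, cv=5)     # fivefold cross validation
      print(f"LASSO-SVM accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")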

  18. Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Volden, Thomas R.

    2012-01-01

    An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.

  19. A study of methods to predict and measure the transmission of sound through the walls of light aircraft

    NASA Technical Reports Server (NTRS)

    Bernhard, R. J.; Bolton, J. S.; Gardner, B.; Mickol, J.; Mollo, C.; Bruer, C.

    1986-01-01

    Progress was made in the following areas: development of a numerical/empirical noise source identification procedure using boundary element techniques; identification of structure-borne noise paths using structural intensity and finite element methods; development of a design optimization numerical procedure to be used to study active noise control in three-dimensional geometries; measurement of dynamic properties of acoustical foams and incorporation of these properties in models governing three-dimensional wave propagation in foams; and structure-borne sound path identification by use of the Wigner distribution.

  20. Spatial analysis on future housing markets: economic development and housing implications.

    PubMed

    Liu, Xin; Wang, Lizhe

    2014-01-01

    A coupled projection method combining formal modelling and other statistical techniques was developed to delineate the relationship between economic and social drivers for net new housing allocations. Using the example of employment growth in Tyne and Wear, UK, until 2016, the empirical analysis yields housing projections at the macro- and microspatial levels (e.g., region to subregion to elected ward levels). The results have important implications for the strategic planning of locations for housing and employment, demonstrating both intuitively and quantitatively how local economic developments affect housing demand.

  1. Spatial Analysis on Future Housing Markets: Economic Development and Housing Implications

    PubMed Central

    Liu, Xin; Wang, Lizhe

    2014-01-01

    A coupled projection method combining formal modelling and other statistical techniques was developed to delineate the relationship between economic and social drivers for net new housing allocations. Using the example of employment growth in Tyne and Wear, UK, until 2016, the empirical analysis yields housing projections at the macro- and microspatial levels (e.g., region to subregion to elected ward levels). The results have important implications for the strategic planning of locations for housing and employment, demonstrating both intuitively and quantitatively how local economic developments affect housing demand. PMID:24892097

  2. Media resistance skills and drug skill refusal techniques: What is their relationship with alcohol use among inner-city adolescents?

    PubMed

    Epstein, Jennifer A; Botvin, Gilbert J

    2008-04-01

    Past research related to alcohol advertising examined whether underage adolescents were targets of the alcohol industry and what impact such advertising had on adolescent drinking. The purpose of this study was to longitudinally examine the impact of media resistance skills on subsequent drinking among adolescents residing in inner-city regions of New York City. The study also tested whether drug skill refusal techniques (knowing how to say no to alcohol and other drugs) mediated the relationship between media resistance skills and adolescent drinking. A panel sample of baseline, one-year and two-year follow-ups (N=1318) from the control group of a longitudinal drug abuse prevention trial participated. A series of structural equations models showed that media resistance skills directly negatively predicted alcohol use 2 years later and that drug skill refusal techniques mediated this effect. Baseline media resistance skills were associated with one-year drug skill refusal techniques, which in turn negatively predicted two-year alcohol use. These findings provided empirical support for including media resistance skills and drug skill refusal techniques in alcohol prevention programs.

  3. Media Resistance Skills and Drug Skill Refusal Techniques: What is Their Relationship with Alcohol Use Among Inner-City Adolescents?

    PubMed Central

    Epstein, Jennifer A.; Botvin, Gilbert J.

    2008-01-01

    Past research related to alcohol advertising examined whether underage adolescents were targets of the alcohol industry and what impact such advertising had on adolescent drinking. The purpose of this study was to longitudinally examine the impact of media resistance skills on subsequent drinking among adolescents residing in inner-city regions of New York City. The study also tested whether drug skill refusal techniques (knowing how to say no to alcohol and other drugs) mediated the relationship between media resistance skills and adolescent drinking. A panel sample of baseline, 1-year and 2-year follow-ups (N = 1318) from the control group of a longitudinal drug abuse prevention trial participated. A series of structural equations models showed that media resistance skills directly negatively predicted alcohol use two years later and that drug skill refusal techniques mediated this effect. Baseline media resistance skills were associated with 1-year drug skill refusal techniques, which in turn negatively predicted 2-year alcohol use. These findings provided empirical support for including media resistance skills and drug skill refusal techniques in alcohol prevention programs. PMID:18164827

  4. Improved Orbit Determination and Forecasts with an Assimilative Tool for Atmospheric Density and Satellite Drag Specification

    NASA Astrophysics Data System (ADS)

    Crowley, G.; Pilinski, M.; Sutton, E. K.; Codrescu, M.; Fuller-Rowell, T. J.; Matsuo, T.; Fedrizzi, M.; Solomon, S. C.; Qian, L.; Thayer, J. P.

    2016-12-01

    Much as aircraft are affected by the prevailing winds and weather conditions in which they fly, satellites are affected by the variability in density and motion of the near earth space environment. Drastic changes in the neutral density of the thermosphere, caused by geomagnetic storms or other phenomena, result in perturbations of LEO satellite motions through drag on the satellite surfaces. This can lead to difficulties in locating important satellites, temporarily losing track of satellites, and errors when predicting collisions in space. We describe ongoing work to build a comprehensive nowcast and forecast system for specifying the neutral atmospheric state related to orbital drag conditions. The system outputs include neutral density, winds, temperature, composition, and the satellite drag derived from these parameters. This modeling tool is based on several state-of-the-art coupled models of the thermosphere-ionosphere as well as several empirical models running in real-time and uses assimilative techniques to produce a thermospheric nowcast. This software will also produce 72 hour predictions of the global thermosphere-ionosphere system using the nowcast as the initial condition and using near real-time and predicted space weather data and indices as the inputs. Features of this technique include:
    • Satellite drag specifications with errors lower than current models
    • Altitude coverage up to 1000 km
    • Background state representation using both first principles and empirical models
    • Assimilation of satellite drag and other datatypes
    • Real time capability
    • Ability to produce 72-hour forecasts of the atmospheric state
    In this paper, we will summarize the model design and assimilative architecture, and present preliminary validation results. Validation results will be presented in the context of satellite orbit errors and compared with several leading atmospheric models including the High Accuracy Satellite Drag Model, which is currently used operationally by the Air Force to specify neutral densities. As part of the analysis, we compare the drag observed by a variety of satellites which were not used as part of the assimilation dataset and whose perigee altitudes span a range from 200 km to 700 km.

  5. Decision-making in healthcare: a practical application of partial least square path modelling to coverage of newborn screening programmes.

    PubMed

    Fischer, Katharina E

    2012-08-02

    Decision-making in healthcare is complex. Research on coverage decision-making has focused on comparative studies for several countries, statistical analyses for single decision-makers, the decision outcome and appraisal criteria. Accounting for decision processes extends the complexity, as they are multidimensional and process elements need to be regarded as latent constructs (composites) that are not observed directly. The objective of this study was to present a practical application of partial least square path modelling (PLS-PM) to evaluate how it offers a method for empirical analysis of decision-making in healthcare. Empirical approaches that applied PLS-PM to decision-making in healthcare were identified through a systematic literature search. PLS-PM was used as an estimation technique for a structural equation model that specified hypotheses between the components of decision processes and the reasonableness of decision-making in terms of medical, economic and other ethical criteria. The model was estimated for a sample of 55 coverage decisions on the extension of newborn screening programmes in Europe. Results were evaluated by standard reliability and validity measures for PLS-PM. After modification by dropping two indicators that showed poor measures in the measurement models' quality assessment and were not meaningful for newborn screening, the structural equation model estimation produced plausible results. The presence of three influences was supported: the links between both stakeholder participation or transparency and the reasonableness of decision-making; and the effect of transparency on the degree of scientific rigour of assessment. Reliable and valid measurement models were obtained to describe the composites of 'transparency', 'participation', 'scientific rigour' and 'reasonableness'. The structural equation model was among the first applications of PLS-PM to coverage decision-making. It allowed testing of hypotheses in situations where there are links between several non-observable constructs. PLS-PM was compatible in accounting for the complexity of coverage decisions to obtain a more realistic perspective for empirical analysis. The model specification can be used for hypothesis testing by using larger sample sizes and for data in the full domain of health technologies.

  6. Translating multilevel theory into multilevel research: Challenges and opportunities for understanding the social determinants of psychiatric disorders

    PubMed Central

    Dunn, Erin C.; Masyn, Katherine E.; Yudron, Monica; Jones, Stephanie M.; Subramanian, S.V.

    2014-01-01

    The observation that features of the social environment, including family, school, and neighborhood characteristics, are associated with individual-level outcomes has spurred the development of dozens of multilevel or ecological theoretical frameworks in epidemiology, public health, psychology, and sociology, among other disciplines. Despite the widespread use of such theories in etiological, intervention, and policy studies, challenges remain in bridging multilevel theory and empirical research. This paper set out to synthesize these challenges and provide specific examples of methodological and analytical strategies researchers are using to gain a more nuanced understanding of the social determinants of psychiatric disorders, with a focus on children’s mental health. To accomplish this goal, we begin by describing multilevel theories, defining their core elements, and discussing what these theories suggest is needed in empirical work. In the second part, we outline the main challenges researchers face in translating multilevel theory into research. These challenges are presented for each stage of the research process. In the third section, we describe two methods being used as alternatives to traditional multilevel modeling techniques to better bridge multilevel theory and multilevel research. These are: (1) multilevel factor analysis and multilevel structural equation modeling; and (2) dynamic systems approaches. Through its review of multilevel theory, assessment of existing strategies, and examination of emerging methodologies, this paper offers a framework to evaluate and guide empirical studies on the social determinants of child psychiatric disorders as well as health across the lifecourse. PMID:24469555

  7. Synoptic, Global Mhd Model For The Solar Corona

    NASA Astrophysics Data System (ADS)

    Cohen, Ofer; Sokolov, I. V.; Roussev, I. I.; Gombosi, T. I.

    2007-05-01

    The common techniques for mimicking solar corona heating and solar wind acceleration in global MHD models are as follows: 1) additional terms in the momentum and energy equations derived from the WKB approximation for Alfvén wave turbulence; 2) an empirical heat source in the energy equation; 3) a non-uniform distribution of the polytropic index, γ, used in the energy equation. In our model, we choose the latter approach. However, in order to get a more realistic distribution of γ, we use the empirical Wang-Sheeley-Arge (WSA) model to constrain the MHD solution. The WSA model provides the distribution of the asymptotic solar wind speed from the potential field approximation; therefore it also provides the distribution of the kinetic energy. Assuming that far from the Sun the total energy is dominated by the energy of the bulk motion, and assuming conservation of the Bernoulli integral, we can trace the total energy along a magnetic field line to the solar surface. On the surface the gravity is known and the kinetic energy is negligible. Therefore, we can obtain the surface distribution of γ as a function of the final speed of the wind originating from each point. By interpolating γ to a spherically uniform value on the source surface, we use this spatial distribution of γ in the energy equation to obtain a self-consistent, steady-state MHD solution for the solar corona. We present the model results for different Carrington Rotations.
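    The energy bookkeeping described above can be written out explicitly. A commonly used form of the Bernoulli integral for a steady polytropic wind is sketched below; the exact formulation used in the model may differ.

      \[
        \tfrac{1}{2}u^{2} \;+\; \frac{\gamma}{\gamma-1}\,\frac{p}{\rho} \;-\; \frac{GM_{\odot}}{r}
        \;=\; \text{const along a field line.}
      \]
      Far from the Sun the left-hand side is dominated by $\tfrac{1}{2}u_{\infty}^{2}$, where $u_{\infty}$ is the WSA asymptotic speed; at the solar surface the kinetic term is negligible, so
      \[
        \frac{\gamma}{\gamma-1}\,\frac{p_{0}}{\rho_{0}}
        \;\approx\; \tfrac{1}{2}u_{\infty}^{2} \;+\; \frac{GM_{\odot}}{R_{\odot}},
      \]
      which can be solved for the surface value of $\gamma$ at each field-line footpoint.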

  8. Parasitic gastro-enteritis in lambs — A model for estimating the timing of the larval emergence peak

    NASA Astrophysics Data System (ADS)

    Starr, J. R.; Thomas, R. J.

    1980-09-01

    The life history of the nematode parasites of domestic ruminants usually involves the development and survival of free-living stages on pasture. The pasture is, therefore, the site of deposition, development and transmission of nematode infection and meteorological factors affecting the pasture will affect the parasites. Recently Thomas and Starr (1978) discussed an empirical technique for forecasting the timing of the summer wave of gastro-intestinal parasitism in North-East England in the lamb crop using meteorological data and in particular estimates of the duration of “surface wetness”. This paper presents an attempt to model “surface wetness” and the temperature limitation to nematode development.

  9. Focal ratio degradation: a new perspective

    NASA Astrophysics Data System (ADS)

    Haynes, Dionne M.; Withford, Michael J.; Dawes, Judith M.; Haynes, Roger; Bland-Hawthorn, Joss

    2008-07-01

    We have developed an alternative FRD empirical model for the parallel laser beam technique which can accommodate contributions from both scattering and modal diffusion. It is consistent with scattering inducing a Lorentzian contribution and modal diffusion inducing a Gaussian contribution. The convolution of these two functions produces a Voigt function which is shown to better simulate the observed behavior of the FRD distribution and provides a greatly improved fit over the standard Gaussian fitting approach. The Voigt model can also be used to quantify the amount of energy displaced by FRD, therefore allowing astronomical instrument scientists to identify, quantify and potentially minimize the various sources of FRD, and optimise the fiber and instrument performance.
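    Because the convolution of a Gaussian and a Lorentzian is by definition a Voigt profile, the fit can be sketched directly with SciPy's voigt_profile; the synthetic far-field profile, widths, and noise level below are illustrative assumptions, not measured FRD data.

      import numpy as np
      from scipy.special import voigt_profile
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(5)
      theta = np.linspace(-10, 10, 400)                       # output angle (degrees)
      true_profile = 100 * voigt_profile(theta, 1.5, 0.4)     # synthetic far-field FRD profile
      measured = true_profile + rng.normal(0, 0.2, theta.size)

      def voigt(theta, amp, sigma, gamma):
          # sigma is the Gaussian width (modal diffusion); gamma is the
          # Lorentzian half-width (scattering).
          return amp * voigt_profile(theta, sigma, gamma)

      (amp, sigma, gamma), _ = curve_fit(voigt, theta, measured, p0=(50.0, 1.0, 0.1))
      print(f"Gaussian sigma = {sigma:.2f}, Lorentzian gamma = {gamma:.2f}")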

  10. Numerical Simulation of Ballistic Impact on Particulate Composite Target using Discrete Element Method: 1-D and 2-D Models

    NASA Astrophysics Data System (ADS)

    Nair, Rajesh P.; Lakshmana Rao, C.

    2014-01-01

    Ballistic impact (BI) is a study that deals with a projectile hitting a target and observing its effects in terms of deformation and fragmentation of the target. The Discrete Element Method (DEM) is a powerful numerical technique used to model solid and particulate media. Here, an attempt is made to simulate the BI process using DEM. 1-D DEM for BI is developed and depth of penetration (DOP) is obtained. The DOP is compared with results obtained from 2-D DEM. DEM results are found to match empirical results. Effects of strain rate sensitivity of the material response on DOP are also simulated.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramirez, A. P. D.; Vanhoy, J. R.; Hicks, S. F.

    Elastic and inelastic differential cross sections for neutron scattering from 56Fe have been measured for several incident energies from 1.30 to 7.96 MeV at the University of Kentucky Accelerator Laboratory. Scattered neutrons were detected with a C6D6 liquid scintillation detector using pulse-shape discrimination and time-of-flight techniques. The deduced cross sections have been compared with previously reported data, with predictions from the ENDF, JENDL, and JEFF evaluation databases, and with theoretical calculations performed using different optical model potentials in the TALYS and EMPIRE nuclear reaction codes. The coupled-channel calculations based on the vibrational and soft-rotor models are found to describe the experimental (n,n0) and (n,n1) cross sections well.

  12. Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to have demonstrated empirical advantages through consistently lower errors and faster computational times.
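    The cross-validated choice of the LASSO regularization constant can be illustrated with a generic solver. The sketch below (Python, scikit-learn's LassoCV) recovers a sparse coefficient vector from an underdetermined system; the random measurement matrix and sparsity level are illustrative assumptions, and the solver stands in for the l1_ls/SpaRSA/CGIST/FPC_AS/ADMM solvers compared in the report.

      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(6)
      # Underdetermined system: 80 samples, 300 polynomial-chaos-like basis columns,
      # only 10 of which carry signal (a sparse coefficient vector).
      n_samples, n_basis = 80, 300
      A = rng.normal(size=(n_samples, n_basis))
      c_true = np.zeros(n_basis)
      c_true[rng.choice(n_basis, 10, replace=False)] = rng.normal(size=10)
      y = A @ c_true + rng.normal(0, 0.01, n_samples)

      # Cross-validation selects the LASSO regularization constant automatically,
      # which is one way to mitigate overfitting in sparse recovery.
      lasso = LassoCV(cv=5, fit_intercept=False).fit(A, y)
      print("selected regularization constant:", lasso.alpha_)
      print("nonzero coefficients recovered:", int(np.sum(np.abs(lasso.coef_) > 1e-6)))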

  13. Predicted seafloor facies of Central Santa Monica Bay, California

    USGS Publications Warehouse

    Dartnell, Peter; Gardner, James V.

    2004-01-01

    Summary -- Mapping surficial seafloor facies (sand, silt, muddy sand, rock, etc.) should be the first step in marine geological studies and is crucial when modeling sediment processes, pollution transport, deciphering tectonics, and defining benthic habitats. This report outlines an empirical technique that predicts the distribution of seafloor facies for a large area offshore Los Angeles, CA using high-resolution bathymetry and co-registered, calibrated backscatter from multibeam echosounders (MBES) correlated to ground-truth sediment samples. The technique uses a series of procedures that involve supervised classification and a hierarchical decision tree classification that are now available in advanced image-analysis software packages. Derivative variance images of both bathymetry and acoustic backscatter are calculated from the MBES data and then used in a hierarchical decision-tree framework to classify the MBES data into areas of rock, gravelly muddy sand, muddy sand, and mud. A quantitative accuracy assessment on the classification results is performed using ground-truth sediment samples. The predicted facies map is also ground-truthed using seafloor photographs and high-resolution sub-bottom seismic-reflection profiles. This Open-File Report contains the predicted seafloor facies map as a georeferenced TIFF image along with the multibeam bathymetry and acoustic backscatter data used in the study as well as an explanation of the empirical classification process.
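    The hierarchical decision-tree classification step can be sketched with standard tools. The example below (Python, scikit-learn) trains a shallow decision tree on synthetic per-pixel bathymetry and backscatter derivatives and scores it against held-out "ground-truth" labels; the features, labels, and tree depth are illustrative assumptions, not the Santa Monica Bay workflow or its image-analysis software.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(7)
      n = 600
      # Synthetic per-pixel features: backscatter, depth, and their local variances.
      X = np.column_stack([
          rng.normal(-30, 6, n),        # acoustic backscatter (dB)
          rng.normal(-400, 150, n),     # bathymetry (m)
          rng.gamma(2.0, 1.0, n),       # backscatter variance
          rng.gamma(2.0, 0.5, n),       # bathymetric variance
      ])
      facies = rng.integers(0, 4, n)    # 0=rock, 1=gravelly muddy sand, 2=muddy sand, 3=mud

      X_train, X_test, y_train, y_test = train_test_split(X, facies, random_state=0)
      tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
      print("accuracy against held-out samples:", accuracy_score(y_test, tree.predict(X_test)))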

  14. Consideration of VT5 etch-based OPC modeling

    NASA Astrophysics Data System (ADS)

    Lim, ChinTeong; Temchenko, Vlad; Kaiser, Dieter; Meusel, Ingo; Schmidt, Sebastian; Schneider, Jens; Niehoff, Martin

    2008-03-01

    Including etch-based empirical data during OPC model calibration is a desirable yet controversial choice for OPC modeling, especially for processes with large litho-to-etch biasing. While many OPC software tools now provide this functionality, few etch-based models have been implemented in manufacturing because of risk considerations such as compromised resist and optical effect prediction, etch model accuracy, and runtime concerns. The conventional approach of applying rule-based corrections alongside a resist model is popular but requires lengthy code generation to provide a leaner OPC input. This work discusses the risk factors and their consideration, together with an introduction to the techniques used within Mentor Calibre VT5 etch-based modeling at the sub-90nm technology node. Various strategies are discussed with the aim of better handling large etch bias offsets without adding complexity to the final OPC package. Finally, results are presented to assess the advantages and limitations of the method chosen.

  15. Deriving Criteria-supporting Benchmark Values from Empirical Response Relationships: Comparison of Statistical Techniques and Effect of Log-transforming the Nutrient Variable

    EPA Science Inventory

    In analyses supporting the development of numeric nutrient criteria, multiple statistical techniques can be used to extract critical values from stressor response relationships. However there is little guidance for choosing among techniques, and the extent to which log-transfor...

  16. Integration of different data gap filling techniques to facilitate assessment of polychlorinated biphenyls: A proof of principle case study (ASCCT meeting)

    EPA Science Inventory

    Data gap filling techniques are commonly used to predict hazard in the absence of empirical data. The most established techniques are read-across, trend analysis and quantitative structure-activity relationships (QSARs). Toxic equivalency factors (TEFs) are less frequently used d...

  17. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    NASA Astrophysics Data System (ADS)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) signal is a weak, nonlinear, and non-stationary signal that reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise, such as high/low-frequency noise, powerline interference, and baseline wander. Hence, the removal of noise from the ECG signal is a vital step in ECG signal processing and plays a significant role in the detection and diagnosis of heart disease. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising but imperfect method for processing nonlinear and non-stationary signals such as the ECG. Combining EMD with other algorithms is a good way to improve noise-cancellation performance. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in EMD-based ECG signal denoising are outlined.
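    A crude version of EMD-based high-frequency noise removal is sketched below (Python, assuming the PyEMD package): decompose the noisy signal, discard the first few IMFs where high-frequency noise and powerline interference concentrate, and sum the remainder. The synthetic signal, noise model, and number of discarded IMFs are illustrative assumptions; practical denoisers select or threshold IMFs adaptively.

      import numpy as np
      from PyEMD import EMD                         # assumes the PyEMD ("EMD-signal") package

      fs = 360.0                                    # sampling rate in Hz (illustrative)
      t = np.arange(0.0, 5.0, 1.0 / fs)
      rng = np.random.default_rng(8)
      clean = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 8.0 * t)   # crude ECG stand-in
      noisy = clean + 0.2 * rng.standard_normal(t.size) + 0.3 * np.sin(2 * np.pi * 50.0 * t)

      imfs = EMD().emd(noisy)

      # High-frequency noise and powerline interference concentrate in the first IMFs;
      # discard them and sum the remaining IMFs (including the residue).
      n_discard = 2
      denoised = imfs[n_discard:].sum(axis=0)
      rmse = np.sqrt(np.mean((denoised - clean) ** 2))
      print(f"RMSE of denoised signal against the clean stand-in: {rmse:.3f}")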

  18. Coupled latent differential equation with moderators: simulation and application.

    PubMed

    Hu, Yueqin; Boker, Steve; Neale, Michael; Klump, Kelly L

    2014-03-01

    Latent differential equations (LDE) use differential equations to analyze time series data. Because of the recent development of this technique, some issues critical to running an LDE model remain. In this article, the authors provide solutions to some of these issues and recommend a step-by-step procedure demonstrated on a set of empirical data, which models the interaction between ovarian hormone cycles and emotional eating. Results indicated that emotional eating is self-regulated. For instance, when people do more emotional eating than normal, they will subsequently tend to decrease their emotional eating behavior. In addition, a sudden increase will produce a stronger tendency to decrease than will a slow increase. We also found that emotional eating is coupled with the cycle of the ovarian hormone estradiol, and the peak of emotional eating occurs after the peak of estradiol. The self-reported average level of negative affect moderates the frequency of eating regulation and the coupling strength between eating and estradiol. Thus, people with a higher average level of negative affect tend to fluctuate faster in emotional eating, and their eating behavior is more strongly coupled with the hormone estradiol. Permutation tests on these empirical data supported the reliability of using LDE models to detect self-regulation and a coupling effect between two regulatory behaviors. (c) 2014 APA, all rights reserved.

  19. Leader dark traits, workplace bullying, and employee depression: Exploring mediation and the role of the dark core.

    PubMed

    Tokarev, Alexander; Phillips, Abigail R; Hughes, David J; Irwing, Paul

    2017-10-01

    A growing body of empirical evidence now supports a negative association between dark traits in leaders and the psychological health of employees. To date, such investigations have mostly focused on psychopathy, nonspecific measures of psychological wellbeing, and have not considered the mechanisms through which these relationships might operate. In the current study (N = 508), we utilized other-ratings of personality (employees rated leaders' personality), psychometrically robust measures, and sophisticated modeling techniques, to examine whether the effects of leaders' levels of narcissism and psychopathy on employee depression are mediated by workplace bullying. Structural equation models provided clear evidence to suggest that employee perceptions of both leader narcissism and psychopathy are associated with increased workplace bullying (25.8% and 41.0% variance explained, respectively) and that workplace bullying fully mediates the effect of leader narcissism and psychopathy on employee depression (21.5% and 20.8% variance explained, respectively). However, when psychopathy and narcissism were modeled concurrently, narcissism did not explain any variance in bullying, suggesting that it is the overlap between psychopathy and narcissism, namely, the "dark core," which primarily accounts for the observed effects. We examined this assertion empirically and explored the unique effects of the subfactors of psychopathy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. New experimental approaches to the biology of flight control systems.

    PubMed

    Taylor, Graham K; Bacic, Marko; Bomphrey, Richard J; Carruthers, Anna C; Gillies, James; Walker, Simon M; Thomas, Adrian L R

    2008-01-01

    Here we consider how new experimental approaches in biomechanics can be used to attain a systems-level understanding of the dynamics of animal flight control. Our aim in this paper is not to provide detailed results and analysis, but rather to tackle several conceptual and methodological issues that have stood in the way of experimentalists in achieving this goal, and to offer tools for overcoming these. We begin by discussing the interplay between analytical and empirical methods, emphasizing that the structure of the models we use to analyse flight control dictates the empirical measurements we must make in order to parameterize them. We then provide a conceptual overview of tethered-flight paradigms, comparing classical 'open-loop' and 'closed-loop' setups, and describe a flight simulator that we have recently developed for making flight dynamics measurements on tethered insects. Next, we provide a conceptual overview of free-flight paradigms, focusing on the need to use system identification techniques in order to analyse the data they provide, and describe two new techniques that we have developed for making flight dynamics measurements on freely flying birds. First, we describe a technique for obtaining inertial measurements of the orientation, angular velocity and acceleration of a steppe eagle Aquila nipalensis in wide-ranging free flight, together with synchronized measurements of wing and tail kinematics using onboard instrumentation and video cameras. Second, we describe a photogrammetric method to measure the 3D wing kinematics of the eagle during take-off and landing. In each case, we provide demonstration data to illustrate the kinds of information available from each method. We conclude by discussing the prospects for systems-level analyses of flight control using these techniques and others like them.

  1. Use of a Computer-Mediated Delphi Process to Validate a Mass Casualty Conceptual Model

    PubMed Central

    CULLEY, JOAN M.

    2012-01-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters. PMID:21076283

  2. Use of a computer-mediated Delphi process to validate a mass casualty conceptual model.

    PubMed

    Culley, Joan M

    2011-05-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters.

  3. Semi-empirical long-term cycle life model coupled with an electrolyte depletion function for large-format graphite/LiFePO4 lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min

    2017-10-01

    To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.

  4. Empirical Modeling of Physiochemical Immune Response of Multilayer Zinc Oxide Nanomaterials under UV Exposure to Melanoma and Foreskin Fibroblasts

    NASA Astrophysics Data System (ADS)

    Fakhar-E-Alam, Muhammad; Akram, M. Waseem; Iqbal, Seemab; Alimgeer, K. S.; Atif, M.; Sultana, K.; Willander, M.; Wang, Zhiming M.

    2017-04-01

    Carcinogenesis is a complex molecular process starting with genetic and epigenetic alterations, mutation stimulation, and DNA modification, which leads to proteomic adaptation ending with an uncontrolled proliferation mechanism. The current research focused on the empirical modelling of the physiological response of human melanoma cells (FM55P) and human foreskin fibroblast cells (AG01518) to the multilayer zinc oxide (ZnO) nanomaterials under UV-A exposure. To validate this experimental scheme, multilayer ZnO nanomaterials were grown on a femtotip silver capillary and conjugated with protoporphyrin IX (PpIX). Furthermore, the PpIX-conjugated ZnO nanomaterials grown on the probe were inserted into human melanoma (FM55P) and foreskin fibroblast cells (AG01518) under UV-A light exposure. Interestingly, significant cell necrosis was observed because of a loss in mitochondrial membrane potential just after insertion of the femtotip tool. Intense reactive oxygen species (ROS) fluorescence was observed for the PpIX-conjugated ZnO NWs in the femtotip model under UV exposure. Results were verified by applying several experimental techniques, e.g., ROS detection, MTT assay, and fluorescence spectroscopy. The present work reports experimental modelling of cell necrosis in normal human skin as well as cancerous tissue. These results pave the way for a more rational strategy for biomedical and clinical applications.

  5. Covariate adjustment of event histories estimated from Markov chains: the additive approach.

    PubMed

    Aalen, O O; Borgan, O; Fekjaer, H

    2001-12-01

    Markov chain models are frequently used for studying event histories that include transitions between several states. An empirical transition matrix for nonhomogeneous Markov chains has previously been developed, including a detailed statistical theory based on counting processes and martingales. In this article, we show how to estimate transition probabilities dependent on covariates. This technique may, e.g., be used for making estimates of individual prognosis in epidemiological or clinical studies. The covariates are included through nonparametric additive models on the transition intensities of the Markov chain. The additive model allows for estimation of covariate-dependent transition intensities, and again a detailed theory exists based on counting processes. The martingale setting now allows for a very natural combination of the empirical transition matrix and the additive model, resulting in estimates that can be expressed as stochastic integrals, and hence their properties are easily evaluated. Two medical examples will be given. In the first example, we study how the lung cancer mortality of uranium miners depends on smoking and radon exposure. In the second example, we study how the probability of being in response depends on patient group and prophylactic treatment for leukemia patients who have had a bone marrow transplantation. A program in R and S-PLUS that can carry out the analyses described here has been developed and is freely available on the Internet.
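
    A minimal sketch of the additive intensity form described above, in notation assumed here rather than taken from the article: for a transition h -> j of the Markov chain, the covariate-dependent intensity is modelled as

        \alpha_{hj}(t \mid x) = \beta_{hj,0}(t) + \beta_{hj,1}(t)\,x_1 + \dots + \beta_{hj,p}(t)\,x_p ,

    where the cumulative regression functions B_{hj,k}(t) = \int_0^t \beta_{hj,k}(s)\,ds are estimated by counting-process least squares. The covariate-specific transition probability matrix is then obtained as a product-integral of the estimated cumulative intensity matrix, which is the step that combines the empirical transition matrix with the additive model and yields estimators expressible as stochastic integrals.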

  6. Monsoons: Processes, predictability, and the prospects for prediction

    NASA Astrophysics Data System (ADS)

    Webster, P. J.; Magaña, V. O.; Palmer, T. N.; Shukla, J.; Thomas, R. A.; Yanai, M.; Yasunari, T.

    1998-06-01

    The Tropical Ocean-Global Atmosphere (TOGA) program sought to determine the predictability of the coupled ocean-atmosphere system. The World Climate Research Programme's (WCRP) Global Ocean-Atmosphere-Land System (GOALS) program seeks to explore predictability of the global climate system through investigation of the major planetary heat sources and sinks, and interactions between them. The Asian-Australian monsoon system, which undergoes aperiodic and high-amplitude variations on intraseasonal, annual, biennial and interannual timescales, is a major focus of GOALS. Empirical seasonal forecasts of the monsoon have been made with moderate success for over 100 years. More recent modeling efforts have not been successful. Even simulation of the mean structure of the Asian monsoon has proven elusive, and the observed ENSO-monsoon relationship has been difficult to replicate. Divergence in simulation skill occurs between integrations by different models or between members of ensembles of the same model. This degree of spread is surprising given the relative success of empirical forecast techniques. Two possible explanations are presented: difficulty in modeling the monsoon regions and nonlinear error growth due to regional hydrodynamical instabilities. It is argued that the reconciliation of these explanations is imperative for prediction of the monsoon to be improved. To this end, a thorough description of observed monsoon variability and the physical processes that are thought to be important is presented. Prospects of improving prediction and some strategies that may help achieve improvement are discussed.

  7. Average dispersal success: linking home range, dispersal, and metapopulation dynamics to reserve design.

    PubMed

    Fagan, William F; Lutscher, Frithjof

    2006-04-01

    Spatially explicit models for populations are often difficult to tackle mathematically and, in addition, require detailed data on individual movement behavior that are not easily obtained. An approximation known as the "average dispersal success" provides a tool for converting complex models, which may include stage structure and a mechanistic description of dispersal, into a simple matrix model. This simpler matrix model has two key advantages. First, it is easier to parameterize from the types of empirical data typically available to conservation biologists, such as survivorship, fecundity, and the fraction of juveniles produced in a study area that also recruit within the study area. Second, it is more amenable to theoretical investigation. Here, we use the average dispersal success approximation to develop estimates of the critical reserve size for systems comprising single patches or simple metapopulations. The quantitative approach can be used for both plants and animals; however, to provide a concrete example of the technique's utility, we focus on a special case pertinent to animals. Specifically, for territorial animals, we can characterize such an estimate of minimum viable habitat area in terms of the number of home ranges that the reserve contains. Consequently, the average dispersal success approximation provides a framework through which home range size, natal dispersal distances, and metapopulation dynamics can be linked to reserve design. We briefly illustrate the approach using empirical data for the swift fox (Vulpes velox).
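
    A hedged statement of the approximation, with symbols assumed here: if k(x, y) is the dispersal kernel giving the density of settlement at x for a disperser released at y within a reserve \Omega, then the dispersal success of a release point and its spatial average are

        S(y) = \int_{\Omega} k(x, y)\,dx , \qquad \bar{s} = \frac{1}{|\Omega|} \int_{\Omega} S(y)\,dy .

    Replacing the spatially explicit redistribution step by the single scalar \bar{s} applied to the recruitment entries of the stage-structured matrix is what reduces the model to one that can be parameterized from survivorship, fecundity, and the fraction of locally produced juveniles that recruit locally.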

  8. AN EVALUATION OF THREE EMPIRICAL AIR-TO-LEAF MODELS FOR POLYCHLORINATED DIBENZO-P-DIOXINS AND DIBENZOFURANS

    EPA Science Inventory

    Three empirical air-to-leaf models for estimating grass concentrations of polychlorinated dibenzo-p-dioxins and dibenzofurans (abbreviated dioxins and furans) from air concentrations of these compounds are described and tested against two field data sets. All are empirical in th...

  9. Empirical agreement in model validation.

    PubMed

    Jebeile, Julie; Barberousse, Anouk

    2016-04-01

    Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion, as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. An empirical analysis of the corporate call decision

    NASA Astrophysics Data System (ADS)

    Carlson, Murray Dean

    1998-12-01

    In this thesis we provide insights into the behavior of financial managers of utility companies by studying their decisions to redeem callable preferred shares. In particular, we investigate whether or not an option pricing based model of the call decision, with managers who maximize shareholder value, does a better job of explaining callable preferred share prices and call decisions than do other models of the decision. In order to perform these tests, we extend an empirical technique introduced by Rust (1987) to include the use of information from preferred share prices in addition to the call decisions. The model we develop to value the option embedded in a callable preferred share differs from standard models in two ways. First, as suggested in Kraus (1983), we explicitly account for transaction costs associated with a redemption. Second, we account for state variables that are observed by the decision makers but not by the preferred shareholders. We interpret these unobservable state variables as the benefits and costs associated with a change in capital structure that can accompany a call decision. When we add this variable, our empirical model changes from one which predicts exactly when a share should be called to one which predicts the probability of a call as the function of the observable state. These two modifications of the standard model result in predictions of calls, and therefore of callable preferred share prices, that are consistent with several previously unexplained features of the data; we show that the predictive power of the model is improved in a statistical sense by adding these features to the model. The pricing and call probability functions from our model do a good job of describing call decisions and preferred share prices for several utilities. Using data from shares of the Pacific Gas and Electric Co. (PGE) we obtain reasonable estimates for the transaction costs associated with a call. Using a formal empirical test, we are able to conclude that the managers of the Pacific Gas and Electric Company clearly take into account the value of the option to delay the call when making their call decisions. Overall, the model seems to be robust to tests of its specification and does a better job of describing the data than do simpler models of the decision making process. Limitations in the data do not allow us to perform the same tests in a larger cross-section of utility companies. However, we are able to estimate transaction cost parameters for many firms and these do not seem to vary significantly from those of PGE. This evidence does not cause us to reject our hypothesis that managerial behavior is consistent with a model in which managers maximize shareholder value.

  11. A Prototype Physical Database for Passive Microwave Retrievals of Precipitation over the US Southern Great Plains

    NASA Technical Reports Server (NTRS)

    Ringerud, S.; Kummerow, C. D.; Peters-Lidard, C. D.

    2015-01-01

    An accurate understanding of the instantaneous, dynamic land surface emissivity is necessary for a physically based, multi-channel passive microwave precipitation retrieval scheme over land. In an effort to assess the feasibility of the physical approach for land surfaces, a semi-empirical emissivity model is applied for calculation of the surface component in a test area of the US Southern Great Plains. A physical emissivity model, using land surface model data as input, is used to calculate emissivity at the 10GHz frequency, combining contributions from the underlying soil and vegetation layers, including the dielectric and roughness effects of each medium. An empirical technique is then applied, based upon a robust set of observed channel covariances, extending the emissivity calculations to all channels. For calculation of the hydrometeor contribution, reflectivity profiles from the Tropical Rainfall Measurement Mission Precipitation Radar (TRMM PR) are utilized along with coincident brightness temperatures (Tbs) from the TRMM Microwave Imager (TMI), and cloud-resolving model profiles. Ice profiles are modified to be consistent with the higher frequency microwave Tbs. Resulting modeled top of the atmosphere Tbs show correlations to observations of 0.9, biases of 1K or less, root-mean-square errors on the order of 5K, and improved agreement over the use of climatological emissivity values. The synthesis of these models and data sets leads to the creation of a simple prototype Tb database that includes both dynamic surface and atmospheric information physically consistent with the land surface model, emissivity model, and atmospheric information.

  12. Seasonal Course of Boreal Forest Reflectance

    NASA Astrophysics Data System (ADS)

    Rautiainen, M.; Heiskanen, J.; Mottus, M.; Eigemeier, E.; Majasalmi, T.; Vesanto, V.; Stenberg, P.

    2011-12-01

    According to the IPCC 2007 report, northern ecosystems are especially likely to be affected by climate change. Therefore, understanding the seasonal dynamics of boreal ecosystems and linking their phenological phases to satellite reflectance data is crucial for the efficient monitoring and modeling of northern hemisphere vegetation dynamics and productivity trends in the future. The seasonal reflectance course of a boreal forest is a result of the temporal cycle in optical properties of both the tree canopy and understory layers. Seasonal reflectance changes of the two layers are explained by the complex combination of changes in biochemistry and geometrical structure of different plant species as well as seasonal and diurnal variation in solar illumination. Analyzing the role of each of the contributing factors can only be achieved by linking radiative transfer modeling to empirical reflectance data sets. The aim of our project is to identify the seasonal reflectance changes and their driving factors in boreal forests from optical satellite images using new forest reflectance modeling techniques based on the spectral invariants theory. We have measured an extensive ground reference database on the seasonal changes of structural and optical properties of tree canopy and understory layers for a boreal forest site in central Finland in 2010. The database is complemented by a concurrent time series of Hyperion and SPOT satellite images. We use the empirical ground reference database as input to forest reflectance simulations and validate our simulation results using the empirical reflectance data obtained from satellite images. Based on our simulation results, we quantify 1) the driving factors influencing the seasonal reflectance courses of a boreal forest, and 2) the relative contribution of the understory and tree-level layers to forest reflectance as the growing season proceeds.

  13. Fatigue crack propagation behavior of stainless steel welds

    NASA Astrophysics Data System (ADS)

    Kusko, Chad S.

    The fatigue crack propagation behavior of austenitic and duplex stainless steel base and weld metals has been investigated using various fatigue crack growth test procedures, ferrite measurement techniques, light optical microscopy, stereomicroscopy, scanning electron microscopy, and optical profilometry. The compliance offset method has been incorporated to measure crack closure during testing in order to determine a stress ratio at which such closure is overcome. Based on this method, an empirically determined stress ratio of 0.60 has been shown to be very successful in overcoming crack closure for all da/dN for gas metal arc and laser welds. This empirically determined stress ratio of 0.60 has been applied to testing of stainless steel base metal and weld metal to understand the influence of microstructure. Regarding the base metal investigation, for 316L and AL6XN base metals, grain size and grain plus twin size have been shown to influence the resulting crack growth behavior. The cyclic plastic zone size model has been applied to accurately model crack growth behavior for austenitic stainless steels when the average grain plus twin size is considered. Additionally, the effect of the tortuous crack paths observed for the larger grain size base metals can be explained by a literature model for crack deflection. Constant Delta K testing has been used to characterize the crack growth behavior across various regions of the gas metal arc and laser welds at the empirically determined stress ratio of 0.60. Despite an extensive range of stainless steel weld metal FN and delta-ferrite morphologies, neither FN nor delta-ferrite morphology significantly influenced the room-temperature crack growth behavior. However, variations in weld metal da/dN can be explained by local surface roughness resulting from large columnar grains and tortuous crack paths in the weld metal.

  14. Coastal geomorphology through the looking glass

    NASA Astrophysics Data System (ADS)

    Sherman, Douglas J.; Bauer, Bernard O.

    1993-07-01

    Coastal geomorphology will gain future prominence as environmentally sound coastal zone management strategies, requiring scientific information, begin to supplant engineered shoreline stabilization schemes for amelioration of coastal hazards. We anticipate substantial change and progress over the next two decades, but we do not predict revolutionary advances in theoretical understanding of coastal geomorphic systems. Paradigm shifts will not occur; knowledge will advance incrementally. We offer predictions for specific coastal systems delineated according to scale. For the surf zone, we predict advances in wave shoaling theory, but not for wave breaking. We also predict greater understanding of turbulent processes, and substantive improvements in surf-zone circulation and radiation stress models. Very few of these improvements are expected to be incorporated in geomorphic models of coastal processes. We do not envision improvements in the theory of sediment transport, although some new and exciting empirical observations are probable. At the beach and nearshore scale, we predict the development of theoretically-based, two- and three-dimensional morphodynamical models that account for non-linear, time-dependent feedback processes using empirically calibrated modules. Most of the geomorphic research effort, however, will be concentrated at the scale of littoral cells. This scale is appropriate for coastal zone management because processes at this scale are manageable using traditional geomorphic techniques. At the largest scale, little advance will occur in our understanding of how coastlines evolve. Any empirical knowledge that is gained will accrue indirectly. Finally, we contend that anthropogenic influences, directly and indirectly, will be powerful forces in steering the future of Coastal Geomorphology. "If you should suddenly feel the need for a lesson in humility, try forecasting the future…" (Kleppner, 1991, p. 10).

  15. Decision-making in healthcare: a practical application of partial least square path modelling to coverage of newborn screening programmes

    PubMed Central

    2012-01-01

    Background Decision-making in healthcare is complex. Research on coverage decision-making has focused on comparative studies for several countries, statistical analyses for single decision-makers, the decision outcome and appraisal criteria. Accounting for decision processes extends the complexity, as they are multidimensional and process elements need to be regarded as latent constructs (composites) that are not observed directly. The objective of this study was to present a practical application of partial least square path modelling (PLS-PM) to evaluate how it offers a method for empirical analysis of decision-making in healthcare. Methods Empirical approaches that applied PLS-PM to decision-making in healthcare were identified through a systematic literature search. PLS-PM was used as an estimation technique for a structural equation model that specified hypotheses between the components of decision processes and the reasonableness of decision-making in terms of medical, economic and other ethical criteria. The model was estimated for a sample of 55 coverage decisions on the extension of newborn screening programmes in Europe. Results were evaluated by standard reliability and validity measures for PLS-PM. Results After modification by dropping two indicators that showed poor measures in the measurement models’ quality assessment and were not meaningful for newborn screening, the structural equation model estimation produced plausible results. The presence of three influences was supported: the links between both stakeholder participation or transparency and the reasonableness of decision-making; and the effect of transparency on the degree of scientific rigour of assessment. Reliable and valid measurement models were obtained to describe the composites of ‘transparency’, ‘participation’, ‘scientific rigour’ and ‘reasonableness’. Conclusions The structural equation model was among the first applications of PLS-PM to coverage decision-making. It allowed testing of hypotheses in situations where there are links between several non-observable constructs. PLS-PM was compatible in accounting for the complexity of coverage decisions to obtain a more realistic perspective for empirical analysis. The model specification can be used for hypothesis testing by using larger sample sizes and for data in the full domain of health technologies. PMID:22856325

  16. Designing Caregiver-Implemented Shared-Reading Interventions to Overcome Implementation Barriers

    PubMed Central

    Logan, Jessica R.; Damschroder, Laura

    2015-01-01

    Purpose This study presents an application of the theoretical domains framework (TDF; Michie et al., 2005), an integrative framework drawing on behavior-change theories, to speech-language pathology. Methods A multistep procedure was used to identify barriers affecting caregivers' implementation of shared-reading interventions with their children with language impairment (LI). The authors examined caregiver-level data corresponding to implementation issues from two randomized controlled trials and mapped these to domains in the TDF as well as empirically validated behavior-change techniques. Results Four barriers to implementation were identified as potentially affecting caregivers' implementation: time pressures, reading difficulties, discomfort with reading, and lack of awareness of benefits. These were mapped to 3 TDF domains: intentions, beliefs about capabilities, and skills. In turn, 4 behavior-change techniques were identified as potential vehicles for affecting these domains: reward, feedback, model, and encourage. An ongoing study is described that is determining the effects of these techniques for improving caregivers' implementation of a shared-reading intervention. Conclusions A description of the steps to identifying barriers to implementation, in conjunction with an ongoing experiment that will explicitly determine whether behavior-change techniques affect these barriers, provides a model for how implementation science can be used to identify and overcome implementation barriers in the treatment of communication disorders. PMID:26262941

  17. Designing Caregiver-Implemented Shared-Reading Interventions to Overcome Implementation Barriers.

    PubMed

    Justice, Laura M; Logan, Jessica R; Damschroder, Laura

    2015-12-01

    This study presents an application of the theoretical domains framework (TDF; Michie et al., 2005), an integrative framework drawing on behavior-change theories, to speech-language pathology. A multistep procedure was used to identify barriers affecting caregivers' implementation of shared-reading interventions with their children with language impairment (LI). The authors examined caregiver-level data corresponding to implementation issues from two randomized controlled trials and mapped these to domains in the TDF as well as empirically validated behavior-change techniques. Four barriers to implementation were identified as potentially affecting caregivers' implementation: time pressures, reading difficulties, discomfort with reading, and lack of awareness of benefits. These were mapped to 3 TDF domains: intentions, beliefs about capabilities, and skills. In turn, 4 behavior-change techniques were identified as potential vehicles for affecting these domains: reward, feedback, model, and encourage. An ongoing study is described that is determining the effects of these techniques for improving caregivers' implementation of a shared-reading intervention. A description of the steps to identifying barriers to implementation, in conjunction with an ongoing experiment that will explicitly determine whether behavior-change techniques affect these barriers, provides a model for how implementation science can be used to identify and overcome implementation barriers in the treatment of communication disorders.

  18. Integration of different data gap filling techniques to facilitate ...

    EPA Pesticide Factsheets

    Data gap filling techniques are commonly used to predict hazard in the absence of empirical data. The most established techniques are read-across, trend analysis and quantitative structure-activity relationships (QSARs). Toxic equivalency factors (TEFs) are less frequently used data gap filling techniques which are applied to estimate relative potencies for mixtures of chemicals that contribute to an adverse outcome through a common biological target. For example, the TEF approach has been used for dioxin-like effects, comparing individual chemical activity to that of the most toxic dioxin: 2,3,7,8-tetrachlorodibenzo-p-dioxin. The aim of this case study was to determine whether integration of two data gap filling techniques, QSARs and TEFs, improved the predictive outcome for the assessment of a set of polychlorinated biphenyl (PCB) congeners and their mixtures. PCBs are associated with many different adverse effects, including their potential for neurotoxicity, which is the endpoint of interest in this study. The dataset comprised 209 PCB congeners, of which 87 altered in vitro Ca(2+) homeostasis and from which neurotoxic equivalency values (NEQs) were derived. The preliminary objective of this case study was to develop a QSAR model to predict NEQ values for the 122 untested PCB congeners. A decision tree model was developed using the number of position specific chlorine substitutions on the biphenyl scaffold as a fingerprint descriptor. Three different positiona

  19. Predicting the effectiveness of different mulching techniques in reducing post-fire runoff and erosion at plot scale with the RUSLE, MMF and PESERA models.

    PubMed

    Vieira, D C S; Serpa, D; Nunes, J P C; Prats, S A; Neves, R; Keizer, J J

    2018-08-01

    Wildfires have become a recurrent threat for many Mediterranean forest ecosystems. The characteristics of the Mediterranean climate, with its warm and dry summers and mild and wet winters, make this a region prone to wildfire occurrence as well as to post-fire soil erosion. This threat is expected to be aggravated in the future due to climate change and land management practices and planning. The wide recognition of wildfires as a driver for runoff and erosion in burnt forest areas has created a strong demand for model-based tools for predicting the post-fire hydrological and erosion response and, in particular, for predicting the effectiveness of post-fire management operations to mitigate these responses. In this study, the effectiveness of two post-fire treatments (hydromulch and natural pine needle mulch) in reducing post-fire runoff and soil erosion was evaluated against control conditions (i.e. untreated conditions), at different spatial scales. The main objective of this study was to use field data to evaluate the ability of different erosion models: (i) empirical (RUSLE), (ii) semi-empirical (MMF), and (iii) physically-based (PESERA), to predict the hydrological and erosive response as well as the effectiveness of different mulching techniques in fire-affected areas. The results of this study showed that all three models were reasonably able to reproduce the hydrological and erosive processes occurring in burned forest areas. In addition, it was demonstrated that the models can be calibrated at a small spatial scale (0.5 m²) but provide accurate results at greater spatial scales (10 m²). From this work, the RUSLE model seems to be ideal for fast and simple applications (i.e. prioritization of areas-at-risk) mainly due to its simplicity and reduced data requirements. On the other hand, the more complex MMF and PESERA models would be valuable as a base of a possible tool for assessing the risk of water contamination in fire-affected water bodies and for testing different land management scenarios. Copyright © 2018 Elsevier Inc. All rights reserved.
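
    For orientation, the empirical RUSLE form referred to above can be stated compactly (standard textbook notation, not reproduced from the article):

        A = R \cdot K \cdot LS \cdot C \cdot P ,

    where A is the predicted soil loss, R the rainfall-runoff erosivity factor, K the soil erodibility factor, LS the combined slope length and steepness factor, C the cover-management factor, and P the support practice factor. Post-fire and mulching applications typically enter through adjustments to C (and sometimes K), which is consistent with the low data requirements noted in the abstract.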

  20. Predicting mining activity with parallel genetic algorithms

    USGS Publications Warehouse

    Talaie, S.; Leigh, R.; Louis, S.J.; Raines, G.L.; Beyer, H.G.; O'Reilly, U.M.; Banzhaf, Arnold D.; Blum, W.; Bonabeau, C.; Cantu-Paz, E.W.

    2005-01-01

    We explore several different techniques in our quest to improve the overall model performance of a genetic algorithm calibrated probabilistic cellular automata. We use the Kappa statistic to measure correlation between ground truth data and data predicted by the model. Within the genetic algorithm, we introduce a new evaluation function sensitive to spatial correctness and we explore the idea of evolving different rule parameters for different subregions of the land. We reduce the time required to run a simulation from 6 hours to 10 minutes by parallelizing the code and employing a 10-node cluster. Our empirical results suggest that using the spatially sensitive evaluation function does indeed improve the performance of the model and our preliminary results also show that evolving different rule parameters for different regions tends to improve overall model performance. Copyright 2005 ACM.

  1. On the effect of response transformations in sequential parameter optimization.

    PubMed

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicates that the rank and the Box-Cox transformation are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
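
    The following short Python sketch illustrates the kind of response-transformation step described above before fitting a surrogate model; the data, the Gaussian-process surrogate, and all parameter choices are illustrative assumptions, not the SPO implementation used in the paper.

      # Hedged sketch: rank and Box-Cox transformations of noisy, skewed responses
      # before surrogate modelling, in the spirit of the SPO enhancement above.
      import numpy as np
      from scipy.stats import rankdata, boxcox
      from sklearn.gaussian_process import GaussianProcessRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 1.0, size=(40, 2))               # candidate parameter settings
      y = np.exp(3.0 * X[:, 0]) + rng.gamma(1.0, 1.0, 40)   # skewed aggregated responses

      y_rank = rankdata(y)          # rank transformation (symmetrises heavy tails)
      y_bc, lam = boxcox(y)         # Box-Cox transformation (requires positive responses)

      # Fit the surrogate on the transformed responses instead of the raw ones.
      surrogate = GaussianProcessRegressor().fit(X, y_rank)
      print("Box-Cox lambda:", round(lam, 3),
            "| surrogate R^2 on rank-transformed data:", round(surrogate.score(X, y_rank), 3))

    The adaptive procedures mentioned in the abstract revisit the choice among raw, rank-transformed and Box-Cox-transformed responses as the tuning run proceeds; the sketch shows only a single static transformation.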

  2. An empirical model for ocean radar backscatter and its application in inversion routine to eliminate wind speed and direction effects

    NASA Technical Reports Server (NTRS)

    Dome, G. J.; Fung, A. K.; Moore, R. K.

    1977-01-01

    Several regression models were tested to explain the wind direction dependence of the 1975 JONSWAP (Joint North Sea Wave Project) scatterometer data. The models consider the radar backscatter as a harmonic function of wind direction. The constant term accounts for the major effect of wind speed and the sinusoidal terms for the effects of direction. The fundamental harmonic accounts for the difference between upwind and downwind returns, while the second harmonic explains the upwind-crosswind difference. It is shown that a second harmonic model appears to adequately explain the angular variation. A simple inversion technique that uses two orthogonal scattering measurements and eliminates the effect of wind speed and direction is also described. Vertical polarization was shown to be more effective in determining both wind speed and direction than horizontal polarization.
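
    A compact statement of the harmonic regression form described above, with coefficient names assumed here:

        \sigma^{0}(U, \phi) = A_0(U, \theta) + A_1(U, \theta)\cos\phi + A_2(U, \theta)\cos 2\phi ,

    where \phi is the wind direction relative to the radar azimuth, U the wind speed and \theta the incidence angle. A_0 carries the dominant wind-speed dependence, A_1 the upwind-downwind asymmetry and A_2 the upwind-crosswind difference; two orthogonal look directions give two such equations, which is what lets the inversion cancel the direction-dependent terms as described in the abstract.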

  3. Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.

    PubMed

    Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko

    2016-03-01

    In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
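
    A minimal Python sketch of the split-half protocol described above, using synthetic data and only the regression baseline (the fuzzy model itself is not reproduced); variable names and the data-generating process are assumptions.

      # Hedged sketch of the estimation-half / hold-out-half comparison above.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(1)
      n = 664                                    # sample size reported in the abstract
      X = rng.normal(size=(n, 3))                # stand-ins for psychological predictors
      y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.8, size=n)

      idx = rng.permutation(n)                   # random split into two halves
      train, test = idx[: n // 2], idx[n // 2:]

      model = LinearRegression().fit(X[train], y[train])
      print("fit on estimation half:", round(r2_score(y[train], model.predict(X[train])), 3))
      print("fit on hold-out half  :", round(r2_score(y[test], model.predict(X[test])), 3))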

  4. Exploring the creation of learner-centered e-training environments among retail workers: a model development perspective.

    PubMed

    Byun, Sookeun; Mills, Juline E

    2011-01-01

    Current business leaders continue to adopt e-learning technology despite concerns regarding its value. Positing that the effectiveness of e-training depends on how its environment is managed, we argue that a learner-centric approach is necessary in order to achieve workplace training goals. We subsequently develop a theoretical model that is aimed at identifying the key components of learner-centered e-training environments, which serve the function of providing a benchmarked approach for evaluating e-training success. The model was empirically tested using data from an Internet survey of retail industry employees and partial least squares techniques were used for analysis. Based on the findings, this study clarifies what is needed for successful e-training in terms of instructional design, system design, and organizational support.

  5. Spatial structure, sampling design and scale in remotely-sensed imagery of a California savanna woodland

    NASA Technical Reports Server (NTRS)

    Mcgwire, K.; Friedl, M.; Estes, J. E.

    1993-01-01

    This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.
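
    As an illustration of the sampling constraint mentioned above, the short Python sketch below greedily accepts candidate plots only when they lie beyond a chosen minimum distance from all previously accepted plots; coordinates, distances and units are illustrative assumptions.

      # Hedged sketch: enforce a minimum separation between samples to reduce
      # the influence of spatial autocorrelation on a regression sample.
      import numpy as np

      def thin_by_distance(coords: np.ndarray, min_dist: float) -> np.ndarray:
          """Indices of a greedy subsample whose pairwise distances exceed min_dist."""
          kept = []
          for i, p in enumerate(coords):
              if all(np.linalg.norm(p - coords[j]) >= min_dist for j in kept):
                  kept.append(i)
          return np.array(kept)

      rng = np.random.default_rng(5)
      candidates = rng.uniform(0, 1000, size=(300, 2))   # candidate plot coordinates
      selected = thin_by_distance(candidates, min_dist=120.0)
      print(f"{selected.size} of {candidates.shape[0]} candidate plots retained")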

  6. Evolution of Western Mediterranean Sea Surface Temperature between 1985 and 2005: a complementary study in situ, satellite and modelling approaches

    NASA Astrophysics Data System (ADS)

    Troupin, C.; Lenartz, F.; Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Ouberdous, M.; Beckers, J.-M.

    2009-04-01

    In order to evaluate the variability of the sea surface temperature (SST) in the Western Mediterranean Sea between 1985 and 2005, an integrated approach combining geostatistical tools and modelling techniques has been set up. The objectives are: to underline the capability of each tool to capture characteristic phenomena, to compare and assess the quality of their outputs, and to infer an interannual trend from the results. Diva (Data Interpolating Variational Analysis, Brasseur et al. (1996) Deep-Sea Res.) was applied to a collection of in situ data gathered from various sources (World Ocean Database 2005, Hydrobase2, Coriolis and MedAtlas2), from which duplicates and suspect values were removed. This provided monthly gridded fields in the region of interest. Heterogeneous time data coverage was taken into account by computing and removing the annual trend, provided by the Diva detrending tool. A heterogeneous correlation length was applied through an advection constraint. The statistical technique DINEOF (Data Interpolation with Empirical Orthogonal Functions, Alvera-Azc

  7. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural uncertainty among the reference ET models is far more important than model parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm imposed by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
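
    The sketch below illustrates one simplified reading of reliability-weighted averaging: member weights are taken inversely proportional to absolute bias against a reference value, whereas the full REA technique also folds in a model-convergence criterion. All numbers are illustrative assumptions, not values from the study.

      # Hedged sketch of reliability-style ensemble averaging of irrigation
      # requirement estimates from several evapotranspiration models.
      import numpy as np

      estimates = np.array([420.0, 380.0, 455.0, 400.0])   # mm, one per ET model
      reference = 405.0                                     # benchmark for the calibration period

      bias = np.abs(estimates - reference)
      weights = 1.0 / np.maximum(bias, 1e-6)                # guard against zero bias
      weights /= weights.sum()

      print("equal-weight average:", round(float(estimates.mean()), 1), "mm")
      print("reliability-weighted average:", round(float(weights @ estimates), 1), "mm")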

  8. An empirical model of H2O, CO2 and CO coma distributions and production rates for comet 67P/Churyumov-Gerasimenko based on ROSINA/DFMS measurements and AMPS-DSMC simulations

    NASA Astrophysics Data System (ADS)

    Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA Team

    2016-10-01

    We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results which are extracted from simulations at a range of radial distances, rotation phases and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water shifting production from North to South across the inbound equinox while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single point measurements. In this presentation we will present the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.

  9. Theoretical models of parental HIV disclosure: a critical review.

    PubMed

    Qiao, Shan; Li, Xiaoming; Stanton, Bonita

    2013-01-01

    This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]), and the existing studies that could provide empirical support to these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components in these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.

  10. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry

    2009-01-01

    In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.
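
    The abstract does not give the functional form of the fitted model; purely as an illustration of a voltage-dependent short-circuit probability of the kind described, a logistic curve could be written as

        P(\text{short} \mid V) = \frac{1}{1 + \exp[-(\beta_0 + \beta_1 V)]} ,

    where \beta_0 and \beta_1 are hypothetical coefficients that would be estimated from whisker-bridging experiments at different voltages; a risk simulation would then sample from such a curve whenever a bridging event occurs.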

  11. Short Stories via Computers in EFL Classrooms: An Empirical Study for Reading and Writing Skills

    ERIC Educational Resources Information Center

    Yilmaz, Adnan

    2015-01-01

    The present empirical study scrutinizes the use of short stories via computer technologies in teaching and learning English language. The objective of the study is two-fold: to examine how short stories could be used through computer programs in teaching and learning English and to collect data about students' perceptions of this technique via…

  12. Heat Transfer in Adhesively Bonded Honeycomb Core Panels

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran

    2001-01-01

    The Swann and Pittman semi-empirical relationship has been used as a standard in the aerospace industry to predict the effective thermal conductivity of honeycomb core panels. Recent measurements of the effective thermal conductivity of an adhesively bonded titanium honeycomb core panel using three different techniques, two steady-state and one transient radiant step heating method, at four laboratories varied significantly from each other and from the Swann and Pittman predictions. Average differences between the measurements and the predictions varied between 17 and 61% in the temperature range of 300 to 500 K. In order to determine the correct values of the effective thermal conductivity and determine which set of measurements or predictions was most accurate, the combined radiation and conduction heat transfer in the honeycomb core panel was modeled using a finite volume numerical formulation. The transient radiant step heating measurements provided the best agreement with the numerical results. It was found that a modification of the Swann and Pittman semi-empirical relationship which incorporated the facesheets and adhesive layers in the thermal model provided satisfactory results. Finally, a parametric study was conducted to investigate the influence of adhesive thickness and thermal conductivity on the overall heat transfer through the panel.

  13. This Ad is for You: Targeting and the Effect of Alcohol Advertising on Youth Drinking.

    PubMed

    Molloy, Eamon

    2016-02-01

    Endogenous targeting of alcohol advertisements presents a challenge for empirically identifying a causal effect of advertising on drinking. Drinkers prefer a particular media; firms recognize this and target alcohol advertising at these media. This paper overcomes this challenge by utilizing novel data with detailed individual measures of media viewing and alcohol consumption and three separate empirical techniques, which represent significant improvements over previous methods. First, controls for the average audience characteristics of the media an individual views account for attributes of magazines and television programs alcohol firms may consider when deciding where to target advertising. A second specification directly controls for each television program and magazine a person views. The third method exploits variation in advertising exposure due to a 2003 change in an industry-wide rule that governs where firms may advertise. Although the unconditional correlation between advertising and drinking by youth (ages 18-24) is strong, models that include simple controls for targeting imply, at most, a modest advertising effect. Although the coefficients are estimated less precisely, estimates with models including more rigorous controls for targeting indicate no significant effect of advertising on youth drinking. Copyright © 2015 John Wiley & Sons, Ltd.

  14. An empirical study on information spillover effects between the Chinese copper futures market and spot market

    NASA Astrophysics Data System (ADS)

    Liu, Xiangli; Cheng, Siwei; Wang, Shouyang; Hong, Yongmiao; Li, Yi

    2008-02-01

    This study employs a parametric approach based on TGARCH and GARCH models to estimate the VaR of the copper futures market and spot market in China. Considering the short selling mechanism in the futures market, the paper introduces two new notions: upside VaR and extreme upside risk spillover. And downside VaR and upside VaR are examined by using the above approach. Also, we use Kupiec’s [P.H. Kupiec, Techniques for verifying the accuracy of risk measurement models, Journal of Derivatives 3 (1995) 73-84] backtest to test the power of our approaches. In addition, we investigate information spillover effects between the futures market and the spot market by employing a linear Granger causality test, and Granger causality tests in mean, volatility and risk respectively. Moreover, we also investigate the relationship between the futures market and the spot market by using a test based on a kernel function. Empirical results indicate that there exist significant two-way spillovers between the futures market and the spot market, and the spillovers from the futures market to the spot market are much more striking.
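
    The Kupiec backtest cited above is a likelihood-ratio test on the proportion of VaR exceedances; a short Python sketch with a synthetic exceedance series (an assumption for illustration) is given below.

      # Hedged sketch of Kupiec's proportion-of-failures (POF) backtest for VaR.
      # Assumes the exceedance count is strictly between 0 and T.
      import numpy as np
      from scipy.stats import chi2

      def kupiec_pof(exceedances: np.ndarray, p: float):
          """LR test that the observed VaR exceedance rate equals the nominal p."""
          T = exceedances.size
          x = int(exceedances.sum())               # days on which the loss exceeded VaR
          pi_hat = x / T
          ll_null = (T - x) * np.log(1.0 - p) + x * np.log(p)
          ll_alt = (T - x) * np.log(1.0 - pi_hat) + x * np.log(pi_hat)
          lr = -2.0 * (ll_null - ll_alt)           # asymptotically chi-square, 1 d.o.f.
          return lr, 1.0 - chi2.cdf(lr, df=1)

      rng = np.random.default_rng(7)
      hits = rng.binomial(1, 0.06, size=1000)      # synthetic 5% VaR exceeded ~6% of days
      lr_stat, p_value = kupiec_pof(hits, p=0.05)
      print(f"LR = {lr_stat:.3f}, p-value = {p_value:.3f}")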

  15. Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.

    PubMed

    An, Yan; Zou, Zhihong; Zhao, Yanfei

    2015-03-01

    An optimized nonlinear grey Bernoulli model was proposed by using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as an initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations in the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and the particle swarm optimization technique is a good tool to solve parameter optimization problems. Moreover, an optimized model whose initial condition performs well in in-sample simulation may not perform as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
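
    For context, the whitening equation of the nonlinear grey Bernoulli model NGBM(1,1) referred to above is commonly written as

        \frac{d x^{(1)}(t)}{dt} + a\,x^{(1)}(t) = b\,\bigl[x^{(1)}(t)\bigr]^{\gamma} ,

    where x^{(1)} is the first-order accumulated generating sequence, a and b are the development and grey-input coefficients, and \gamma \neq 1 is the Bernoulli exponent; \gamma = 0 recovers the ordinary GM(1,1) model. In the study above, the particle swarm optimizer searches for the parameter values that minimize forecasting error, and the initial condition is chosen by cycling through the items of x^{(1)}.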

  16. Semi-empirical device model for Cu2ZnSn(S,Se)4 solar cells

    NASA Astrophysics Data System (ADS)

    Gokmen, Tayfun; Gunawan, Oki; Mitzi, David B.

    2014-07-01

    We present a device model for the hydrazine processed kesterite Cu2ZnSn(S,Se)4 (CZTSSe) solar cell with a world record efficiency of ˜12.6%. Detailed comparison of the simulation results, performed using wxAMPS software, to the measured device parameters shows that our model captures the vast majority of experimental observations, including VOC, JSC, FF, and efficiency under normal operating conditions, and temperature vs. VOC, sun intensity vs. VOC, and quantum efficiency. Moreover, our model is consistent with material properties derived from various techniques. Interestingly, this model does not have any interface defects/states, suggesting that all the experimentally observed features can be accounted for by the bulk properties of CZTSSe. An electrical (mobility) gap that is smaller than the optical gap is critical to fit the VOC data. These findings point to the importance of tail states in CZTSSe solar cells.

  17. Measuring daily Value-at-Risk of SSEC index: A new approach based on multifractal analysis and extreme value theory

    NASA Astrophysics Data System (ADS)

    Wei, Yu; Chen, Wang; Lin, Yu

    2013-05-01

    Recent studies in the econophysics literature reveal that price variability has fractal and multifractal characteristics not only in developed financial markets, but also in emerging markets. Taking high-frequency intraday quotes of the Shanghai Stock Exchange Component (SSEC) Index as example, this paper proposes a new method to measure daily Value-at-Risk (VaR) by combining the newly introduced multifractal volatility (MFV) model and the extreme value theory (EVT) method. Two VaR backtesting techniques are then employed to compare the performance of the model with that of a group of linear and nonlinear generalized autoregressive conditional heteroskedasticity (GARCH) models. The empirical results show the multifractal nature of price volatility in Chinese stock market. VaR measures based on the multifractal volatility model and EVT method outperform many GARCH-type models at high-risk levels.

  18. Use of observational and model-derived fields and regime model output statistics in mesoscale forecasting

    NASA Technical Reports Server (NTRS)

    Forbes, G. S.; Pielke, R. A.

    1985-01-01

    Various empirical and statistical weather-forecasting studies that utilize stratification by weather regime are described. Objective classification was used to determine the weather regime in some studies. In other cases the weather pattern was determined on the basis of a parameter representing the physical and dynamical processes relevant to the anticipated mesoscale phenomena, such as low-level moisture convergence and convective precipitation, or the Froude number and the occurrence of cold-air damming. For mesoscale phenomena already in existence, new forecasting techniques were developed. The use of cloud models in operational forecasting is discussed. Models to calculate the spatial scales of forcings and the resultant response for mesoscale systems are presented. The use of these models to represent the climatologically most prevalent systems, and to perform case-by-case simulations, is reviewed. Operational implementation of mesoscale data into weather forecasts, using both actual simulation output and model output statistics, is discussed.

  19. Comprehensive modeling of a liquid rocket combustion chamber

    NASA Technical Reports Server (NTRS)

    Liang, P.-Y.; Fisher, S.; Chang, Y. M.

    1985-01-01

    An analytical model for the simulation of detailed three-phase combustion flows inside a liquid rocket combustion chamber is presented. The three phases involved are: a multispecies gaseous phase, an incompressible liquid phase, and a particulate droplet phase. The gas and liquid phases are continuum described in an Eulerian fashion. A two-phase solution capability for these continuum media is obtained through a marriage of the Implicit Continuous Eulerian (ICE) technique and the fractional Volume of Fluid (VOF) free surface description method. On the other hand, the particulate phase is given a discrete treatment and described in a Lagrangian fashion. All three phases are hence treated rigorously. Semi-empirical physical models are used to describe all interphase coupling terms as well as the chemistry among gaseous components. Sample calculations using the model are given. The results show promising application to truly comprehensive modeling of complex liquid-fueled engine systems.

  20. Accuracy Assessment of the Precise Point Positioning for Different Troposphere Models

    NASA Astrophysics Data System (ADS)

    Oguz Selbesoglu, Mahmut; Gurturk, Mert; Soycan, Metin

    2016-04-01

    This study investigates the accuracy and repeatability of the PPP technique at different latitudes using different troposphere delay models. Nine IGS stations were selected between 0° and 80° latitude in the northern and southern hemispheres. Coordinates were obtained for 7 days at 1-hour intervals in summer and winter. First, the coordinates were estimated using the Niell troposphere delay model with and without north and east gradients in order to investigate the contribution of troposphere delay gradients to the positioning. Second, the Saastamoinen model was used to eliminate troposphere path delays, with standard atmosphere parameters extrapolated to each station level. Finally, coordinates were estimated using the RTCA-MOPS empirical troposphere delay model. Results demonstrate that the Niell troposphere delay model with horizontal gradients yields mean rms errors 0.09% and 65% better than those of the Niell model without horizontal gradients and the RTCA-MOPS model, respectively. Mean rms errors for the Saastamoinen model were approximately 4 times larger than those for the Niell troposphere delay model with horizontal gradients.
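
    For context, the Saastamoinen zenith delay used above can be evaluated directly from surface meteorology; the Python sketch below uses the commonly cited constants and illustrative standard-atmosphere inputs, which are assumptions rather than values from the study.

      # Hedged sketch of the Saastamoinen zenith troposphere delay (hydrostatic + wet).
      import math

      def saastamoinen_zenith_delay(p_hpa, temp_k, e_hpa, lat_rad, height_m):
          """Total zenith troposphere delay in metres."""
          f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.28e-6 * height_m
          zhd = 0.0022768 * p_hpa / f                        # hydrostatic component
          zwd = 0.002277 * (1255.0 / temp_k + 0.05) * e_hpa  # wet component
          return zhd + zwd

      ztd = saastamoinen_zenith_delay(p_hpa=1013.25, temp_k=288.15, e_hpa=11.0,
                                      lat_rad=math.radians(45.0), height_m=100.0)
      print(f"zenith troposphere delay ~ {ztd:.3f} m")       # roughly 2.4 m at mid-latitude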

  1. EM Bias-Correction for Ice Thickness and Surface Roughness Retrievals over Rough Deformed Sea Ice

    NASA Astrophysics Data System (ADS)

    Li, L.; Gaiser, P. W.; Allard, R.; Posey, P. G.; Hebert, D. A.; Richter-Menge, J.; Polashenski, C. M.

    2016-12-01

    The very rough ridge sea ice accounts for significant percentage of total ice areas and even larger percentage of total volume. The commonly used Radar altimeter surface detection techniques are empirical in nature and work well only over level/smooth sea ice. Rough sea ice surfaces can modify the return waveforms, resulting in significant Electromagnetic (EM) bias in the estimated surface elevations, and thus large errors in the ice thickness retrievals. To understand and quantify such sea ice surface roughness effects, a combined EM rough surface and volume scattering model was developed to simulate radar returns from the rough sea ice `layer cake' structure. A waveform matching technique was also developed to fit observed waveforms to a physically-based waveform model and subsequently correct the roughness induced EM bias in the estimated freeboard. This new EM Bias Corrected (EMBC) algorithm was able to better retrieve surface elevations and estimate the surface roughness parameter simultaneously. In situ data from multi-instrument airborne and ground campaigns were used to validate the ice thickness and surface roughness retrievals. For the surface roughness retrievals, we applied this EMBC algorithm to co-incident LiDAR/Radar measurements collected during a Cryosat-2 under-flight by the NASA IceBridge missions. Results show that not only does the waveform model fit very well to the measured radar waveform, but also the roughness parameters derived independently from the LiDAR and radar data agree very well for both level and deformed sea ice. For sea ice thickness retrievals, validation based on in-situ data from the coordinated CRREL/NRL field campaign demonstrates that the physically-based EMBC algorithm performs fundamentally better than the empirical algorithm over very rough deformed sea ice, suggesting that sea ice surface roughness effects can be modeled and corrected based solely on the radar return waveforms.

  2. The Relationship between Experience, Education and Teachers' Use of Incidental Focus-on-Form Techniques

    ERIC Educational Resources Information Center

    Mackey, Alison; Polio, Charlene; McDonough, Kim

    2004-01-01

    This paper reports the findings of an empirical study that explored whether ESL teachers' use of incidental focus-on-form techniques was influenced by their level of experience. The results showed that experienced ESL teachers used more incidental focus-on-form techniques than inexperienced teachers. A follow-up study investigated whether…

  3. Modeling and performance analysis using extended fuzzy-timing Petri nets for networked virtual environments.

    PubMed

    Zhou, Y; Murata, T; Defanti, T A

    2000-01-01

    Despite their attractive properties, networked virtual environments (net-VEs) are notoriously difficult to design, implement, and test due to the concurrency, real-time and networking features in these systems. Net-VEs place high quality-of-service (QoS) requirements on the network to maintain natural and real-time interactions among users. The current practice for net-VE design is basically trial and error, empirical, and totally lacks formal methods. This paper proposes to apply a Petri net formal modeling technique to a net-VE, NICE (narrative immersive constructionist/collaborative environment), to predict net-VE performance based on simulation, and to improve that performance. NICE is essentially a network of collaborative virtual reality systems called CAVEs (CAVE automatic virtual environment). First, we introduce extended fuzzy-timing Petri net (EFTN) modeling and analysis techniques. Then, we present EFTN models of the CAVE, NICE, and the transport layer protocol used in NICE, the transmission control protocol (TCP). We show possibility analysis based on the EFTN model for the CAVE. Then, using these models and Design/CPN as the simulation tool, we conducted various simulations to study the real-time behavior, network effects and performance (latencies and jitters) of NICE. Our simulation results are consistent with experimental data.

  4. Distributed Monitoring of the R(sup 2) Statistic for Linear Regression

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Das, Kamalika; Giannella, Chris R.

    2011-01-01

    The problem of monitoring a multivariate linear regression model is relevant in studying the evolving relationship between a set of input variables (features) and one or more dependent target variables. This problem becomes challenging for large-scale data in a distributed computing environment when only a subset of instances is available at individual nodes and the local data changes frequently. Data centralization and periodic model recomputation can add high overhead to tasks like anomaly detection in such dynamic settings. Therefore, the goal is to develop techniques for monitoring and updating the model over the union of all nodes' data in a communication-efficient fashion. Correctness guarantees on such techniques are also often highly desirable, especially in safety-critical application scenarios. In this paper we develop DReMo, a distributed algorithm with very low resource overhead, for monitoring the quality of a regression model in terms of its coefficient of determination (R2 statistic). When the nodes collectively determine that R2 has dropped below a fixed threshold, the linear regression model is recomputed via a network-wide convergecast and the updated model is broadcast back to all nodes. We show empirically, using both synthetic and real data, that our proposed method is highly communication-efficient and scalable, and we also provide theoretical guarantees on correctness.
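    The monitoring idea itself is simple even though the distributed protocol is not: a node checks whether the current model still explains its data by computing R2 and flags the model for recomputation when R2 falls below a threshold. The sketch below illustrates only this local check, with synthetic data and a hypothetical threshold; it is not the DReMo convergecast protocol.

    ```python
    import numpy as np

    def r_squared(y_true, y_pred):
        """Coefficient of determination of predictions y_pred for targets y_true."""
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
        return 1.0 - ss_res / ss_tot

    def needs_recompute(model_coeffs, X_new, y_new, threshold=0.8):
        """Return True if the existing linear model should be recomputed."""
        y_pred = X_new @ model_coeffs
        return r_squared(y_new, y_pred) < threshold

    # Toy example: a model fitted on old data, monitored on newer, drifted data.
    rng = np.random.default_rng(0)
    X_old = np.column_stack([np.ones(200), rng.normal(size=200)])
    y_old = X_old @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=200)
    coeffs, *_ = np.linalg.lstsq(X_old, y_old, rcond=None)

    X_new = np.column_stack([np.ones(50), rng.normal(size=50)])
    y_new = X_new @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=50)
    print(needs_recompute(coeffs, X_new, y_new))  # True -> trigger recomputation
    ```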

  5. Analysis of multiple tank car releases in train accidents.

    PubMed

    Liu, Xiang; Liu, Chang; Hong, Yili

    2017-10-01

    Over two million carloads of hazardous materials are transported by rail in the United States each year. The American railroads use large blocks of tank cars to transport petroleum crude oil and other flammable liquids from production to consumption sites. Unlike roadway transport of hazardous materials, a train accident can potentially result in the derailment and release of multiple tank cars, which may have significant consequences. The prior literature predominantly assumes that the occurrence of multiple tank car releases in a train accident is a series of independent Bernoulli processes, and thus uses the binomial distribution to estimate the total number of tank car releases given the number of tank cars derailing or damaged. This paper shows that the traditional binomial model can, in certain circumstances, mis-estimate multiple tank car release probability by orders of magnitude, thereby significantly affecting railroad safety and risk analysis. To bridge this knowledge gap, this paper proposes a novel, alternative Correlated Binomial (CB) model that accounts for the possible correlations of multiple tank car releases in the same train. We test three distinct correlation structures in the CB model, and find that they all outperform the conventional binomial model based on empirical tank car accident data. The analysis shows that considering tank car release correlations results in a significantly improved fit to the empirical data. Consequently, it is prudent to consider alternative modeling techniques when analyzing the probability of multiple tank car releases in railroad accidents.
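    The abstract does not specify the three correlation structures, so the sketch below simply contrasts the independent binomial with a beta-binomial, one common way of introducing positive correlation (overdispersion) among cars in the same train; the release probability and correlation values are purely illustrative.

    ```python
    from scipy.stats import binom, betabinom

    n_derailed = 10   # tank cars derailed in a hypothetical accident
    p_release = 0.2   # marginal release probability per derailed car
    rho = 0.3         # illustrative intra-train correlation

    # Beta-binomial parameters chosen so the mean matches p_release and the
    # pairwise correlation equals rho (for a beta-binomial, rho = 1 / (a + b + 1)).
    a = p_release * (1.0 / rho - 1.0)
    b = (1.0 - p_release) * (1.0 / rho - 1.0)

    # Compare the probability of k releases under the two models: correlation
    # shifts probability mass toward the extremes (0 releases or many releases).
    for k in (0, 2, 5, 10):
        print(k,
              round(binom.pmf(k, n_derailed, p_release), 4),
              round(betabinom.pmf(k, n_derailed, a, b), 4))
    ```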

  6. Impact of rapid methicillin-resistant Staphylococcus aureus polymerase chain reaction testing on mortality and cost effectiveness in hospitalized patients with bacteraemia: a decision model.

    PubMed

    Brown, Jack; Paladino, Joseph A

    2010-01-01

    Patients hospitalized with Staphylococcus aureus bacteraemia have an unacceptably high mortality rate. Literature available to date has shown that timely selection of the most appropriate antibacterial may reduce mortality. One tool that may help with this selection is a polymerase chain reaction (PCR) assay that distinguishes methicillin (meticillin)-resistant S. aureus (MRSA) from methicillin-susceptible S. aureus (MSSA) in less than 1 hour. To date, no information is available evaluating the impact of this PCR technique on clinical or economic outcomes. To evaluate the effect of a rapid PCR assay on mortality and economics compared with traditional empiric therapy, using a literature-derived model. A literature search for peer-reviewed European (EU) and US publications regarding treatment regimens, outcomes and costs was conducted. Information detailing the rates of infection, as well as the specificity and sensitivity of a rapid PCR assay (Xpert MRSA/SA Blood Culture PCR) were obtained from the peer-reviewed literature. Sensitivity analysis varied the prevalence rate of MRSA from 5% to 80%, while threshold analysis was applied to the cost of the PCR test. Hospital and testing resource consumption were valued with direct medical costs, adjusted to year 2009 values. Adjusted life-years were determined using US and WHO life tables. The cost-effectiveness ratio was defined as the cost per life-year saved. Incremental cost-effectiveness ratios (ICERs) were calculated to determine the additional cost necessary to produce additional effectiveness. All analyses were performed using TreeAge Software (2008). The mean mortality rates were 23% for patients receiving empiric vancomycin subsequently switched to semi-synthetic penicillin (SSP) for MSSA, 36% for patients receiving empiric vancomycin treatment for MRSA, 59% for patients receiving empiric SSP subsequently switched to vancomycin for MRSA and 12% for patients receiving empiric SSP for MSSA. Furthermore, with an MRSA prevalence of 30%, the numbers of patients needed to test in order to save one life were 14 and 16 compared with empiric vancomycin and SSP, respectively. The absolute mortality difference for MRSA prevalence rates of 80% and 5% favoured the PCR testing group at 2% and 10%, respectively, compared with empiric vancomycin and 18% and 1%, respectively, compared with empiric SSP. In the EU, the cost-effectiveness ratios for empiric vancomycin- and SSP-treated patients were Euro 695 and Euro 687 per life-year saved, respectively, compared with Euro 636 per life-year saved for rapid PCR testing. In the US, the cost-effectiveness ratio was $US 898 per life-year saved for empiric vancomycin and $US 820 per life-year saved for rapid PCR testing. ICERs demonstrated dominance of the PCR test in all instances. Threshold analysis revealed that PCR testing would be less costly overall, even at greatly inflated assay prices. Rapid PCR testing for MRSA appears to have the potential to reduce mortality rates while being less costly than empiric therapy in the EU and US, across a wide range of MRSA prevalence rates and PCR test costs.
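    The cost-effectiveness bookkeeping reduces to a simple incremental comparison: divide the extra cost of the new strategy by the extra life-years it yields, and report dominance when the new strategy is both cheaper and more effective. A minimal sketch with made-up numbers (not the study's values):

    ```python
    def icer(cost_new, effect_new, cost_old, effect_old):
        """Incremental cost-effectiveness ratio: extra cost per extra life-year.

        Returns the string 'dominant' when the new strategy is both cheaper and
        more effective, as reported for the PCR strategy in the abstract.
        """
        d_cost = cost_new - cost_old
        d_effect = effect_new - effect_old
        if d_cost < 0 and d_effect > 0:
            return "dominant"
        return d_cost / d_effect

    # Illustrative numbers only.
    print(icer(cost_new=9000.0, effect_new=10.2, cost_old=9500.0, effect_old=9.8))
    ```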

  7. Deriving simple empirical relationships between aerodynamic and optical aerosol measurements and their application

    USDA-ARS?s Scientific Manuscript database

    Different measurement techniques for aerosol characterization and quantification either directly or indirectly measure different aerosol properties (i.e. count, mass, speciation, etc.). Comparisons and combinations of multiple measurement techniques sampling the same aerosol can provide insight into...

  8. Multiband radar characterization of forest biomes

    NASA Technical Reports Server (NTRS)

    Dobson, M. Craig; Ulaby, Fawwaz T.

    1990-01-01

    The utility of airborne and orbital SAR in classification, assessment, and monitoring of forest biomes is investigated through analysis of orbital synthetic aperture radar (SAR) and multifrequency, multipolarized airborne SAR imagery relying on image tone and texture. Preliminary airborne SAR experiments and truck-mounted scatterometer observations demonstrated that the three-dimensional structural complexity of a forest and the various scales of temporal dynamics in the microwave dielectric properties of both trees and the underlying substrate would severely limit empirical or semi-empirical approaches. As a consequence, it became necessary to develop a more profound understanding of the electromagnetic properties of a forest scene and their temporal dynamics through controlled experimentation coupled with theoretical development and verification. The concatenation of various models into a physically based composite model treating the entire forest scene became the major objective of the study, as this is the key to development of a series of robust retrieval algorithms for forest biophysical properties. In order to verify the performance of the component elements of the composite model, a series of controlled laboratory and field experiments were undertaken to: (1) develop techniques to measure the microwave dielectric properties of vegetation; (2) relate the microwave dielectric properties of vegetation to more readily measured characteristics such as density and moisture content; (3) calculate the radar cross-section of leaves and cylinders; (4) improve backscatter models for rough surfaces; and (5) relate attenuation and phase delays during propagation through canopies to canopy properties. These modeling efforts, as validated by the measurements, were incorporated within a larger model known as the Michigan Microwave Canopy Scattering (MIMICS) Model.

  9. A refined methodology for modeling volume quantification performance in CT

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Wilson, Joshua; Samei, Ehsan

    2014-03-01

    The utility of CT lung nodule volume quantification depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.

  10. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy

    PubMed Central

    Wilson, Lydia J; Newhauser, Wayne D

    2015-01-01

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 minutes. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models. PMID:26040833

  11. A simple and fast physics-based analytical method to calculate therapeutic and stray doses from external beam, megavoltage x-ray therapy.

    PubMed

    Jagetic, Lydia J; Newhauser, Wayne D

    2015-06-21

    State-of-the-art radiotherapy treatment planning systems provide reliable estimates of the therapeutic radiation but are known to underestimate or neglect the stray radiation exposures. Most commonly, stray radiation exposures are reconstructed using empirical formulas or lookup tables. The purpose of this study was to develop the basic physics of a model capable of calculating the total absorbed dose both inside and outside of the therapeutic radiation beam for external beam photon therapy. The model was developed using measurements of total absorbed dose in a water-box phantom from a 6 MV medical linear accelerator to calculate dose profiles in both the in-plane and cross-plane direction for a variety of square field sizes and depths in water. The water-box phantom facilitated development of the basic physical aspects of the model. RMS discrepancies between measured and calculated total absorbed dose values in water were less than 9.3% for all fields studied. Computation times for 10 million dose points within a homogeneous phantom were approximately 4 min. These results suggest that the basic physics of the model are sufficiently simple, fast, and accurate to serve as a foundation for a variety of clinical and research applications, some of which may require that the model be extended or simplified based on the needs of the user. A potentially important advantage of a physics-based approach is that the model is more readily adaptable to a wide variety of treatment units and treatment techniques than with empirical models.

  12. Influence of deep sedimentary basins, crustal thinning, attenuation, and topography on regional phases: selected examples from the Eastern Mediterranean and the Caspian Sea regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, P.; Schultz, C.; Larsen, S.

    1997-07-15

    Monitoring of a CTBT will require transportable seismic identification techniques, especially in regions where there is limited data. Unfortunately, most existing techniques are empirical and cannot be used reliably in new regions. Our goal is to help develop transportable regional identification techniques by improving our ability to predict the behavior of regional phases and discriminants in diverse geologic regions and in regions with little or no data. Our approach is to use numerical modeling to understand the physical basis for regional wave propagation phenomena and to use this understanding to help explain the observed behavior of regional phases and discriminants. In this paper, we focus on results from simulations of data in selected regions and investigate the sensitivity of these regional simulations to various features of the crustal structure. Our initial models use teleseismically estimated source locations, mechanisms, and durations, and seismological structures that have been determined by others. We model the Mb 5.9, October 1992, Cairo, Egypt earthquake at a station in Ankara, Turkey (ANTO) using a two-dimensional crustal model consisting of a water layer over a deep sedimentary basin with a thinning crust beneath the basin. Despite the complex tectonics of the Eastern Mediterranean region, we find surprisingly good agreement between the observed data and synthetics based on this relatively smooth two-dimensional model.

  13. A Comparison of Two Methods for Estimating Black Hole Spin in Active Galactic Nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capellupo, Daniel M.; Haggard, Daryl; Wafflard-Fernandez, Gaylor, E-mail: danielc@physics.mcgill.ca

    Angular momentum, or spin, is a fundamental property of black holes (BHs), yet it is much more difficult to estimate than mass or accretion rate (for actively accreting systems). In recent years, high-quality X-ray observations have allowed for detailed measurements of the Fe Kα emission line, where relativistic line broadening allows constraints on the spin parameter (the X-ray reflection method). Another technique uses accretion disk models to fit the AGN continuum emission (the continuum-fitting, or CF, method). Although each technique has model-dependent uncertainties, these are the best empirical tools currently available and should be vetted in systems where both techniques can be applied. A detailed comparison of the two methods is also useful because neither method can be applied to all AGN. The X-ray reflection technique targets mostly local (z ≲ 0.1) systems, while the CF method can be applied at higher redshift, up to and beyond the peak of AGN activity and growth. Here, we apply the CF method to two AGN with X-ray reflection measurements. For both the high-mass AGN, H1821+643, and the Seyfert 1, NGC 3783, we find a range in spin parameter consistent with the X-ray reflection measurements. However, the near-maximal spin favored by the reflection method for NGC 3783 is more probable if we add a disk wind to the model. Refinement of these techniques, together with improved X-ray measurements and tighter BH mass constraints, will permit this comparison in a larger sample of AGN and increase our confidence in these spin estimation techniques.

  14. Laser fringe anemometry for aero engine components

    NASA Technical Reports Server (NTRS)

    Strazisar, A. J.

    1986-01-01

    Advances in flow measurement techniques in turbomachinery continue to be paced by the need to obtain detailed data for use in validating numerical predictions of the flowfield and for use in the development of empirical models for those flow features which cannot be readily modelled numerically. The use of laser anemometry in turbomachinery research has grown over the last 14 years in response to these needs. Based on past applications and current developments, this paper reviews the key issues which are involved when considering the application of laser anemometry to the measurement of turbomachinery flowfields. Aspects of laser fringe anemometer optical design which are applicable to turbomachinery research are briefly reviewed. Application problems which are common to both laser fringe anemometry (LFA) and laser transit anemometry (LTA) such as seed particle injection, optical access to the flowfield, and measurement of rotor rotational position are covered. The efficiency of various data acquisition schemes is analyzed and issues related to data integrity and error estimation are addressed. Real-time data analysis techniques aimed at capturing flow physics in real time are discussed. Finally, data reduction and analysis techniques are discussed and illustrated using examples taken from several LFA turbomachinery applications.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranosian, Antranik Antonio; Schembri, Philip Edward; Luscher, Darby Jon

    The Los Alamos National Laboratory's Weapon Systems Engineering division's Advanced Engineering Analysis group employs material constitutive models of composites for use in simulations of components and assemblies of interest. Experimental characterization, modeling and prediction of the macro-scale (i.e. continuum) behaviors of these composite materials is generally difficult because they exhibit nonlinear behaviors on the meso- (e.g. micro-) and macro-scales. Furthermore, it can be difficult to measure and model the mechanical responses of the individual constituents and constituent interactions in the composites of interest. Current efforts to model such composite materials rely on semi-empirical models in which meso-scale properties are inferred from continuum-level testing and modeling. The proposed approach involves removing the difficulties of interrogating and characterizing micro-scale behaviors by scaling up the problem to work with macro-scale composites, with the intention of developing testing and modeling capabilities that will be applicable to the meso-scale. This approach assumes that the physical mechanisms governing the responses of the composites on the meso-scale are reproducible on the macro-scale. Working on the macro-scale simplifies the quantification of composite constituents and constituent interactions so that efforts can be focused on developing material models and the testing techniques needed for calibration and validation. Other benefits to working with macro-scale composites include the ability to engineer and manufacture, potentially using additive manufacturing techniques, composites that will support the application of advanced measurement techniques such as digital volume correlation and three-dimensional computed tomography imaging, which would aid in observing and quantifying the complex behaviors exhibited in the macro-scale composites of interest. Ultimately, the goal of this new approach is to develop a meso-scale composite modeling framework, applicable to many composite materials, and the corresponding macro-scale testing and test data interrogation techniques to support model calibration.

  16. Theoretical and Empirical Comparisons between Two Models for Continuous Item Responses.

    ERIC Educational Resources Information Center

    Ferrando, Pere J.

    2002-01-01

    Analyzed the relations between two continuous response models intended for typical response items: the linear congeneric model and Samejima's continuous response model (CRM). Illustrated the relations described using an empirical example and assessed the relations through a simulation study. (SLD)

  17. Enabling devices, empowering people: the design and evaluation of Trackball EdgeWrite.

    PubMed

    Wobbrock, Jacob O; Myers, Brad A

    2008-01-01

    To describe the research and development that led to Trackball EdgeWrite, a gestural text entry method that improves desktop input for some people with motor impairments. To compare the character-level version of this technique with a new word-level version. Further, to compare the technique with competitor techniques that use on-screen keyboards. A rapid and iterative design-and-test approach was used to generate working prototypes and elicit quantitative and qualitative feedback from a veteran trackball user. In addition, theoretical modelling based on the Steering law was used to compare competing designs. One result is a refined software artifact, Trackball EdgeWrite, which represents the outcome of this investigation. A theoretical result shows the speed benefit of word-level stroking compared to character-level stroking, which resulted in a 45.0% improvement. Empirical results of a trackball user with a spinal cord injury indicate a peak performance of 8.25 wpm with the character-level version of Trackball EdgeWrite and 12.09 wpm with the word-level version, a 46.5% improvement. Log file analysis of extended real-world text entry shows stroke savings of 43.9% with the word-level version. Both versions of Trackball EdgeWrite were better than on-screen keyboards, particularly regarding user preferences. Follow-up correspondence shows that the veteran trackball user with a spinal cord injury still uses Trackball EdgeWrite on a daily basis 2 years after his initial exposure to the software. Trackball EdgeWrite is a successful new method for desktop text entry and may have further implications for able-bodied users of mobile technologies. Theoretical modelling is useful in combination with empirical testing to explore design alternatives. Single-user lab and field studies can be useful for driving a rapid iterative cycle of innovation and development.

  18. Inter-comparison of interpolated background nitrogen dioxide concentrations across Greater Manchester, UK

    NASA Astrophysics Data System (ADS)

    Lindley, S. J.; Walsh, T.

    There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with its distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at un-sampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedance of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques, and the differences in their characteristics result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper examines these uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case-study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and to annual average concentrations in 2001. The estimated spatial patterns of concentration maxima differ considerably between techniques, reflecting the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced, but these uncertainties vary widely from area to area. In view of the uncertainties associated with classical techniques, research is ongoing to develop alternative methods which should in time help improve the suite of tools available to air quality managers.
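    Inverse distance weighting is one of the standard interpolation techniques available in GIS packages of the kind compared here. A minimal sketch, assuming hypothetical monitoring-site coordinates and NO2 values:

    ```python
    import numpy as np

    def idw(xy_known, values, xy_query, power=2.0):
        """Inverse-distance-weighted estimates at query points from monitored points.

        xy_known -- (n, 2) array of monitoring-site coordinates
        values   -- (n,) array of measured concentrations
        xy_query -- (m, 2) array of grid points to estimate
        """
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)          # avoid division by zero at a station
        w = 1.0 / d ** power
        return (w @ values) / w.sum(axis=1)

    sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    no2 = np.array([40.0, 25.0, 30.0])    # hypothetical NO2 values, ug/m3
    grid = np.array([[0.5, 0.5], [0.9, 0.1]])
    print(idw(sites, no2, grid))
    ```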

  19. An empirical and model study on automobile market in Taiwan

    NASA Astrophysics Data System (ADS)

    Tang, Ji-Ying; Qiu, Rong; Zhou, Yueping; He, Da-Ren

    2006-03-01

    We have carried out an empirical investigation of the automobile market in Taiwan, including the development of the companies' possession rates in the market from 1979 to 2003, the development of the largest possession rate, and so on. A dynamic model describing the competition between the companies is suggested based on the empirical study. In the model, each company is given a long-term competition factor (such as technology, capital and scale) and a short-term competition factor (such as management, service and advertisement). The companies then play games in order to obtain a larger possession rate in the market under certain rules. Numerical simulations based on the model display a developing competition process that agrees qualitatively and quantitatively with our empirical investigation results.

  20. Zipf's law from scale-free geometry.

    PubMed

    Lin, Henry W; Loeb, Abraham

    2016-03-01

    The spatial distribution of people exhibits clustering across a wide range of scales, from household (∼10^-2 km) to continental (∼10^4 km) scales. Empirical data indicate simple power-law scalings for the size distribution of cities (known as Zipf's law) and the population density fluctuations as a function of scale. Using techniques from random field theory and statistical physics, we show that these power laws are fundamentally a consequence of the scale-free spatial clustering of human populations and the fact that humans inhabit a two-dimensional surface. In this sense, the symmetries of scale invariance in two spatial dimensions are intimately connected to urban sociology. We test our theory by empirically measuring the power spectrum of population density fluctuations and show that the logarithmic slope α = 2.04 ± 0.09, in excellent agreement with our theoretical prediction α = 2. The model enables the analytic computation of many new predictions by importing the mathematical formalism of random fields.
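    The slope of a power spectrum can be estimated by ordinary least squares in log-log space. The sketch below does this on synthetic power-law data with a true slope of 2; it illustrates only the fitting step, not the authors' measurement pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    k = np.logspace(-2, 1, 200)                  # wavenumbers (arbitrary units)
    true_alpha = 2.0
    power = k ** (-true_alpha) * rng.lognormal(sigma=0.2, size=k.size)

    # Fit log10 P(k) = c - alpha * log10 k by ordinary least squares.
    slope, intercept = np.polyfit(np.log10(k), np.log10(power), deg=1)
    alpha_hat = -slope
    print(round(alpha_hat, 2))   # close to 2, the Zipf-law prediction
    ```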

  1. Clearance of the cervical spine in clinically unevaluable trauma patients.

    PubMed

    Halpern, Casey H; Milby, Andrew H; Guo, Wensheng; Schuster, James M; Gracias, Vicente H; Stein, Sherman C

    2010-08-15

    Meta-analytic cost-effectiveness analysis. Our goal was to compare the results of different management strategies for trauma patients in whom the cervical spine was not clinically evaluable due to impaired consciousness, endotracheal intubation, or painful distracting injuries. We performed a structured literature review related to cervical spine trauma, radiographic clearance techniques (plain radiography, flexion/extension, CT, and MRI), and complications associated with semirigid collar use. Meta-analytic techniques were used to pool data from multiple sources to calculate pooled mean estimates of the sensitivities and specificities of imaging techniques for cervical spinal clearance, and of the rates of complications from various clearance strategies and from empirical use of semirigid collars. A decision analysis model was used to compare outcomes and costs among these strategies. Slightly more than 7.5% of patients who are clinically unevaluable have cervical spine injuries, and 42% of these injuries are associated with spinal instability. Sensitivity of plain radiography or fluoroscopy for spinal clearance was 57% (95% CI: 57%-60%). Sensitivities for CT and MRI alone were 83% (82%-84%) and 87% (84%-89%), respectively. Complications associated with collar use ranged from 1.3% (2 days) to 7.1% (10 days) but were usually minor and short-lived. Quadriplegia resulting from spinal instability missed by a clearance test had enormous impacts on longevity, quality of life, and costs. These impacts overshadowed the effects of prolonged collar application, even when the incidence of quadriplegia was extremely low. As currently used, neuroimaging studies for cervical spinal clearance in clinically unevaluable patients are not cost-effective compared with empirical immobilization in a semirigid collar.

  2. Calculation of lava discharge rates during effusive eruptions: an empirical approach using MODIS Middle InfraRed data

    NASA Astrophysics Data System (ADS)

    Coppola, Diego; Laiolo, Marco; Cigolini, Corrado

    2016-04-01

    The rate at which lava is erupted is a crucial parameter to be monitored during any volcanic eruption. However, its accurate and systematic measurement throughout the whole duration of an event remains a big challenge, even for volcanologists working on highly studied and well-monitored volcanoes. The thermal approach (also known as the thermal proxy) is currently one of the most promising techniques adopted during effusive eruptions, since it allows Time-Averaged lava Discharge Rates (TADR) to be estimated from remotely sensed infrared data acquired several times per day. However, due to the complexity of the physics behind the effusive phenomenon and the difficulty of obtaining field validations, the application of the thermal proxy is still debated and limited to only a few volcanoes. Here we present the analysis of MODIS Middle InfraRed data collected during several distinct eruptions, in order to show how an alternative, empirical method (called the radiant density approach; Coppola et al., 2013) permits estimating TADRs over a wide range of emplacement styles and lava compositions. We suggest that the simplicity of this empirical approach allows its rapid application during eruptive crises, and provides the basis for more complex models based on the cooling and spreading processes of the active lava bodies.
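    In the radiant density approach, the time-averaged discharge rate is obtained by dividing the satellite-derived volcanic radiative power by a single composition-dependent coefficient. A minimal sketch, with an illustrative coefficient value rather than one taken from the paper:

    ```python
    def tadr_from_vrp(vrp_watts, radiant_density_j_per_m3):
        """Time-averaged lava discharge rate (m3/s) from radiative power (W).

        radiant_density_j_per_m3 is an empirical, composition-dependent
        coefficient; the value used below is purely illustrative.
        """
        return vrp_watts / radiant_density_j_per_m3

    vrp = 2.0e9      # hypothetical MODIS-derived volcanic radiative power, W
    c_rad = 2.5e8    # illustrative radiant density, J/m3
    print(round(tadr_from_vrp(vrp, c_rad), 2), "m3/s")   # 8.0 m3/s
    ```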

  3. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from an MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing interrelated outcomes with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in mean and covariance parameter estimates that differ from the maximum likelihood estimates generated by an MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes predictors from an MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
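    A minimal sketch of the "separate models" second stage, assuming hypothetical longitudinal data with two outcomes: fit a random-intercept mixed model per outcome with statsmodels, extract the empirical Bayes (BLUP) intercepts, and correlate them.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical longitudinal data: two outcomes measured repeatedly per subject.
    rng = np.random.default_rng(2)
    n_subj, n_obs = 40, 6
    subj = np.repeat(np.arange(n_subj), n_obs)
    time = np.tile(np.arange(n_obs), n_subj)
    u = rng.normal(size=n_subj)                     # shared latent subject effect
    y1 = 1.0 + 0.5 * time + u[subj] + rng.normal(scale=0.5, size=subj.size)
    y2 = 2.0 - 0.2 * time + 0.8 * u[subj] + rng.normal(scale=0.5, size=subj.size)
    df = pd.DataFrame({"subject": subj, "time": time, "y1": y1, "y2": y2})

    # Fit a separate random-intercept model per outcome, then associate the
    # empirical Bayes (BLUP) intercepts in a second stage.
    eb = {}
    for outcome in ("y1", "y2"):
        fit = smf.mixedlm(f"{outcome} ~ time", df, groups=df["subject"]).fit()
        eb[outcome] = np.array([re.iloc[0] for re in fit.random_effects.values()])

    print(round(np.corrcoef(eb["y1"], eb["y2"])[0, 1], 2))  # second-stage correlation
    ```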

  4. A chaotic model for the plague epidemic that has occurred in Bombay at the end of the 19th century

    NASA Astrophysics Data System (ADS)

    Mangiarotti, Sylvain

    2015-04-01

    The plague epidemic that occurred in Bombay at the end of the 19th century was detected in 1896. One year before, an Advisory Committee had been appointed by the Secretary of State for India, the Royal Society, and the Lister Institute. This Committee made numerous investigations and gathered a large panel of data, including the numbers of people attacked by and dying from the plague, records of rat and flea populations, as well as meteorological records of temperature and humidity [1]. The global modeling technique [2] aims to obtain low-dimensional models able to simulate the observed cycles from time series. As far as we know, this technique has been applied to only one case of epidemiological analysis (the whooping cough infection), based on a discrete formulation [3]. In the present work, the continuous-time formulation of this technique is used to analyze the time evolution of the plague epidemic from this data set. One low-dimensional model (three variables) is obtained, exhibiting a period-5 limit cycle. A chaotic behavior could be derived from this model by tuning the model parameters. It provides a strong argument for a dynamical behavior that can be approximated by low-dimensional deterministic equations. This model also provides an empirical argument for chaos in epidemics. [1] Verjbitski D. T., Bannerman W. B. & Kápadiâ R. T., 1908. Reports on Plague Investigations in India (May, 1908), The Journal of Hygiene, 8(2), 161-308. [2] Mangiarotti S., Coudret R., Drapeau L. & Jarlan L., 2012. Polynomial search and Global modelling: two algorithms for modeling chaos. Physical Review E, 86(4), 046205. [3] Boudjema G. & Cazelles B., 2003. Extraction of nonlinear dynamics from short and noisy time series. Chaos, Solitons and Fractals, 12, 2051-2069.

  5. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    NASA Astrophysics Data System (ADS)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

    Hydrological models can be classified as physical, mathematical or empirical. The latter class uses mathematical equations that are independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. However, conventional empirical models are still used as tools for hydrological analysis through probabilistic approaches. In many regions of the world, watersheds are ungauged. This is true even in developed countries, where the gauging network has continued to decline as a result of the lack of human and financial resources. The obvious lack of data in these watersheds makes it impossible to apply some basic empirical models for daily forecasting, so a combination of rainfall-runoff models is needed with which it is possible to generate data and use them to estimate the flow. The estimation of design floods illustrates the difficulties facing the hydrologist in constructing a standard empirical model in basins where hydrological information is scarce. A climate-hydrological model based on frequency analysis was constructed to estimate the design flood in the Anseghmir catchments, Morocco. This complex model was chosen for its ability to be applied in watersheds where hydrological information is insufficient. The method was found to be a powerful tool for estimating the design flood of the watershed as well as other hydrological elements (runoff, water volumes, ...). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.
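    The frequency-analysis step at the core of such design-flood estimation can be illustrated with a Gumbel (EV1) fit to annual maxima and quantiles for chosen return periods. The sketch below uses synthetic annual maxima and is not the climate-hydrological model described above:

    ```python
    import numpy as np
    from scipy.stats import gumbel_r

    # Hypothetical annual maximum discharges (m3/s) for a gauged analogue basin.
    rng = np.random.default_rng(3)
    annual_max = gumbel_r.rvs(loc=120.0, scale=35.0, size=40, random_state=rng)

    # Fit the Gumbel distribution and read off design floods for return periods T,
    # i.e. the (1 - 1/T) quantiles of the annual-maximum distribution.
    loc, scale = gumbel_r.fit(annual_max)
    for T in (10, 50, 100):
        q = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
        print(f"T = {T:3d} yr  design flood ~ {q:6.1f} m3/s")
    ```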

  6. Modelling soil erosion at European scale: towards harmonization and reproducibility

    NASA Astrophysics Data System (ADS)

    Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.

    2015-02-01

    Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite the efforts, the prediction value of existing models is still limited, especially at regional and continental scale, because a systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is here proposed. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for a better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
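    The extended model described above builds on the multiplicative RUSLE structure A = R * K * LS * C * P. A minimal sketch with illustrative grid-cell factor values (the ensemble erosivity and stoniness extensions are not reproduced here):

    ```python
    def rusle_soil_loss(R, K, LS, C, P):
        """Mean annual soil loss A (t ha-1 yr-1) from the multiplicative RUSLE form.

        R  -- rainfall-runoff erosivity (MJ mm ha-1 h-1 yr-1)
        K  -- soil erodibility (t ha h ha-1 MJ-1 mm-1)
        LS -- slope length and steepness factor (dimensionless)
        C  -- cover-management factor (dimensionless)
        P  -- support-practice factor (dimensionless)
        """
        return R * K * LS * C * P

    # Illustrative grid-cell values only.
    print(round(rusle_soil_loss(R=700.0, K=0.03, LS=1.8, C=0.12, P=1.0), 2))
    ```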

  7. The Efficacy of Consulting Practicum in Enhancing Students' Readiness for Professional Career in Management Information Systems: An Empirical Analysis

    ERIC Educational Resources Information Center

    Akpan, Ikpe Justice

    2016-01-01

    Consulting practicum (CP) is a form of experiential learning technique to prepare students for professional careers. While CP has become a popular way to help students acquire the essential practical skills and experience to enhance career readiness and ensure a smooth transition from college to employment, there is a lack of empirical studies…

  8. Regional Morphology Analysis Package (RMAP): Empirical Orthogonal Function Analysis, Background and Examples

    DTIC Science & Technology

    2007-10-01

    Reference fragments from the report: Complex principal component analysis: Theory and examples (Journal of Climate and Applied Meteorology 23: 1660-1673, 1984); Hotelling, H. (1933); Von Storch, H., and A. Navarra (1995), Analysis of Climate Variability: Applications of Statistical Techniques, Berlin. ERDC TN-SWWRP-07-9, October 2007: Regional Morphology Analysis Package (RMAP): Empirical Orthogonal Function Analysis, Background and Examples.

  9. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    PubMed

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.
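    The vector fitting algorithm itself iterates pole relocation and residue identification, which is beyond a short sketch. The general idea of fitting frequency-domain data to a rational (Bode-style) model can, however, be illustrated with a linearized least-squares fit of a first-order transfer function to synthetic frequency-response samples:

    ```python
    import numpy as np

    # Synthetic frequency-response data for H(s) = 2 / (1 + 0.05 s), s = j*omega,
    # with a little measurement noise added.
    omega = np.logspace(0, 3, 60)
    s = 1j * omega
    H = 2.0 / (1.0 + 0.05 * s)
    rng = np.random.default_rng(4)
    H = H + rng.normal(scale=0.01, size=H.size) + 1j * rng.normal(scale=0.01, size=H.size)

    # Fit H(s) ~ (b0 + b1*s) / (1 + a1*s) by linearizing:
    #   b0 + b1*s - a1*s*H(s) = H(s)
    A = np.column_stack([np.ones_like(s), s, -s * H])
    A_ri = np.vstack([A.real, A.imag])          # stack real and imaginary parts
    rhs = np.concatenate([H.real, H.imag])
    b0, b1, a1 = np.linalg.lstsq(A_ri, rhs, rcond=None)[0]
    print(round(b0, 3), round(b1, 4), round(a1, 4))   # expect about 2, 0, 0.05
    ```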

  10. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    PubMed Central

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554

  11. The Study of Biogenetic Organic Compound Emissions and Ozone in a Subtropical Bamboo Forest

    NASA Astrophysics Data System (ADS)

    Bai, Jianhui; Guenther, Alex; Turnipseed, Andrew; Duhl, Tiffany; Duhl, Nanhao; van der A, Ronald; Yu, Shuquan; Wang, Bin

    2016-08-01

    Emissions of Biogenic Volatile Organic Compounds (BVOCs), Photosynthetically Active Radiation (PAR), and meteorological parameters were measured in several ecosystems in China. A Relaxed Eddy Accumulation system and an enclosure technique were used to measure BVOC emissions. Clear diurnal and seasonal variations of BVOC emissions were found. Empirical models of BVOC emissions were developed, and the estimated BVOC emissions were in agreement with observations. Growing-season BVOC emissions in the Inner Mongolia grassland, the Changbai Mountain temperate forest, and the Lin'an subtropical bamboo forest were estimated, and the emission factors of these ecosystems were calculated.
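    The abstract does not give the form of the empirical emission models, so as an illustration the sketch below uses the widely applied exponential temperature response for monoterpene-type BVOC emissions, E = Es * exp(beta * (T - Ts)), with standard default parameters rather than the authors' fitted values:

    ```python
    import math

    def monoterpene_emission(e_standard, temp_k, beta=0.09, t_standard=303.15):
        """Temperature-adjusted emission rate using the exponential response
        E = E_s * exp(beta * (T - T_s)) commonly used for monoterpene-type BVOCs.

        e_standard -- emission rate at the standard temperature (e.g. ug g-1 h-1)
        temp_k     -- leaf/air temperature in kelvin
        """
        return e_standard * math.exp(beta * (temp_k - t_standard))

    # Illustrative values only: a standardized rate of 1.5 ug g-1 h-1 at 30 C,
    # evaluated at 25 C and 35 C.
    for t_c in (25.0, 35.0):
        print(t_c, round(monoterpene_emission(1.5, t_c + 273.15), 2))
    ```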

  12. Optical absorption spectra and g factor of MgO:Mn2+ explored by ab initio and semi-empirical methods

    NASA Astrophysics Data System (ADS)

    Andreici Eftimie, E.-L.; Avram, C. N.; Brik, M. G.; Avram, N. M.

    2018-02-01

    In this paper we present a methodology for calculating the optical absorption spectra, ligand field parameters and g factor for Mn2+ (3d5) ions doped in an MgO host crystal. The proposed technique combines two methods: ab initio multireference (MR) calculations and the semi-empirical ligand field (LF) approach in the framework of the exchange charge model (ECM). Both methods are applied to the [MnO6]10- cluster embedded in an extended point charge field of host matrix ligands based on the Gellé-Lepetit procedure. The first step of the investigation was the full optimization of the cubic structure of the perfect MgO crystal, followed by the structural optimization of the doped MgO:Mn2+ system, using periodic density functional theory (DFT). Ab initio MR wave function approaches, such as the complete active space self-consistent field (CASSCF), N-electron valence second-order perturbation theory (NEVPT2) and spectroscopy-oriented configuration interaction (SORCI), are used for the calculations. Scalar relativistic effects have also been taken into account through the second-order Douglas-Kroll-Hess (DKH2) procedure. Ab initio ligand field theory (AILFT) allows all LF parameters and the spin-orbit coupling constant to be extracted from such calculations. In addition, the ECM of ligand field theory (LFT) has been used for modelling the optical absorption spectra, and perturbation theory (PT) was employed for the g factor calculation in the semi-empirical LFT. The results of each of the aforementioned types of calculations are discussed, and comparisons with the experimental results show reasonable agreement, which justifies this new methodology based on the simultaneous use of both methods. This study establishes fundamental principles for the further modelling of larger embedded cluster models of doped metal oxides.

  13. Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates.

    DTIC Science & Technology

    1984-03-01

    Research Report CCS 481, Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates, by P. Brockett and B. Golany, March 1984, Center for... The tests are applied to U.S. dollar to Japanese yen foreign exchange rates; conclusions and discussion are given in Section VI.

  14. Micromechanics Modeling of Fracture in Nanocrystalline Metals

    NASA Technical Reports Server (NTRS)

    Glaessgen, E. H.; Piascik, R. S.; Raju, I. S.; Harris, C. E.

    2002-01-01

    Nanocrystalline metals have very high theoretical strength, but suffer from a lack of ductility and toughness. Therefore, it is critical to understand the mechanisms of deformation and fracture of these materials before their full potential can be achieved. Because classical fracture mechanics is based on the comparison of computed fracture parameters, such as stress intensity factors, to their empirically determined critical values, it does not adequately describe the fundamental physics of fracture required to predict the behavior of nanocrystalline metals. Thus, micromechanics-based techniques must be considered to quantify the physical processes of deformation and fracture within nanocrystalline metals. This paper discusses fundamental physics-based modeling strategies that may be useful for the prediction of deformation, crack formation, and crack growth within nanocrystalline metals.

  15. Neutron scattering cross section measurements for 56Fe

    DOE PAGES

    Ramirez, A. P. D.; Vanhoy, J. R.; Hicks, S. F.; ...

    2017-06-09

    Elastic and inelastic differential cross sections for neutron scattering from 56Fe have been measured for several incident energies from 1.30 to 7.96 MeV at the University of Kentucky Accelerator Laboratory. Scattered neutrons were detected with a C6D6 liquid scintillation detector using pulse-shape discrimination and time-of-flight techniques. The deduced cross sections have been compared with previously reported data, predictions from the evaluation databases ENDF, JENDL, and JEFF, and theoretical calculations performed with different optical model potentials using the TALYS and EMPIRE nuclear reaction codes. The coupled-channel calculations based on the vibrational and soft-rotor models are found to describe the experimental (n,n0) and (n,n1) cross sections well.

  16. Advances in analytical chemistry

    NASA Technical Reports Server (NTRS)

    Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.

    1991-01-01

    Implementation of computer programs based on multivariate statistical algorithms makes possible obtaining reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.

  17. Physico-Chemical Dynamics of Nanoparticle Formation during Laser Decontamination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, M.D.

    2005-06-01

    Laser-ablation-based decontamination is a new and effective approach for the simultaneous removal and characterization of contaminants from surfaces (e.g., building interior and exterior walls, ground floors, etc.). The scientific objectives of this research are to: (1) characterize particulate matter generated during laser-ablation-based decontamination, (2) develop a technique for simultaneous cleaning and spectroscopic verification, and (3) develop an empirical model for predicting particle generation over the size range from 10 nm to tens of micrometers. This research project provides fundamental data obtained through a systematic study of the particle generation mechanism, and also provides a working model for prediction of particle generation so that an effective operational strategy can be devised to facilitate worker protection.

  18. Neutron scattering cross section measurements for 56Fe

    NASA Astrophysics Data System (ADS)

    Ramirez, A. P. D.; Vanhoy, J. R.; Hicks, S. F.; McEllistrem, M. T.; Peters, E. E.; Mukhopadhyay, S.; Harrison, T. D.; Howard, T. J.; Jackson, D. T.; Lenzen, P. D.; Nguyen, T. D.; Pecha, R. L.; Rice, B. G.; Thompson, B. K.; Yates, S. W.

    2017-06-01

    Elastic and inelastic differential cross sections for neutron scattering from 56Fe have been measured for several incident energies from 1.30 to 7.96 MeV at the University of Kentucky Accelerator Laboratory. Scattered neutrons were detected with a C6D6 liquid scintillation detector using pulse-shape discrimination and time-of-flight techniques. The deduced cross sections have been compared with previously reported data, predictions from the evaluation databases ENDF, JENDL, and JEFF, and theoretical calculations performed with different optical model potentials using the TALYS and EMPIRE nuclear reaction codes. The coupled-channel calculations based on the vibrational and soft-rotor models are found to describe the experimental (n,n0) and (n,n1) cross sections well.

  19. Correction techniques for depth errors with stereo three-dimensional graphic displays

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Holden, Anthony; Williams, Steven P.

    1992-01-01

    Three-dimensional (3-D), 'real-world' pictorial displays that incorporate 'true' depth cues via stereopsis techniques have proved effective for displaying complex information in a natural way to enhance situational awareness and to improve pilot/vehicle performance. In such displays, the display designer must map the depths in the real world to the depths available with the stereo display system. However, empirical data have shown that the human subject does not perceive the information at exactly the depth at which it is mathematically placed. Head movements can also seriously distort the depth information that is embedded in stereo 3-D displays because the transformations used in mapping the visual scene to the depth-viewing volume (DVV) depend intrinsically on the viewer location. The goal of this research was to provide two correction techniques: the first technique corrects the original visual-scene-to-DVV mapping based on human perception errors, and the second (which is based on head-positioning sensor input data) corrects for errors induced by head movements. Empirical data are presented to validate both correction techniques. A combination of the two correction techniques effectively eliminates the distortions of depth information embedded in stereo 3-D displays.
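
    A hedged sketch of the two-stage idea (head-position re-projection followed by an assumed linear perceptual-error correction); the function names and the linear form are illustrative assumptions, not the paper's fitted mappings:

      # Illustrative sketch only: the correction forms below are hypothetical
      # stand-ins for empirically derived mappings.
      import numpy as np

      def head_corrected_depth(point_xyz, head_offset_xyz):
          # Shift the point into the tracked-viewer frame before the
          # scene-to-DVV mapping, so the mapping reflects the actual head position.
          shifted = np.asarray(point_xyz, float) - np.asarray(head_offset_xyz, float)
          return shifted[2]

      def perceptually_corrected_depth(intended_depth, a=0.9, b=0.05):
          # Invert an assumed linear model perceived = a*commanded + b so that
          # the commanded depth is perceived at the intended location.
          return (intended_depth - b) / a

      z = head_corrected_depth([0.0, 0.0, 2.0], [0.02, 0.0, 0.05])
      print(perceptually_corrected_depth(z))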

  20. Evaluation of Conceptual Frameworks Applicable to the Study of Isolation Precautions Effectiveness

    PubMed Central

    Crawford, Catherine; Shang, Jingjing

    2015-01-01

    Aims: A discussion of conceptual frameworks applicable to the study of isolation precautions effectiveness, evaluated according to Fawcett and DeSanto-Madeya's (2013) technique, and of their relative merits and drawbacks for this purpose. Background: Isolation precautions are recommended to control infectious diseases with high morbidity and mortality, but their effectiveness is not established due to numerous methodological challenges. These challenges, such as identifying empirical indicators and refining operational definitions, could be alleviated through use of an appropriate conceptual framework. Design: Discussion paper. Data Sources: In mid-April 2014, the primary author searched five electronic scientific literature databases for conceptual frameworks applicable to the study of isolation precautions, without limiting searches by publication date. Implications for Nursing: By reviewing promising conceptual frameworks to support isolation precautions effectiveness research, this paper exemplifies the process of choosing an appropriate conceptual framework for empirical research. Hence, researchers may build on these analyses to improve the design of empirical research in multiple disciplines, which may lead to improved research and practice. Conclusion: Three frameworks were reviewed: the epidemiologic triad of disease, Donabedian's healthcare quality framework and the Quality Health Outcomes model. Each has been used in nursing research to evaluate health outcomes and contains concepts relevant to nursing domains. Which framework is most useful likely depends on whether the study question necessitates testing multiple interventions, concerns pathogen-specific characteristics and yields cross-sectional or longitudinal data. The Quality Health Outcomes model may be slightly preferred as it assumes reciprocal relationships, supports multi-level analysis and is sensitive to cultural inputs. PMID:26179813

  1. Using remote sensing and GIS techniques to estimate discharge and recharge fluxes for the Death Valley regional groundwater flow system, USA

    USGS Publications Warehouse

    D'Agnese, F. A.; Faunt, C.C.; Keith, Turner A.

    1996-01-01

    The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
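
    The area-weighted discharge bookkeeping described above can be sketched as follows; the land-cover classes, areas and evapotranspiration rates are invented placeholders:

      # Hedged sketch: discharge volume per class = mapped area x assumed ET rate.
      et_rate_m_per_yr = {"wetland": 1.20, "phreatophyte": 0.45, "wet_playa": 0.15}
      area_m2 = {"wetland": 2.0e6, "phreatophyte": 8.5e6, "wet_playa": 3.0e6}

      discharge_m3_per_yr = {
          cls: area_m2[cls] * et_rate_m_per_yr[cls] for cls in et_rate_m_per_yr
      }
      total = sum(discharge_m3_per_yr.values())
      print(discharge_m3_per_yr, total)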

  2. A Factor Analytic Model of Drug-Related Behavior in Adolescence and Its Impact on Arrests at Multiple Stages of the Life Course

    PubMed Central

    2016-01-01

    Objectives Recognizing the inherent variability of drug-related behaviors, this study develops an empirically-driven and holistic model of drug-related behavior during adolescence using factor analysis to simultaneously model multiple drug behaviors. Methods The factor analytic model uncovers latent dimensions of drug-related behaviors, rather than patterns of individuals. These latent dimensions are treated as empirical typologies which are then used to predict an individual’s number of arrests accrued at multiple phases of the life course. The data are robust enough to simultaneously capture drug behavior measures typically considered in isolation in the literature, and to allow for behavior to change and evolve over the period of adolescence. Results Results show that factor analysis is capable of developing highly descriptive patterns of drug offending, and that these patterns have great utility in predicting arrests. Results further demonstrate that while drug behavior patterns are predictive of arrests at the end of adolescence for both males and females, the impacts on arrests are longer lasting for females. Conclusions The various facets of drug behaviors have been a long-time concern of criminological research. However, the ability to model multiple behaviors simultaneously is often constrained by data that do not measure the constructs fully. Factor analysis is shown to be a useful technique for modeling adolescent drug involvement patterns in a way that accounts for the multitude and variability of possible behaviors, and in predicting future negative life outcomes, such as arrests. PMID:28435183
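
    A minimal sketch of the analysis pipeline, assuming synthetic indicators, three latent factors and a Poisson count model as stand-ins for the study's exact specification:

      # Illustrative only: factor scores from simulated drug-behavior indicators
      # are used as predictors of arrest counts.
      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from sklearn.linear_model import PoissonRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 12))                                # 12 behavior indicators
      arrests = rng.poisson(np.exp(0.2 * X[:, 0] - 0.1 * X[:, 1]))  # placeholder outcome

      scores = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)
      model = PoissonRegressor().fit(scores, arrests)
      print(model.coef_)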

  3. Microwave measurement and modeling of the dielectric properties of vegetation

    NASA Astrophysics Data System (ADS)

    Shrestha, Bijay Lal

    Some of the important applications of microwaves in the industrial, scientific and medical sectors include processing and treatment of various materials, and determining their physical properties. The dielectric properties of the materials of interest are paramount irrespective of the application; hence, a wide range of materials covering food products, building materials, ores and fuels, and biological materials has been investigated for their dielectric properties. However, very few studies have addressed the measurement of dielectric properties of green vegetation, including commercially important crops such as alfalfa. Because of its high nutritional value, there is a large demand for this plant and its processed products in national and international markets, and an investigation into the possibility of applying microwaves to improve both the net yield and quality of the crop can be beneficial. Therefore, a dielectric measurement system based on the probe reflection technique was set up to measure the dielectric properties of green plants over a frequency range from 300 MHz to 18 GHz, moisture contents from 12% to 79% (wet basis), and temperatures from -15°C to 30°C. Dielectric properties of chopped alfalfa were measured with this system over the frequency range of 300 MHz to 18 GHz, moisture contents from 11.5% to 73% (wet basis), and densities from 139 kg m-3 to 716 kg m-3 at 23°C. The system accuracy was found to be +/-6% and +/-10% in measuring the dielectric constant and loss factor, respectively. Empirical, semi-empirical and theoretical models that require only moisture content and operating frequency were determined to represent the dielectric properties of both leaves and stems of alfalfa at 22°C. The empirical models fitted the measured dielectric data extremely well: the root mean square error (RMSE) and coefficient of determination (r2) for the dielectric constant and loss factor of leaves were 0.89 and 0.99, and 0.52 and 0.99, respectively; the RMSE and r2 values for the dielectric constant and loss factor of stems were 0.89 and 0.99, and 0.77 and 0.99, respectively. Among the semi-empirical and theoretical models, the power law model performed better (RMSE = 1.78, r2 = 0.96) in modeling the dielectric constant of leaves, and the Debye-Cole-Cole model was more appropriate (RMSE = 1.23, r2 = 0.95) for the loss factor. For stems, the Debye-Cole-Cole models (developed on the assumption that they do not shrink as they dry) were found to be the best models to calculate the dielectric constant (RMSE = 0.53, r2 = 0.99) and the dielectric loss factor (RMSE = 0.65, r2 = 0.95). (Abstract shortened by UMI.)
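
    For illustration, a generic Cole-Cole relaxation fit with the same goodness-of-fit statistics can be sketched as below; the parameter values and synthetic data are placeholders, not the reported alfalfa coefficients:

      # Hedged sketch: fit a Cole-Cole model to synthetic dielectric data and
      # report RMSE and r^2.
      import numpy as np
      from scipy.optimize import curve_fit

      def cole_cole(f, eps_inf, d_eps, tau, alpha):
          w = 2 * np.pi * f
          eps = eps_inf + d_eps / (1 + (1j * w * tau) ** (1 - alpha))
          return np.concatenate([eps.real, -eps.imag])   # [dielectric constant, loss factor]

      f = np.logspace(8.5, 10.25, 40)                    # roughly 0.3-18 GHz
      true = cole_cole(f, 5.0, 55.0, 9e-12, 0.15)
      meas = true + np.random.default_rng(1).normal(0, 0.3, true.size)

      popt, _ = curve_fit(cole_cole, f, meas, p0=[5, 50, 1e-11, 0.1])
      pred = cole_cole(f, *popt)
      rmse = np.sqrt(np.mean((meas - pred) ** 2))
      r2 = 1 - np.sum((meas - pred) ** 2) / np.sum((meas - meas.mean()) ** 2)
      print(popt, rmse, r2)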

  4. Predictive modeling of hazardous waste landfill total above-ground biomass using passive optical and LIDAR remotely sensed data

    NASA Astrophysics Data System (ADS)

    Hadley, Brian Christopher

    This dissertation assessed remotely sensed data and geospatial modeling technique(s) to map the spatial distribution of total above-ground biomass present on the surface of the Savannah River National Laboratory's (SRNL) Mixed Waste Management Facility (MWMF) hazardous waste landfill. Ordinary least squares (OLS) regression, regression kriging, and tree-structured regression were employed to model the empirical relationship between in-situ measured Bahia (Paspalum notatum Flugge) and Centipede [Eremochloa ophiuroides (Munro) Hack.] grass biomass against an assortment of explanatory variables extracted from fine spatial resolution passive optical and LIDAR remotely sensed data. Explanatory variables included: (1) discrete channels of visible, near-infrared (NIR), and short-wave infrared (SWIR) reflectance, (2) spectral vegetation indices (SVI), (3) spectral mixture analysis (SMA) modeled fractions, (4) narrow-band derivative-based vegetation indices, and (5) LIDAR derived topographic variables (i.e. elevation, slope, and aspect). Results showed that a linear combination of the first- (1DZ_DGVI), second- (2DZ_DGVI), and third-derivative of green vegetation indices (3DZ_DGVI) calculated from hyperspectral data recorded over the 400--960 nm wavelengths of the electromagnetic spectrum explained the largest percentage of statistical variation (R2 = 0.5184) in the total above-ground biomass measurements. In general, the topographic variables did not correlate well with the MWMF biomass data, accounting for less than five percent of the statistical variation. It was concluded that tree-structured regression represented the optimum geospatial modeling technique due to a combination of model performance and efficiency/flexibility factors.
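
    A hedged sketch comparing OLS and tree-structured regression on synthetic predictors that stand in for the spectral and topographic variables described above:

      # Illustrative only: compare cross-validated R^2 of the two model families.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(42)
      X = rng.normal(size=(300, 6))                          # spectral + topographic predictors
      biomass = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 1, 300)

      for name, model in [("OLS", LinearRegression()),
                          ("tree", DecisionTreeRegressor(max_depth=4, random_state=0))]:
          r2 = cross_val_score(model, X, biomass, cv=5, scoring="r2").mean()
          print(name, round(r2, 3))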

  5. Full two-dimensional transient solutions of electrothermal aircraft blade deicing

    NASA Technical Reports Server (NTRS)

    Masiulaniec, K. C.; Keith, T. G., Jr.; Dewitt, K. J.; Leffel, K. L.

    1985-01-01

    Two finite difference methods are presented for the analysis of transient, two-dimensional responses of an electrothermal de-icer pad of an aircraft wing or blade with attached variable ice layer thickness. Both models employ a Crank-Nicolson iterative scheme, and use an enthalpy formulation to handle the phase change in the ice layer. The first technique makes use of a 'staircase' approach, fitting the irregular ice boundary with square computational cells. The second technique uses a body-fitted coordinate transform, and maps the exact shape of the irregular boundary into a rectangular body with uniformly square computational cells. The numerical solution takes place in the transformed plane. Initial results accounting for variable ice layer thickness are presented. Details of planned de-icing tests at NASA-Lewis, which will provide empirical verification for the above two methods, are also presented.
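
    As a simplified illustration of the enthalpy formulation, the sketch below uses an explicit one-dimensional update with constant properties rather than the paper's implicit two-dimensional Crank-Nicolson schemes:

      # Simplified, explicit enthalpy-method sketch (1-D, constant properties).
      import numpy as np

      rho, c, k, L = 917.0, 2100.0, 2.2, 334e3     # ice properties, latent heat (SI)
      nx, dx, dt, steps = 50, 1e-3, 0.01, 20000
      T = np.full(nx, -5.0)                        # initial ice temperature, deg C
      H = rho * c * T                              # enthalpy per unit volume

      def temperature(H):
          Ts = np.where(H < 0.0, H / (rho * c), 0.0)                   # solid / melting
          return np.where(H > rho * L, (H - rho * L) / (rho * c), Ts)  # fully melted

      for _ in range(steps):
          T = temperature(H)
          T[0] = 20.0                              # heated surface (Dirichlet)
          lap = np.zeros_like(T)
          lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
          lap[-1] = (T[-2] - T[-1]) / dx**2        # insulated outer surface
          H[1:] += dt * k * lap[1:]                # explicit enthalpy update

      print(int((H > 0).sum()), "nodes at or above the melting point")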

  6. λ-Repressor Oligomerization Kinetics at High Concentrations Using Fluorescence Correlation Spectroscopy in Zero-Mode Waveguides

    PubMed Central

    Samiee, K. T.; Foquet, M.; Guo, L.; Cox, E. C.; Craighead, H. G.

    2005-01-01

    Fluorescence correlation spectroscopy (FCS) has demonstrated its utility for measuring transport properties and kinetics at low fluorophore concentrations. In this article, we demonstrate that simple optical nanostructures, known as zero-mode waveguides, can be used to significantly reduce the FCS observation volume. This, in turn, allows FCS to be applied to solutions with significantly higher fluorophore concentrations. We derive an empirical FCS model accounting for one-dimensional diffusion in a finite tube with a simple exponential observation profile. This technique is used to measure the oligomerization of the bacteriophage λ repressor protein at micromolar concentrations. The results agree with previous studies utilizing conventional techniques. Additionally, we demonstrate that the zero-mode waveguides can be used to assay biological activity by measuring changes in diffusion constant as a result of ligand binding. PMID:15613638

  7. Comparison of analysis and flight test data for a drone aircraft with active flutter suppression

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Pototzky, A. S.

    1981-01-01

    This paper presents a comparison of analysis and flight test data for a drone aircraft equipped with an active flutter suppression system. Emphasis is placed on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are presented for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. In addition to presenting the mathematical models and a brief description of existing analytical techniques, an alternative analytical technique for obtaining closed-loop results is presented.
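
    The correction factor itself is simple to sketch; the lift-curve slopes and aerodynamic matrix below are invented placeholders, not values from the flight test:

      # Tiny illustration: scale analytical unsteady aerodynamic forces by the
      # measured-to-predicted lift-curve-slope ratio.
      import numpy as np

      cl_alpha_exp, cl_alpha_analytic = 4.8, 5.6        # per radian, assumed values
      factor = cl_alpha_exp / cl_alpha_analytic

      Q_analytic = np.array([[1.2 + 0.3j, 0.1j],        # generalized aero force matrix
                             [0.05, 0.9 + 0.2j]])       # (placeholder entries)
      Q_corrected = factor * Q_analytic
      print(round(factor, 3))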

  8. Aerodynamic heating rate distributions induced by trailing edge controls on hypersonic aircraft configurations at Mach 8

    NASA Technical Reports Server (NTRS)

    Kaufman, L. G., II; Johnson, C. B.

    1984-01-01

    Aerodynamic surface heating rate distributions in three dimensional shock wave boundary layer interaction flow regions are presented for a generic set of model configurations representative of the aft portion of hypersonic aircraft. Heat transfer data were obtained using the phase change coating technique (paint) and, at particular spanwise and streamwise stations for sample cases, by the thin wall transient temperature technique (thermocouples). Surface oil flow patterns are also shown. The good accuracy of the detailed heat transfer data, as attested in part by their repeatability, is attributable partially to the comparatively high temperature potential of the NASA-Langley Mach 8 Variable Density Tunnel. The data are well suited to help guide heating analyses of Mach 8 aircraft, and should be considered in formulating improvements to empirical analytic methods for calculating heat transfer rate coefficient distributions.
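
    A hedged sketch of the standard one-dimensional semi-infinite-slab reduction commonly used with phase-change coatings, not necessarily the exact procedure of this study; material properties and times are placeholders:

      # Solve for the heat transfer coefficient h from an observed paint-melt time
      # using (T_pc - T_i)/(T_aw - T_i) = 1 - exp(beta^2) erfc(beta), beta = h sqrt(t/(rho c k)).
      import numpy as np
      from scipy.special import erfcx
      from scipy.optimize import brentq

      def theta(h, t, rho_c_k):
          beta = h * np.sqrt(t / rho_c_k)
          return 1.0 - erfcx(beta)                  # erfcx(x) = exp(x^2) erfc(x), stable form

      T_i, T_aw, T_pc = 300.0, 800.0, 450.0         # initial, recovery, melt temperatures (K)
      rho_c_k = 4.0e5                               # assumed wall (rho c k), SI units
      t_melt = 12.0                                 # observed paint-melt time, s

      target = (T_pc - T_i) / (T_aw - T_i)
      h = brentq(lambda hh: theta(hh, t_melt, rho_c_k) - target, 1.0, 5000.0)
      print(round(h, 1), "W/m^2-K")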

  9. Prediction of light aircraft interior noise

    NASA Technical Reports Server (NTRS)

    Howlett, J. T.; Morales, D. A.

    1976-01-01

    At the present time, predictions of aircraft interior noise depend heavily on empirical correction factors derived from previous flight measurements. However, to design for acceptable interior noise levels and to optimize acoustic treatments, analytical techniques which do not depend on empirical data are needed. This paper describes a computerized interior noise prediction method for light aircraft. An existing analytical program (developed for commercial jets by Cockburn and Jolly in 1968) forms the basis of some modal analysis work which is described. The accuracy of this modal analysis technique for predicting low-frequency coupled acoustic-structural natural frequencies is discussed along with trends indicating the effects of varying parameters such as fuselage length and diameter, structural stiffness, and interior acoustic absorption.
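
    As a rough illustration of the low-frequency acoustic modes involved, the sketch below uses an idealized hard-walled rectangular cavity as a crude stand-in for the coupled fuselage problem; the cabin dimensions are assumed:

      # Hedged example: natural frequencies f = (c/2) sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2).
      from itertools import product
      from math import sqrt

      c = 343.0                       # speed of sound, m/s
      Lx, Ly, Lz = 6.0, 1.4, 1.3      # assumed cabin length, width, height (m)

      modes = sorted(
          (0.5 * c * sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2), (nx, ny, nz))
          for nx, ny, nz in product(range(4), repeat=3) if (nx, ny, nz) != (0, 0, 0)
      )
      for f, idx in modes[:6]:
          print(idx, round(f, 1), "Hz")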

  10. An Empire's Extract: Chemical Manipulations of Cinchona Bark in the Eighteenth-Century Spanish Atlantic World.

    PubMed

    Crawford, Matthew James

    2014-01-01

    In 1790, the Spanish Crown sent a "botanist-chemist" to South America to implement production of a chemical extract made from cinchona bark, a botanical medicament from the Andes used throughout the Atlantic World to treat malarial fevers. Even though the botanist-chemist's efforts to produce the extract failed, this episode offers important insight into the role of chemistry in the early modern Atlantic World. Well before the Spanish Crown tried to make it a tool of empire, chemistry provided a vital set of techniques that circulated among a variety of healers, who used such techniques to make botanical medicaments useful and intelligible in new ways.

  11. Selection of fire spread model for Russian fire behavior prediction system

    Treesearch

    Alexandra V. Volokitina; Kevin C. Ryan; Tatiana M. Sofronova; Mark A. Sofronov

    2010-01-01

    Mathematical modeling of fire behavior prediction is only possible if the models are supplied with an information database that provides spatially explicit input parameters for modeled area. Mathematical models can be of three kinds: 1) physical; 2) empirical; and 3) quasi-empirical (Sullivan, 2009). Physical models (Grishin, 1992) are of academic interest only because...

  12. Lead Slowing-Down Spectrometry for Spent Fuel Assay: FY11 Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Glen A.; Casella, Andrew M.; Haight, R. C.

    2011-08-01

    Executive Summary Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 collaboration activities. Progress made by the collaboration in FY2011 continues to indicate the promise of LSDS techniques applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model demonstrated the potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space. Similar results were obtained using a perturbation approach developed by LANL. Benchmark measurements have been successfully conducted at LANL and at RPI using their respective LSDS instruments. The ISU and UNLV collaborative effort is focused on the fabrication and testing of prototype fission chambers lined with ultra-depleted 238U and 232Th, and uranium deposition on a stainless steel disc using spiked U3O8 from room temperature ionic liquid was successful, with improving thickness obtained. In FY2012, the collaboration plans a broad array of activities. PNNL will focus on optimizing its empirical model and minimizing its reliance on calibration data, as well as continuing efforts on developing an analytical model. Additional measurements are planned at LANL and RPI. LANL measurements will include a Pu sample, which is expected to provide more counts at longer slowing-down times to help identify discrepancies between experimental data and MCNPX simulations. RPI measurements will include the assay of an entire fresh fuel assembly for the study of self-shielding effects as well as the ability to detect diversion by detecting a missing fuel pin in the fuel assembly. The development of threshold neutron sensors will continue, and UNLV will calibrate existing ultra-depleted uranium deposits at ISU.

  13. A DG approach to the numerical solution of the Stein-Stein stochastic volatility option pricing model

    NASA Astrophysics Data System (ADS)

    Hozman, J.; Tichý, T.

    2017-12-01

    Stochastic volatility models make it possible to capture real-world features of options better than the classical Black-Scholes treatment. Here we focus on the pricing of European-style options under the Stein-Stein stochastic volatility model, where the option value depends on time, on the price of the underlying asset and on the volatility, modelled as a function of a mean-reverting Ornstein-Uhlenbeck process. A standard mathematical approach to this model leads to a non-stationary second-order degenerate partial differential equation in two spatial variables, completed by a system of boundary and terminal conditions. In order to improve the numerical valuation process for such a pricing equation, we propose a numerical technique based on the discontinuous Galerkin method and the Crank-Nicolson scheme. Finally, reference numerical experiments on real market data illustrate comprehensive empirical findings on options with stochastic volatility.
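
    A reduced illustration of Crank-Nicolson time stepping applied to the one-dimensional Black-Scholes equation, standing in for the two-dimensional Stein-Stein problem solved with discontinuous Galerkin elements; grid sizes and market parameters are arbitrary assumptions:

      # Hedged sketch: theta = 1/2 (Crank-Nicolson) stepping backward from the payoff.
      import numpy as np

      K, r, sigma, T = 100.0, 0.03, 0.2, 1.0
      S_max, ns, nt = 300.0, 300, 200
      dS, dt = S_max / ns, T / nt
      S = np.linspace(0.0, S_max, ns + 1)
      V = np.maximum(S - K, 0.0)                         # call payoff at maturity

      i = np.arange(1, ns)
      a = 0.5 * (sigma**2 * i**2 - r * i)                # sub-diagonal coefficients
      b = -(sigma**2 * i**2 + r)                         # diagonal
      c = 0.5 * (sigma**2 * i**2 + r * i)                # super-diagonal
      A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
      M1 = np.eye(ns - 1) - 0.5 * dt * A                 # implicit half
      M2 = np.eye(ns - 1) + 0.5 * dt * A                 # explicit half

      for n in range(nt):
          tau = (n + 1) * dt
          rhs = M2 @ V[1:-1]
          rhs[0] += 0.5 * dt * a[0] * (0.0 + 0.0)        # lower boundary V(0) = 0
          rhs[-1] += 0.5 * dt * c[-1] * ((S_max - K * np.exp(-r * (tau - dt)))
                                         + (S_max - K * np.exp(-r * tau)))
          V[1:-1] = np.linalg.solve(M1, rhs)
          V[0], V[-1] = 0.0, S_max - K * np.exp(-r * tau)

      print("price at S=K:", np.interp(K, S, V))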

  14. Multi-scaling modelling in financial markets

    NASA Astrophysics Data System (ADS)

    Liu, Ruipeng; Aste, Tomaso; Di Matteo, T.

    2007-12-01

    In recent years, a new wave of interest has spurred the application of complexity science to finance, which may provide a guideline for understanding the mechanisms of financial markets, and researchers from different backgrounds have made increasing contributions, introducing new techniques and methodologies. In this paper, Markov-switching multifractal (MSM) models are briefly reviewed and the multi-scaling properties of different financial data are analyzed by computing the scaling exponents by means of the generalized Hurst exponent H(q). In particular, we have considered H(q) for price data, absolute returns and squared returns of different empirical financial time series. We have computed H(q) for data simulated from MSM models with Binomial and Lognormal distributions of the volatility components. The results demonstrate the capacity of the multifractal (MF) models to capture the stylized facts in finance, and the ability of the generalized Hurst exponent approach to detect the scaling features of financial time series.
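
    A minimal sketch of estimating H(q) from the scaling of q-th order increments, using a synthetic random walk in place of market data:

      # E|X(t+tau) - X(t)|^q ~ tau^(q H(q)); H(q) is the fitted log-log slope divided by q.
      import numpy as np

      def generalized_hurst(x, q, taus=range(1, 20)):
          x = np.asarray(x, float)
          taus = np.asarray(list(taus))
          kq = [np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus]
          slope = np.polyfit(np.log(taus), np.log(kq), 1)[0]   # slope = q * H(q)
          return slope / q

      prices = np.cumsum(np.random.default_rng(3).normal(size=20000))
      for q in (1, 2, 3):
          print(q, round(generalized_hurst(prices, q), 3))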

  15. A framework for treating DSM-5 alternative model for personality disorder features.

    PubMed

    Hopwood, Christopher J

    2018-04-15

    Despite its demonstrated empirical superiority over the DSM-5 Section 2 categorical model of personality disorders for organizing the features of personality pathology, limitations remain with regard to the translation of the DSM-5 Section 3 alternative model of personality disorders (AMPD) to clinical practice. The goal of this paper is to outline a general and preliminary framework for approaching treatment from the perspective of the AMPD. Specific techniques are discussed for the assessment and treatment of both Criterion A personality dysfunction and Criterion B maladaptive traits. A concise and step-by-step model is presented for clinical decision making with the AMPD, in the hopes of offering clinicians a framework for treating personality pathology and promoting further research on the clinical utility of the AMPD. Copyright © 2018 John Wiley & Sons, Ltd.

  16. A management and optimisation model for water supply planning in water deficit areas

    NASA Astrophysics Data System (ADS)

    Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón

    2014-07-01

    The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
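
    A toy version of such an allocation model with penalties on unmet demand can be sketched as a linear program; all numbers are invented for illustration:

      # Hedged sketch: two sources, two users, penalty terms for deficits.
      import numpy as np
      from scipy.optimize import linprog

      supply = [60.0, 40.0]                 # hm3 available from each source
      demand = [70.0, 50.0]                 # hm3 requested by each user
      cost = np.array([[0.10, 0.30],        # delivery cost, source s -> user u
                       [0.25, 0.15]])
      penalty = [1.0, 2.0]                  # loss per hm3 of unmet demand

      # variables: x[0,0], x[0,1], x[1,0], x[1,1], deficit_0, deficit_1
      c = np.concatenate([cost.ravel(), penalty])
      A_ub = [[1, 1, 0, 0, 0, 0],           # source 0 capacity
              [0, 0, 1, 1, 0, 0]]           # source 1 capacity
      A_eq = [[1, 0, 1, 0, 1, 0],           # deliveries + deficit = demand (user 0)
              [0, 1, 0, 1, 0, 1]]           # deliveries + deficit = demand (user 1)
      res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                    bounds=[(0, None)] * 6)
      print(res.x, res.fun)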

  17. Computational methods in the development of a knowledge-based system for the prediction of solid catalyst performance.

    PubMed

    Procelewska, Joanna; Galilea, Javier Llamas; Clerc, Frederic; Farrusseng, David; Schüth, Ferdi

    2007-01-01

    The objective of this work is the construction of a correlation between the characteristics of heterogeneous catalysts, encoded in a descriptor vector, and their experimentally measured performance in the propene oxidation reaction. In this paper the key issue in the modeling process, namely the selection of adequate input variables, is explored. Several data-driven feature selection strategies were applied in order to obtain an estimate of the differences in variance and information content of the various attributes and to compare their relative importance. Quantitative property-activity relationship techniques using probabilistic neural networks were used for the creation of various semi-empirical models. Finally, a robust classification model was obtained that assigns solid compounds, described by selected attributes as input, to an appropriate performance class in the model reaction. It became evident that mathematical support for the primary attribute set proposed by chemists can be highly desirable.
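
    A hedged sketch of the feature-selection-plus-classification workflow, with synthetic descriptors and an RBF-kernel SVM standing in for the probabilistic neural network used in the paper:

      # Illustrative only: rank descriptors by mutual information, then classify.
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=200, n_features=30, n_informative=6,
                                 n_classes=3, n_clusters_per_class=1, random_state=0)
      clf = make_pipeline(SelectKBest(mutual_info_classif, k=8),
                          SVC(kernel="rbf", gamma="scale"))
      print(cross_val_score(clf, X, y, cv=5).mean())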

  18. Understanding climate: A strategy for climate modeling and predictability research, 1985-1995

    NASA Technical Reports Server (NTRS)

    Thiele, O. (Editor); Schiffer, R. A. (Editor)

    1985-01-01

    The emphasis of the NASA strategy for climate modeling and predictability research is on the utilization of space technology to understand the processes which control the Earth's climate system and its sensitivity to natural and man-induced changes, and to assess the possibilities for climate prediction on time scales from about two weeks to several decades. Because the climate is a complex multi-phenomena system, which interacts on a wide range of space and time scales, the diversity of scientific problems addressed requires a hierarchy of models along with the application of modern empirical and statistical techniques which exploit the extensive current and potential future global data sets afforded by space observations. Observing system simulation experiments, exploiting these models and data, will also provide the foundation for the future climate space observing system, e.g., the Earth Observing System (EOS, 1985) and the Tropical Rainfall Measuring Mission (TRMM; North et al., NASA, 1984).

  19. An empirical model of water quality for use in rapid management strategy evaluation in Southeast Queensland, Australia.

    PubMed

    de la Mare, William; Ellis, Nick; Pascual, Ricardo; Tickell, Sharon

    2012-04-01

    Simulation models have been widely adopted in fisheries for management strategy evaluation (MSE). However, in catchment management of water quality, MSE is hampered by the complexity of both decision space and the hydrological process models. Empirical models based on monitoring data provide a feasible alternative to process models; they run much faster and, by conditioning on data, they can simulate realistic responses to management actions. Using 10 years of water quality indicators from Queensland, Australia, we built an empirical model suitable for rapid MSE that reproduces the water quality variables' mean and covariance structure, adjusts the expected indicators through local management effects, and propagates effects downstream by capturing inter-site regression relationships. Empirical models enable managers to search the space of possible strategies using rapid assessment. They provide not only realistic responses in water quality indicators but also variability in those indicators, allowing managers to assess strategies in an uncertain world. Copyright © 2012 Elsevier Ltd. All rights reserved.
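
    A minimal sketch of the conditioning idea, assuming two monitoring sites, a Gaussian indicator model and a single inter-site regression; all numbers are invented:

      # Hedged sketch: reproduce the monitored mean/covariance, apply a local
      # management shift, and propagate it downstream via a fitted regression.
      import numpy as np

      rng = np.random.default_rng(7)
      monitoring = rng.multivariate_normal([4.0, 3.5], [[1.0, 0.6], [0.6, 0.8]], 500)
      mu, cov = monitoring.mean(axis=0), np.cov(monitoring.T)

      # inter-site regression: downstream indicator ~ upstream indicator
      slope, intercept = np.polyfit(monitoring[:, 0], monitoring[:, 1], 1)

      sim_upstream = rng.multivariate_normal(mu, cov, 1000)[:, 0]
      sim_upstream -= 0.5                                  # assumed management effect
      sim_downstream = intercept + slope * sim_upstream + rng.normal(0, 0.4, 1000)
      print(sim_downstream.mean())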

  20. Development of a detector model for generation of synthetic radiographs of cargo containers

    NASA Astrophysics Data System (ADS)

    White, Timothy A.; Bredt, Ofelia P.; Schweppe, John E.; Runkle, Robert C.

    2008-05-01

    Creation of synthetic cargo-container radiographs that possess attributes of their empirical counterparts requires accurate models of the imaging-system response. Synthetic radiographs serve as surrogate data in studies aimed at determining system effectiveness for detecting target objects when it is impractical to collect a large set of empirical radiographs. In the case where a detailed understanding of the detector system is available, an accurate detector model can be derived from first-principles. In the absence of this detail, it is necessary to derive empirical models of the imaging-system response from radiographs of well-characterized objects. Such a case is the topic of this work, where we demonstrate the development of an empirical model of a gamma-ray radiography system with the intent of creating a detector-response model that translates uncollided photon transport calculations into realistic synthetic radiographs. The detector-response model is calibrated to field measurements of well-characterized objects thus incorporating properties such as system sensitivity, spatial resolution, contrast and noise.
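
    A hedged sketch of an empirical detector-response chain applied to an ideal transmission map (point-spread-function blur, gain, counting and electronic noise); the parameter values are placeholders, not fitted values:

      # Illustrative only: turn an uncollided-transmission map into a noisy,
      # blurred synthetic radiograph.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(11)
      transmission = np.ones((128, 128))
      transmission[40:90, 50:80] = 0.35              # an attenuating cargo object

      def synthetic_radiograph(trans, psf_sigma=1.5, gain=5000.0, read_noise=20.0):
          blurred = gaussian_filter(trans, psf_sigma)        # spatial resolution
          counts = rng.poisson(gain * blurred)               # counting statistics
          return counts + rng.normal(0.0, read_noise, trans.shape)  # electronic noise

      image = synthetic_radiograph(transmission)
      print(image.mean(), image.std())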
