Sample records for "model performance varied"

  1. Performance Optimizing Adaptive Control with Time-Varying Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The control synthesis involves the design of a performance optimizing adaptive controller from a subset of control inputs. The resulting effect of the performance optimizing adaptive controller is to modify the initial reference model into a time-varying reference model which satisfies the performance optimization requirement obtained from an optimal control problem. The time-varying reference model modification is accomplished by the real-time solutions of the time-varying Riccati and Sylvester equations coupled with the least-squares parameter estimation of the sensitivities of the performance metric. The effectiveness of the proposed method is demonstrated by an application of maneuver load alleviation control for a flexible aircraft.

  2. Performance of time-varying predictors in multilevel models under an assumption of fixed or random effects.

    PubMed

    Baird, Rachel; Maxwell, Scott E

    2016-06-01

    Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared.
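
    Both suggested procedures hinge on a likelihood ratio test between the fixed-effect (null) and random-effect models. A minimal sketch of that comparison (the log-likelihood values below are invented for illustration; note that because the random-effect variance sits on the boundary of its parameter space, the plain chi-square p-value is conservative, and a 50:50 chi-square mixture is often used instead):

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_fixed, loglik_random, df_diff=1):
    """LRT of a fixed-effect model (null) against a random-effect model.

    df_diff is the number of extra parameters in the random-effect model.
    """
    stat = 2.0 * (loglik_random - loglik_fixed)
    p_value = chi2.sf(stat, df_diff)
    return stat, p_value

# Hypothetical log-likelihoods from fitting the two nested models
stat, p = likelihood_ratio_test(loglik_fixed=-1052.3, loglik_random=-1047.8)
print(stat, p)  # a small p favors keeping the random effect
```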

  3. Applicability of common stomatal conductance models in maize under varying soil moisture conditions.

    PubMed

    Wang, Qiuling; He, Qijin; Zhou, Guangsheng

    2018-07-01

    In the context of climate warming, the varying soil moisture caused by precipitation pattern change will affect the applicability of stomatal conductance models, thereby affecting the simulation accuracy of carbon-nitrogen-water cycles in ecosystems. We studied the applicability of four common stomatal conductance models, the Jarvis, Ball-Woodrow-Berry (BWB), Ball-Berry-Leuning (BBL) and unified stomatal optimization (USO) models, based on summer maize leaf gas exchange data from a soil moisture consecutive decrease manipulation experiment. The results showed that under varying soil moisture conditions the USO model performed best, followed by the BBL and BWB models, with the Jarvis model performing worst. The effects of soil moisture made a difference in the relative performance among the models. Introducing a water response function improved the performance of the Jarvis, BWB, and USO models, decreasing the normalized root mean square error (NRMSE) by 15.7%, 16.6% and 3.9%, respectively; however, its effect on the BBL model was negative, increasing the NRMSE by 5.3%. The models of Jarvis, BWB, BBL and USO were applicable within different ranges of soil relative water content (i.e., 55%-65%, 56%-67%, 37%-79% and 37%-95%, respectively) based on the 95% confidence limits. Moreover, after introducing a water response function, the applicability of the Jarvis and BWB models improved. The USO model performed best with or without the water response function and was applicable under varying soil moisture conditions. Our results provide a basis for selecting appropriate stomatal conductance models under drought conditions.
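
    The NRMSE comparison reported above is easy to reproduce for any pair of model variants; a small sketch, assuming NRMSE is normalized by the observed range (the paper may normalize by the mean instead, and all data below are invented):

```python
import numpy as np

def nrmse(observed, predicted):
    """Normalized root mean square error, as a fraction of the observed range."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

# Invented stomatal conductance observations vs. two hypothetical model variants
obs = np.array([0.10, 0.18, 0.25, 0.31, 0.40])
model_a = np.array([0.12, 0.17, 0.27, 0.30, 0.38])  # e.g. with a water response function
model_b = np.array([0.15, 0.14, 0.30, 0.27, 0.45])  # e.g. without one

print(nrmse(obs, model_a), nrmse(obs, model_b))  # the lower value fits better
```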

  4. Formulating Spatially Varying Performance in the Statistical Fusion Framework

    PubMed Central

    Landman, Bennett A.

    2012-01-01

    To date, label fusion methods have primarily relied either on global (e.g. STAPLE, globally weighted vote) or voxelwise (e.g. locally weighted vote) performance models. Optimality of the statistical fusion framework hinges upon the validity of the stochastic model of how a rater errs (i.e., the labeling process model). Hitherto, approaches have tended to focus on the extremes of potential models. Herein, we propose an extension to the STAPLE approach to seamlessly account for spatially varying performance by extending the performance level parameters to account for a smooth, voxelwise performance level field that is unique to each rater. This approach, Spatial STAPLE, provides significant improvements over state-of-the-art label fusion algorithms in both simulated and empirical data sets. PMID:22438513

  5. Estimating varying coefficients for partial differential equation models.

    PubMed

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2017-09-01

    Partial differential equations (PDEs) are used to model complex dynamical systems in multiple dimensions, and their parameters often have important scientific interpretations. In some applications, PDE parameters are not constant but can change depending on the values of covariates, a feature that we call varying coefficients. We propose a parameter cascading method to estimate varying coefficients in PDE models from noisy data. Our estimates of the varying coefficients are shown to be consistent and asymptotically normally distributed. The performance of our method is evaluated by a simulation study and by an empirical study estimating three varying coefficients in a PDE model arising from LIDAR data.

  6. Commercial absorption chiller models for evaluation of control strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koeppel, E.A.; Klein, S.A.; Mitchell, J.W.

    1995-08-01

    A steady-state computer simulation model of a direct fired double-effect water-lithium bromide absorption chiller in the parallel-flow configuration was developed from first principles. Unknown model parameters such as heat transfer coefficients were determined by matching the model's calculated state points and coefficient of performance (COP) against nominal full-load operating data and COPs obtained from a manufacturer's catalog. The model compares favorably with the manufacturer's performance ratings for varying water circuit (chilled and cooling) temperatures at full load conditions and for chiller part-load performance. The model was used (1) to investigate the effect of varying the water circuit flow rates with the chiller load and (2) to optimize chiller part-load performance with respect to the distribution and flow of the weak solution.

  7. Nonlinear control of linear parameter varying systems with applications to hypersonic vehicles

    NASA Astrophysics Data System (ADS)

    Wilcox, Zachary Donald

    The focus of this dissertation is to design a controller for linear parameter varying (LPV) systems, apply it specifically to air-breathing hypersonic vehicles (HSVs), and examine the interplay between control performance and the structural dynamics design. Specifically, a Lyapunov-based continuous robust controller is developed that yields exponential tracking of a reference model, despite the presence of bounded, nonvanishing disturbances. The hypersonic vehicle has time varying parameters, specifically temperature profiles, and its dynamics can be reduced to an LPV system with additive disturbances. Since the HSV can be modeled as an LPV system, the proposed control design is directly applicable. The control performance is directly examined through simulations. A wide variety of applications exist that can be effectively modeled as LPV systems. In particular, flight systems have historically been modeled as LPV systems and associated control tools have been applied, such as gain-scheduling, linear matrix inequalities (LMIs), linear fractional transformations (LFT), and mu-types. However, as the type of flight environments and trajectories become more demanding, the traditional LPV controllers may no longer be sufficient. In particular, hypersonic flight vehicles present an inherently difficult problem because of the nonlinear aerothermoelastic coupling effects in the dynamics. HSV flight conditions produce temperature variations that can alter both the structural dynamics and flight dynamics. Starting with the full nonlinear dynamics, the aerothermoelastic effects are modeled by a temperature dependent, parameter varying state-space representation with added disturbances. The model includes an uncertain parameter varying state matrix, an uncertain parameter varying non-square (column deficient) input matrix, and an additive bounded disturbance. In this dissertation, a robust dynamic controller is formulated for an uncertain and disturbed LPV system.
The developed controller is then applied to an HSV model, and a Lyapunov analysis is used to prove global exponential reference model tracking in the presence of uncertainty in the state and input matrices and exogenous disturbances. Simulations with a spectrum of gains and temperature profiles on the full nonlinear dynamic model of the HSV are used to illustrate the performance and robustness of the developed controller. In addition, this work considers how the performance of the developed controller varies over a wide variety of control gains and temperature profiles and how it can be optimized with respect to different performance metrics. Specifically, various temperature profile models and related nonlinear temperature dependent disturbances are used to characterize the relative control performance and effort for each model. Examining such metrics as a function of temperature provides a potential inroad to examine the interplay between structural/thermal protection design and control development and has application for future HSV design and control implementation.

  8. Predicting the cumulative risk of death during hospitalization by modeling weekend, weekday and diurnal mortality risks.

    PubMed

    Coiera, Enrico; Wang, Ying; Magrabi, Farah; Concha, Oscar Perez; Gallego, Blanca; Runciman, William

    2014-05-21

    Current prognostic models factor in patient- and disease-specific variables but do not consider cumulative risks of hospitalization over time. We developed risk models of the likelihood of death associated with cumulative exposure to hospitalization, based on time-varying risks of hospitalization over any given day, as well as day of the week. Model performance was evaluated alone, and in combination with simple disease-specific models. Patients admitted between 2000 and 2006 from 501 public and private hospitals in NSW, Australia were used for training and 2007 data for evaluation. The impact of hospital care delivered over different days of the week and/or times of the day was modeled by separating hospitalization risk into 21 separate time periods (morning, day, night across the days of the week). Three models were developed to predict death up to 7 days post-discharge: (1) a simple background risk model using age and gender; (2) a time-varying risk model for exposure to hospitalization (admission time, days in hospital); (3) disease-specific models (Charlson co-morbidity index, DRG). Combining these three generated a full model. Models were evaluated by accuracy, AUC, and the Akaike and Bayesian information criteria. There was a clear diurnal rhythm to hospital mortality in the data set, peaking in the evening, as well as the well-known 'weekend effect' where mortality peaks with weekend admissions. Individual models had modest performance on the test data set (AUC 0.71, 0.79 and 0.79, respectively). The combined model including time-varying risk, however, yielded an average AUC of 0.92. This model performed best for stays up to 7 days (93% of admissions), peaking at days 3 to 5 (AUC 0.94). Risks of hospitalization vary not just with the day of the week but also the time of the day, and can be used to make predictions about the cumulative risk of death associated with an individual's hospitalization.
Combining disease specific models with such time varying- estimates appears to result in robust predictive performance. Such risk exposure models should find utility both in enhancing standard prognostic models as well as estimating the risk of continuation of hospitalization.

  9. A predictive model of flight crew performance in automated air traffic control and flight management operations

    DOT National Transportation Integrated Search

    1995-01-01

    Prepared ca. 1995. This paper describes Air-MIDAS, a model of pilot performance in interaction with varied levels of automation in flight management operations. The model was used to predict the performance of a two person flight crew responding to c...

  10. Varying execution discipline to increase performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, P.L.; Maccabe, A.B.

    1993-12-22

    This research investigates the relationship between execution discipline and performance. The hypothesis has two parts: 1. Different execution disciplines exhibit different performance for different computations, and 2. These differences can be effectively predicted by heuristics. A machine model is developed that can vary its execution discipline. That is, the model can execute a given program using either the control-driven, data-driven or demand-driven execution discipline. This model is referred to as a "variable-execution-discipline" machine. The instruction set for the model is the Program Dependence Web (PDW). The first part of the hypothesis will be tested by simulating the execution of the machine model on a suite of computations, based on the Livermore Fortran Kernel (LFK) Test (a.k.a. the Livermore Loops), using all three execution disciplines. Heuristics are developed to predict relative performance. These heuristics predict (a) the execution time under each discipline for one iteration of each loop and (b) the number of iterations taken by that loop; then the heuristics use those predictions to develop a prediction for the execution of the entire loop. Similar calculations are performed for branch statements. The second part of the hypothesis will be tested by comparing the results of the simulated execution with the predictions produced by the heuristics. If the hypothesis is supported, then the door is open for the development of machines that can vary execution discipline to increase performance.

  11. Analysis of Rosen piezoelectric transformers with a varying cross-section.

    PubMed

    Xue, H; Yang, J; Hu, Y

    2008-07-01

    We study the effects of a varying cross-section on the performance of Rosen piezoelectric transformers operating with length extensional modes of rods. A theoretical analysis is performed using an extended version of a one-dimensional model developed in a previous paper. Numerical results based on the theoretical analysis are presented.

  12. Pulsed Inductive Plasma Acceleration: Performance Optimization Criteria

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.

    2014-01-01

    Optimization criteria for pulsed inductive plasma acceleration are developed using an acceleration model consisting of a set of coupled circuit equations describing the time-varying current in the thruster and a one-dimensional momentum equation. The model is nondimensionalized, resulting in the identification of several scaling parameters that are varied to optimize the performance of the thruster. The analysis reveals the benefits of underdamped current waveforms and leads to a performance optimization criterion that requires the matching of the natural period of the discharge and the acceleration timescale imposed by the inertia of the working gas. In addition, the performance increases when a greater fraction of the propellant is initially located nearer to the inductive acceleration coil. While the dimensionless model uses a constant temperature formulation in calculating performance, the scaling parameters that yield the optimum performance are shown to be relatively invariant if a self-consistent description of energy in the plasma is instead used.

  13. Acoustic results of the Boeing model 360 whirl tower test

    NASA Astrophysics Data System (ADS)

    Watts, Michael E.; Jordan, David

    1990-09-01

    An evaluation is presented of whirl tower test results for the Model 360 helicopter's advanced, high-performance four-bladed composite rotor system, intended to facilitate over-200-knot flight. During these performance measurements, acoustic data were acquired by seven microphones. A comparison of whirl-tower tests with theory indicates that theoretical prediction accuracies vary with both microphone position and the inclusion of ground reflection. Prediction errors varied from 0 to 40 percent of the measured signal-to-peak amplitude.

  14. Path loss modeling and performance trade-off study for short-range non-line-of-sight ultraviolet communications.

    PubMed

    Chen, Gang; Xu, Zhengyuan; Ding, Haipeng; Sadler, Brian

    2009-03-02

    We consider outdoor non-line-of-sight deep ultraviolet (UV) solar blind communications at ranges up to 100 m, with different transmitter and receiver geometries. We propose an empirical channel path loss model, and fit the model based on extensive measurements. We observe range-dependent power decay with a power exponent that varies from 0.4 to 2.4 with varying geometry. We compare with the single scattering model, and show that the single scattering assumption leads to a model that is not accurate for small apex angles. Our model is then used to study fundamental communication system performance trade-offs among transmitted optical power, range, link geometry, data rate, and bit error rate. Both weak and strong solar background radiation scenarios are considered to bound detection performance. These results provide guidelines to system design.
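
    A log-distance power-law fit of the kind described, where the loss exponent alpha varies with geometry, can be sketched as follows (the model form L(r) = L0 + 10*alpha*log10(r) is a standard assumption here, not necessarily the authors' exact parameterization; the data below are synthetic):

```python
import numpy as np

def fit_path_loss_exponent(ranges_m, losses_db):
    """Least-squares fit of L(r) = L0 + 10*alpha*log10(r).

    alpha is the range-dependent power-decay exponent; L0 is the loss at 1 m.
    """
    x = 10.0 * np.log10(np.asarray(ranges_m, dtype=float))
    alpha, L0 = np.polyfit(x, np.asarray(losses_db, dtype=float), 1)
    return alpha, L0

# Synthetic noise-free measurements generated with alpha = 1.8, L0 = 90 dB
r = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
loss = 90.0 + 10.0 * 1.8 * np.log10(r)
alpha, L0 = fit_path_loss_exponent(r, loss)
print(alpha, L0)  # recovers the generating parameters
```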

  15. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model included first-order random walk time-varying coefficients for both the calendar trend and the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. 
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
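
    The mean absolute percentage error used above for short-term predictive assessment is straightforward to compute; a sketch with invented weekly dengue counts:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent (actual values must be nonzero)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Hypothetical observed vs. predicted weekly case counts
weekly_cases = np.array([40, 55, 62, 50])
predicted = np.array([44, 50, 60, 53])
print(mape(weekly_cases, predicted))  # percentage error averaged over the weeks
```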

  16. Using video modeling with substitutable loops to teach varied play to children with autism.

    PubMed

    Dupere, Sally; MacDonald, Rebecca P F; Ahearn, William H

    2013-01-01

    Children with autism often engage in repetitive play with little variation in the actions performed or items used. This study examined the use of video modeling with scripted substitutable loops on children's pretend play with trained and untrained characters. Three young children with autism were shown a video model of scripted toy play that included a substitutable loop that allowed various characters to perform the same actions and vocalizations. Three characters were modeled with the substitutable loop during training sessions, and 3 additional characters were present in the video but never modeled. Following video modeling, all the participants incorporated untrained characters into their play, but the extent to which they did so varied.

  17. Variability among electronic cigarettes in the pressure drop, airflow rate, and aerosol production.

    PubMed

    Williams, Monique; Talbot, Prue

    2011-12-01

    This study investigated the performance of electronic cigarettes (e-cigarettes), compared different models within a brand, compared identical copies of the same model within a brand, and examined performance using different protocols. Airflow rate required to generate aerosol, pressure drop across e-cigarettes, and aerosol density were examined using three different protocols. First 10 puff protocol: The airflow rate required to produce aerosol and aerosol density varied among brands, while pressure drop varied among brands and between the same model within a brand. Total air hole area correlated with pressure drop for some brands. Smoke-out protocol: E-cigarettes within a brand generally performed similarly when puffed to exhaustion; however, there was considerable variation between brands in pressure drop, airflow rate required to produce aerosol, and the total number of puffs produced. With this protocol, aerosol density varied significantly between puffs and gradually declined. Consecutive trial protocol: Two copies of one model were subjected to 11 puffs in three consecutive trials with breaks between trials. One copy performed similarly in each trial, while the second copy of the same model produced little aerosol during the third trial. The different performance properties of the two units were attributed to the atomizers. There was significant variability between and within brands in the airflow rate required to produce aerosol, pressure drop, length of time cartridges lasted, and production of aerosol. Variation in performance properties within brands suggests a need for better quality control during e-cigarette manufacture.

  18. Sediment delivery modeling in practice: Comparing the effects of watershed characteristics and data resolution across hydroclimatic regions.

    PubMed

    Hamel, Perrine; Falinski, Kim; Sharp, Richard; Auerbach, Daniel A; Sánchez-Canales, María; Dennedy-Frank, P James

    2017-02-15

    Geospatial models are commonly used to quantify sediment contributions at the watershed scale. However, the sensitivity of these models to variation in hydrological and geomorphological features, in particular to land use and topography data, remains uncertain. Here, we assessed the performance of one such model, the InVEST sediment delivery model, for six sites comprising a total of 28 watersheds varying in area (6-13,500 km²), climate (tropical, subtropical, mediterranean), topography, and land use/land cover. For each site, we compared uncalibrated and calibrated model predictions with observations and alternative models. We then performed correlation analyses between model outputs and watershed characteristics, followed by sensitivity analyses on the digital elevation model (DEM) resolution. Model performance varied across sites (overall r² = 0.47), but estimates of the magnitude of specific sediment export were as or more accurate than global models. We found significant correlations between metrics of sediment delivery and watershed characteristics, including erosivity, suggesting that empirical relationships may ultimately be developed for ungauged watersheds. Model sensitivity to DEM resolution varied across and within sites, but did not correlate with other observed watershed variables. These results were corroborated by sensitivity analyses performed on synthetic watersheds ranging in mean slope and DEM resolution. Our study provides modelers using InVEST or similar geospatial sediment models with practical insights into model behavior and structural uncertainty: first, comparison of model predictions across regions is possible when environmental conditions differ significantly; second, local knowledge on the sediment budget is needed for calibration; and third, model outputs often show significant sensitivity to DEM resolution.

  19. Identification of Time-Varying Pilot Control Behavior in Multi-Axis Control Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2012-01-01

    Recent developments in fly-by-wire control architectures for rotorcraft have introduced new interest in the identification of time-varying pilot control behavior in multi-axis control tasks. In this paper a maximum likelihood estimation method is used to estimate the parameters of a pilot model with time-dependent sigmoid functions to characterize time-varying human control behavior. An experiment was performed by 9 general aviation pilots who had to perform a simultaneous roll and pitch control task with time-varying aircraft dynamics. In 8 different conditions, the axis containing the time-varying dynamics and the growth factor of the dynamics were varied, allowing for an analysis of the performance of the estimation method when estimating time-dependent parameter functions. In addition, a detailed analysis of pilots' adaptation to the time-varying aircraft dynamics in both the roll and pitch axes could be performed. Pilot control behavior in both axes was significantly affected by the time-varying aircraft dynamics in roll and pitch, and by the growth factor. The main effect was found in the axis that contained the time-varying dynamics. However, pilot control behavior also changed over time in the axis not containing the time-varying aircraft dynamics. This indicates that some cross coupling exists in the perception and control processes between the roll and pitch axes.
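
    A time-dependent sigmoid of the kind used to characterize time-varying pilot parameters can be sketched as a simple scheduling function (the parameter names and values are illustrative, not taken from the paper):

```python
import math

def sigmoid_parameter(t, p_initial, p_final, t_mid, growth):
    """Pilot-model parameter that transitions smoothly from p_initial to p_final
    around time t_mid, with the transition steepness set by the growth factor."""
    return p_initial + (p_final - p_initial) / (1.0 + math.exp(-growth * (t - t_mid)))

# Example: a gain that doubles as the aircraft dynamics change around t = 50 s
print(sigmoid_parameter(0.0, 1.0, 2.0, 50.0, 0.5))    # still near the initial value
print(sigmoid_parameter(100.0, 1.0, 2.0, 50.0, 0.5))  # settled near the final value
```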

  20. Image Discrimination Models With Stochastic Channel Selection

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap considering the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters result in a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels rather than using a fixed set can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.

  1. EFFECTS OF VERTICAL-LAYER STRUCTURE AND BOUNDARY CONDITIONS ON CMAQ-V4.5 AND V4.6 MODELS

    EPA Science Inventory

    This work is aimed at determining whether the increased vertical layers in CMAQ provide substantially improved model performance and at assessing whether using the spatially and temporally varying boundary conditions from GEOS-CHEM offers improved model performance as compared to the d...

  2. Time-varying volatility in Malaysian stock exchange: An empirical study using multiple-volatility-shift fractionally integrated model

    NASA Astrophysics Data System (ADS)

    Cheong, Chin Wen

    2008-02-01

    This article investigated the influences of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed substantial reduction in fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden change in volatility performed better in the estimation and specification evaluations.

  3. Improved LTVMPC design for steering control of autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Velhal, Shridhar; Thomas, Susy

    2017-01-01

    An improved linear time-varying model predictive control (LTV MPC) scheme for steering control of an autonomous vehicle running on a slippery road is presented. The control strategy is designed such that the vehicle will follow the predefined trajectory with the highest possible entry speed. In linear time-varying model predictive control, the nonlinear vehicle model is successively linearized at each sampling instant. This linear time-varying model is used to design an MPC that predicts the future horizon. By incorporating the predicted input horizon in each successive linearization, the effectiveness of the controller is improved. Tracking performance using front-wheel steering and braking at all four wheels is presented to illustrate the effectiveness of the proposed method.

  4. Numerical modeling of rapidly varying flows using HEC-RAS and WSPG models.

    PubMed

    Rao, Prasada; Hromadka, Theodore V

    2016-01-01

    The performance of two popular hydraulic models (HEC-RAS and WSPG) for modeling hydraulic jump in an open channel is investigated. The numerical solutions are compared with a new experimental data set obtained for varying channel bottom slopes and flow rates. Both models satisfactorily predict the flow depths and the location of the jump. The results indicate that the models' output is sensitive to the chosen roughness coefficient. For this application, the WSPG model is easier to implement, with fewer input variables.
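
    As a sanity check on any hydraulic-jump computation of this kind, the sequent-depth ratio in a horizontal rectangular channel follows the Bélanger momentum relation (a standard textbook formula, independent of either model):

```python
import math

def sequent_depth_ratio(froude_upstream):
    """Belanger equation: downstream/upstream depth ratio y2/y1 for a hydraulic
    jump in a horizontal rectangular channel, given the upstream Froude number."""
    Fr1 = froude_upstream
    return 0.5 * (math.sqrt(1.0 + 8.0 * Fr1 ** 2) - 1.0)

print(sequent_depth_ratio(3.0))  # a Froude-3 inflow nearly quadruples in depth
```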

  5. Sensitivity of Rainfall-runoff Model Parametrization and Performance to Potential Evaporation Inputs

    NASA Astrophysics Data System (ADS)

    Jayathilake, D. I.; Smith, T. J.

    2017-12-01

    Many watersheds of interest are confronted with insufficient data and poor process understanding. Therefore, understanding the relative importance of input data types and the impact of different qualities on model performance, parameterization, and fidelity is critically important to improving hydrologic models. In this paper, the change in model parameterization and performance are explored with respect to four different potential evapotranspiration (PET) products of varying quality. For each PET product, two widely used, conceptual rainfall-runoff models are calibrated with multiple objective functions to a sample of 20 basins included in the MOPEX data set and analyzed to understand how model behavior varied. Model results are further analyzed by classifying catchments as energy- or water-limited using the Budyko framework. The results demonstrated that model fit was largely unaffected by the quality of the PET inputs. However, model parameterizations were clearly sensitive to PET inputs, as their production parameters adjusted to counterbalance input errors. Despite this, changes in model robustness were not observed for either model across the four PET products, although robustness was affected by model structure.

  6. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  7. Handling Qualities of Model Reference Adaptive Controllers with Varying Complexity for Pitch-Roll Coupled Failures

    NASA Technical Reports Server (NTRS)

    Schaefer, Jacob; Hanson, Curt; Johnson, Marcus A.; Nguyen, Nhan

    2011-01-01

    Three model reference adaptive controllers (MRAC) with varying levels of complexity were evaluated on a high performance jet aircraft and compared along with a baseline nonlinear dynamic inversion controller. The handling qualities and performance of the controllers were examined during failure conditions that induce coupling between the pitch and roll axes. Results from flight tests showed with a roll to pitch input coupling failure, the handling qualities went from Level 2 with the baseline controller to Level 1 with the most complex MRAC tested. A failure scenario with the left stabilator frozen also showed improvement with the MRAC. Improvement in performance and handling qualities was generally seen as complexity was incrementally added; however, added complexity usually corresponds to increased verification and validation effort required for certification. The tradeoff between complexity and performance is thus important to a controls system designer when implementing an adaptive controller on an aircraft. This paper investigates this relation through flight testing of several controllers of vary complexity.

  8. Experimental aerodynamic and acoustic model testing of the Variable Cycle Engine (VCE) testbed coannular exhaust nozzle system

    NASA Technical Reports Server (NTRS)

    Nelson, D. P.; Morris, P. M.

    1980-01-01

    Aerodynamic performance and jet noise characteristics of a one sixth scale model of the variable cycle engine testbed exhaust system were obtained in a series of static tests over a range of simulated engine operating conditions. Model acoustic data were acquired. Data were compared to predictions of coannular model nozzle performance. The model, tested with an without a hardwall ejector, had a total flow area equivalent to a 0.127 meter (5 inch) diameter conical nozzle with a 0.65 fan to primary nozzle area ratio and a 0.82 fan nozzle radius ratio. Fan stream temperatures and velocities were varied from 422 K to 1089 K (760 R to 1960 R) and 434 to 755 meters per second (1423 to 2477 feet per second). Primary stream properties were varied from 589 to 1089 K (1060 R to 1960 R) and 353 to 600 meters per second (1158 to 1968 feet per second). Exhaust plume velocity surveys were conducted at one operating condition with and without the ejector installed. Thirty aerodynamic performance data points were obtained with an unheated air supply. Fan nozzle pressure ratio was varied from 1.8 to 3.2 at a constant primary pressure ratio of 1.6; primary pressure ratio was varied from 1.4 to 2.4 while holding fan pressure ratio constant at 2.4. Operation with the ejector increased nozzle thrust coefficient 0.2 to 0.4 percent.

  9. Predicting nitrogen loading with land-cover composition: how can watershed size affect model performance?

    PubMed

    Zhang, Tao; Yang, Xiaojun

    2013-01-01

    Watershed-wide land-cover proportions can be used to predict the in-stream non-point source pollutant loadings through regression modeling. However, the model performance can vary greatly across different study sites and among various watersheds. Existing literature has shown that this type of regression modeling tends to perform better for large watersheds than for small ones, and that such a performance variation has been largely linked with different interwatershed landscape heterogeneity levels. The purpose of this study is to further examine the previously mentioned empirical observation based on a set of watersheds in the northern part of Georgia (USA) to explore the underlying causes of the variation in model performance. Through the combined use of the neutral landscape modeling approach and a spatially explicit nutrient loading model, we tested whether the regression model performance variation over the watershed groups ranging in size is due to the different watershed landscape heterogeneity levels. We adopted three neutral landscape modeling criteria that were tied with different similarity levels in watershed landscape properties and used the nutrient loading model to estimate the nitrogen loads for these neutral watersheds. Then we compared the regression model performance for the real and neutral landscape scenarios, respectively. We found that watershed size can affect the regression model performance both directly and indirectly. Along with the indirect effect through interwatershed heterogeneity, watershed size can directly affect the model performance over the watersheds varying in size. We also found that the regression model performance can be more significantly affected by other physiographic properties shaping nitrogen delivery effectiveness than the watershed land-cover heterogeneity. 
This study contrasts with many existing studies because it goes beyond hypothesis formulation based on empirical observations and into hypothesis testing to explore the fundamental mechanism.

  10. LiDAR based prediction of forest biomass using hierarchical models with spatially varying coefficients

    USGS Publications Warehouse

    Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.

    2015-01-01

    Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.

  11. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  12. Leveraging organismal biology to forecast the effects of climate change.

    PubMed

    Buckley, Lauren B; Cannistra, Anthony F; John, Aji

    2018-04-26

    Despite the pressing need for accurate forecasts of ecological and evolutionary responses to environmental change, commonly used modelling approaches exhibit mixed performance because they omit many important aspects of how organisms respond to spatially and temporally variable environments. Integrating models based on organismal phenotypes at the physiological, performance and fitness levels can improve model performance. We summarize current limitations of environmental data and models and discuss potential remedies. The paper reviews emerging techniques for sensing environments at fine spatial and temporal scales, accounting for environmental extremes, and capturing how organisms experience the environment. Intertidal mussel data illustrate biologically important aspects of environmental variability. We then discuss key challenges in translating environmental conditions into organismal performance including accounting for the varied timescales of physiological processes, for responses to environmental fluctuations including the onset of stress and other thresholds, and for how environmental sensitivities vary across lifecycles. We call for the creation of phenotypic databases to parameterize forecasting models and advocate for improved sharing of model code and data for model testing. We conclude with challenges in organismal biology that must be solved to improve forecasts over the next decade.acclimation, biophysical models, ecological forecasting, extremes, microclimate, spatial and temporal variability.

  13. A complex valued radial basis function network for equalization of fast time varying channels.

    PubMed

    Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R

    1999-01-01

    This paper presents a complex valued radial basis function (RBF) network for equalization of fast time varying channels. A new method for calculating the centers of the RBF network is given. The method allows fixing the number of RBF centers even as the equalizer order is increased so that a good performance is obtained by a high-order RBF equalizer with small number of centers. Simulations are performed on time varying channels using a Rayleigh fading channel model to compare the performance of our RBF with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and a MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.

  14. A model-based analysis of extinction ratio effects on phase-OTDR distributed acoustic sensing system performance

    NASA Astrophysics Data System (ADS)

    Aktas, Metin; Maral, Hakan; Akgun, Toygar

    2018-02-01

    Extinction ratio is an inherent limiting factor that has a direct effect on the detection performance of phase-OTDR based distributed acoustics sensing systems. In this work we present a model based analysis of Rayleigh scattering to simulate the effects of extinction ratio on the received signal under varying signal acquisition scenarios and system parameters. These signal acquisition scenarios are constructed to represent typically observed cases such as multiple vibration sources cluttered around the target vibration source to be detected, continuous wave light sources with center frequency drift, varying fiber optic cable lengths and varying ADC bit resolutions. Results show that an insufficient ER can result in high optical noise floor and effectively hide the effects of elaborate system improvement efforts.

  15. Modeling vegetative filter performance with VFSMOD

    Treesearch

    Matthew J. Helmers; Dean E. Eisenhauer; Michael G. Dosskey; Thomas G. Franti

    2002-01-01

    The model VFSMOD was used to investigate the effect of varying watershed characteristics and buffer dimensions on the sediment trapping efficiency of vegetative filters. This investigation allows for a better understanding of how watershed characteristics, buffer dimensions, and storm characteristics impact the performance of vegetative filters. Using VFSMOD,...

  16. Generalized semiparametric varying-coefficient models for longitudinal data

    NASA Astrophysics Data System (ADS)

    Qi, Li

    In this dissertation, we investigate the generalized semiparametric varying-coefficient models for longitudinal data that can flexibly model three types of covariate effects: time-constant effects, time-varying effects, and covariate-varying effects, i.e., the covariate effects that depend on other possibly time-dependent exposure variables. First, we consider the model that assumes the time-varying effects are unspecified functions of time while the covariate-varying effects are parametric functions of an exposure variable specified up to a finite number of unknown parameters. The estimation procedures are developed using multivariate local linear smoothing and generalized weighted least squares estimation techniques. The asymptotic properties of the proposed estimators are established. The simulation studies show that the proposed methods have satisfactory finite sample performance. ACTG 244 clinical trial of HIV infected patients are applied to examine the effects of antiretroviral treatment switching before and after HIV developing the 215-mutation. Our analysis shows benefit of treatment switching before developing the 215-mutation. The proposed methods are also applied to the STEP study with MITT cases showing that they have broad applications in medical research.

  17. Bayesian dynamic modeling of time series of dengue disease case counts

    PubMed Central

    López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-01-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model’s short-term performance for predicting dengue cases. The methodology shows dynamic Poisson log link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and deviance information criterion statistic (DIC) for model selection. We assessed the short-term predictive performance of the selected final model, at several time points within the study period using the mean absolute percentage error. The results showed the best model including first-order random walk time-varying coefficients for calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one or two weeks out-of-sample predictions for most prediction points, associated with low volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. 
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941

  18. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.

  19. Stage-discharge relationship in tidal channels

    NASA Astrophysics Data System (ADS)

    Kearney, W. S.; Mariotti, G.; Deegan, L.; Fagherazzi, S.

    2016-12-01

    Long-term records of the flow of water through tidal channels are essential to constrain the budgets of sediments and biogeochemical compounds in salt marshes. Statistical models which relate discharge to water level allow the estimation of such records from more easily obtained records of water stage in the channel. While there is clearly structure in the stage-discharge relationship, nonlinearity and nonstationarity of the relationship complicates the construction of statistical stage-discharge models with adequate performance for discharge estimation and uncertainty quantification. Here we compare four different types of stage-discharge models, each of which is designed to capture different characteristics of the stage-discharge relationship. We estimate and validate each of these models on a two-month long time series of stage and discharge obtained with an Acoustic Doppler Current Profiler in a salt marsh channel. We find that the best performance is obtained by models which account for the nonlinear and time-varying nature of the stage-discharge relationship. Good performance can also be obtained from a simplified version of these models which approximates the fully nonlinear and time-varying models with a piecewise linear formulation.

  20. Numerical simulation of double front detonations in a non-ideal explosive with varying aluminum concentration

    NASA Astrophysics Data System (ADS)

    Kim, Wuhyun; Gwak, Min-Cheol; Yoh, Jack; Seoul National University Team

    2017-06-01

    The performance characteristics of aluminized HMX are considered by varying the aluminum (Al) concentration in a hybrid non-ideal detonation model. Two cardinal observations are reported: a decrease in detonation velocity with an increase in Al concentration and a double front detonation (DFD) feature when aerobic Al reaction occurs behind the front. While experimental studies have been reported on the effect of Al concentration on both gas-phase and solid-phase detonations, the numerical investigations were limited to only gas-phase detonation for the varying Al concentration. In the current study, a two-phase model is utilized for understanding the volumetric effects of Al concentration in the condensed phase detonations. A series of unconfined and confined rate sticks are considered for characterizing the performance of aluminized HMX with a maximum Al concentration of 50%. The simulated results are compared with the experimental data for 5%-25% concentrations, and the formation of DFD structure under varying Al concentration (0%-50%) in HMX is investigated.

  1. Static and Wind Tunnel Aero-Performance Tests of NASA AST Separate Flow Nozzle Noise Reduction Configurations

    NASA Technical Reports Server (NTRS)

    Mikkelsen, Kevin L.; McDonald, Timothy J.; Saiyed, Naseem (Technical Monitor)

    2001-01-01

    This report presents the results of cold flow model tests to determine the static and wind tunnel performance of several NASA AST separate flow nozzle noise reduction configurations. The tests were conducted by Aero Systems Engineering, Inc., for NASA Glenn Research Center. The tests were performed in the Channels 14 and 6 static thrust stands and the Channel 10 transonic wind tunnel at the FluiDyne Aerodynamics Laboratory in Plymouth, Minnesota. Facility checkout tests were made using standard ASME long-radius metering nozzles. These tests demonstrated facility data accuracy at flow conditions similar to the model tests. Channel 14 static tests reported here consisted of 21 ASME nozzle facility checkout tests and 57 static model performance tests (including 22 at no charge). Fan nozzle pressure ratio varied from 1.4 to 2.0, and fan to core total pressure ratio varied from 1.0 to 1.19. Core to fan total temperature ratio was 1.0. Channel 10 wind tunnel tests consisted of 15 tests at Mach number 0.28 and 31 tests at Mach 0.8. The sting was checked out statically in Channel 6 before the wind tunnel tests. In the Channel 6 facility, 12 ASME nozzle data points were taken and 7 model data points were taken. In the wind tunnel, fan nozzle pressure ratio varied from 1.73 to 2.8, and fan to core total pressure ratio varied from 1.0 to 1.19. Core to fan total temperature ratio was 1.0. Test results include thrust coefficients, thrust vector angle, core and fan nozzle discharge coefficients, total pressure and temperature charging station profiles, and boat-tail static pressure distributions in the wind tunnel.

  2. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.

  3. Personalized long-term prediction of cognitive function: Using sequential assessments to improve model performance.

    PubMed

    Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J

    2017-12-01

    Prediction of onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically accessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models, the first using a sequential estimation of risk factors originally obtained from 8 years prior, then improved by optimization. This model can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction. This model can predict when newly observed data are acquired in a follow-up visit. Performances of both models that are evaluated in10-fold cross-validation and various patient subgroups show supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, both data provide upper and lower bounds for predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Back Propagation Neural Network Model for Predicting the Performance of Immobilized Cell Biofilters Handling Gas-Phase Hydrogen Sulphide and Ammonia

    PubMed Central

    Rene, Eldon R.; López, M. Estefanía; Kim, Jung Hoon; Park, Hung Suck

    2013-01-01

    Lab scale studies were conducted to evaluate the performance of two simultaneously operated immobilized cell biofilters (ICBs) for removing hydrogen sulphide (H2S) and ammonia (NH3) from gas phase. The removal efficiencies (REs) of the biofilter treating H2S varied from 50 to 100% at inlet loading rates (ILRs) varying up to 13 g H2S/m3 ·h, while the NH3 biofilter showed REs ranging from 60 to 100% at ILRs varying between 0.5 and 5.5 g NH3/m3 ·h. An application of the back propagation neural network (BPNN) to predict the performance parameter, namely, RE (%) using this experimental data is presented in this paper. The input parameters to the network were unit flow (per min) and inlet concentrations (ppmv), respectively. The accuracy of BPNN-based model predictions were evaluated by providing the trained network topology with a test dataset and also by calculating the regression coefficient (R 2) values. The results from this predictive modeling work showed that BPNNs were able to predict the RE of both the ICBs efficiently. PMID:24307999

  5. Comparative study of transient hydraulic tomography with varying parameterizations and zonations: Laboratory sandbox investigation

    NASA Astrophysics Data System (ADS)

    Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.

    2017-11-01

    Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly-parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when head data used to constrain the inversion is limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models with varying accuracy and resolution, and five geostatistical models with different prior information are calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information are incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.

  6. Propulsive performance of pitching foils with variable chordwise flexibility

    NASA Astrophysics Data System (ADS)

    Zeyghami, Samane; Moored, Keith; Lehigh University Team

    2017-11-01

    Many swimming and flying animals propel themselves efficiently through water by oscillating flexible fins. These fins are not homogeneously flexible, but instead their flexural stiffness varies along their chord and span. Here we seek to evaluate the effect stiffness profile on the propulsive performance of pitching foils. Stiffness profile characterizes the variation in the local fin stiffness along the chord. To this aim, we developed a low order model of a functionally-graded material where the chordwise flexibility is modeled by two torsional springs along the chordline and the stiffness and location of the springs can be varied arbitrarily. The torsional spring structural model is then strongly coupled to a boundary element fluid model to simulate the fluid-structure interactions. Keeping the leading edge kinematics unchanged, we alter the stiffness profile of the foil and allow it to swim freely in response to the resulting hydrodynamic forces. We then detail the dependency of the hydrodynamic performance and the wake structure to the variations in the local structural properties of the foil.

  7. A Functional Varying-Coefficient Single-Index Model for Functional Response Data

    PubMed Central

    Li, Jialiang; Huang, Chao; Zhu, Hongtu

    2016-01-01

    Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer’s Disease Neuroimaging Initiative (ADNI) study. PMID:29200540

  8. A Functional Varying-Coefficient Single-Index Model for Functional Response Data.

    PubMed

    Li, Jialiang; Huang, Chao; Zhu, Hongtu

    2017-01-01

    Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer's Disease Neuroimaging Initiative (ADNI) study.

  9. Opportunistic Capacity-Based Resource Allocation for Chunk-Based Multi-Carrier Cognitive Radio Sensor Networks

    PubMed Central

    Huang, Jie; Zeng, Xiaoping; Jian, Xin; Tan, Xiaoheng; Zhang, Qi

    2017-01-01

    The spectrum allocation for cognitive radio sensor networks (CRSNs) has received considerable research attention under the assumption that the spectrum environment is static. However, in practice, the spectrum environment varies over time due to primary user/secondary user (PU/SU) activity and mobility, resulting in time-varied spectrum resources. This paper studies resource allocation for chunk-based multi-carrier CRSNs with time-varied spectrum resources. We present a novel opportunistic capacity model through a continuous time semi-Markov chain (CTSMC) to describe the time-varied spectrum resources of chunks and, based on this, a joint power and chunk allocation model by considering the opportunistically available capacity of chunks is proposed. To reduce the computational complexity, we split this model into two sub-problems and solve them via the Lagrangian dual method. Simulation results illustrate that the proposed opportunistic capacity-based resource allocation algorithm can achieve better performance compared with traditional algorithms when the spectrum environment is time-varied. PMID:28106803
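
The opportunistic-capacity idea above rests on modelling when a spectrum resource is available to secondary users. A minimal sketch of that ingredient, assuming a toy two-state ON/OFF channel with exponential holding times (a simple special case of a continuous-time semi-Markov chain; the holding-time means are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-state ON/OFF channel with exponential holding times (here a CTSMC
# reduces to a plain CTMC). Long-run availability = mean_on / (mean_on + mean_off).
mean_on, mean_off = 3.0, 1.0     # invented mean holding times
T_end = 10000.0

t, on_time, state = 0.0, 0.0, 1  # start in the ON state
while t < T_end:
    dwell = rng.exponential(mean_on if state else mean_off)
    dwell = min(dwell, T_end - t)          # clip the last sojourn
    if state:
        on_time += dwell
    t += dwell
    state = 1 - state                      # alternate ON <-> OFF

availability = on_time / T_end
print(f"simulated availability: {availability:.3f} "
      f"(theory: {mean_on / (mean_on + mean_off):.3f})")
```

The simulated long-run availability converges to mean_on / (mean_on + mean_off), i.e. the fraction of time a chunk can be used opportunistically.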

  10. Benchmarking hydrological model predictive capability for UK River flows and flood peaks.

    NASA Astrophysics Data System (ADS)

    Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten

    2017-04-01

    Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped, conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models and run at a daily timestep, but differ in the model structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over a 20 year period from 1988-2008, within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure and parameter set using standard performance metrics. These were calculated both for the whole time series and to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time-series and additionally the annual maximum prediction bounds for each catchment. The results show that the model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models are systematically failing to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and have explored what structural component or parameterisation enables certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment, by examining the ability of the models to produce discharge prediction bounds which successfully bound the observed discharge.
These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches to better represent different catchment and climate typologies.
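
The GLUE procedure described above can be sketched in a few lines, here with an invented one-parameter recession model standing in for FUSE, a Nash-Sutcliffe behavioural threshold of 0.7, and likelihood-weighted 5th/95th percentile bounds (all choices are illustrative assumptions, not the study's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "catchment": observed flow from a hidden exponential-recession model,
# plus observation noise (all values invented for illustration)
t = np.arange(100)
k_true = 0.35
obs = np.exp(-0.05 * k_true * t) + rng.normal(0, 0.01, t.size)

def simulate(k):
    return np.exp(-0.05 * k * t)

def nse(sim, observed):
    # Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean
    return 1 - np.sum((sim - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

# 1. Monte Carlo sampling of the parameter from a uniform prior
k_samples = rng.uniform(0.1, 1.0, 2000)
sims = np.array([simulate(k) for k in k_samples])
scores = np.array([nse(s, obs) for s in sims])

# 2. Keep only "behavioural" parameter sets above the performance threshold
behavioural = scores > 0.7
weights = scores[behavioural] - 0.7          # rescaled likelihood weights
weights = weights / weights.sum()

# 3. Likelihood-weighted 5th/95th percentile bounds at each time step
def weighted_quantile(values, q, w):
    order = np.argsort(values)
    cdf = np.cumsum(w[order])
    return values[order][np.searchsorted(cdf, q)]

bounds = np.array([
    [weighted_quantile(sims[behavioural, i], q, weights) for q in (0.05, 0.95)]
    for i in range(t.size)
])

coverage = np.mean((obs >= bounds[:, 0]) & (obs <= bounds[:, 1]))
print(f"{behavioural.sum()} behavioural sets; bounds cover {coverage:.0%} of observations")
```

With an adequate model structure the bounds mostly track the observations; in the study's setting, systematic gaps between the bounds and observed flow flag where a model structure is failing.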

  11. Enhancement of multimodality texture-based prediction models via optimization of PET and MR image acquisition protocols: a proof of concept

    NASA Astrophysics Data System (ADS)

    Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam

    2017-11-01

    Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice (‘span’). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 ± 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 ± 0.01.
Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.

  12. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE PAGES

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat; ...

    2016-02-26

    A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.

  13. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat

    A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.

  14. Finite-dimensional modeling of network-induced delays for real-time control systems

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Halevi, Yoram

    1988-01-01

    In integrated control systems (ICS), a feedback loop is closed by the common communication channel, which multiplexes digital data from the sensor to the controller and from the controller to the actuator along with the data traffic from other control loops and management functions. Due to asynchronous time-division multiplexing in the network access protocols, time-varying delays are introduced in the control loop, which degrade the system dynamic performance and are a potential source of instability. The delayed control system is represented by a finite-dimensional, time-varying, discrete-time model which is less complex than the existing continuous-time models for time-varying delays; this approach allows for simpler schemes for analysis and simulation of the ICS.
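
The augmentation idea, folding a bounded time-varying network delay into a finite-dimensional discrete-time state, can be sketched with a toy scalar plant (all numerical values below are invented for illustration; the paper's ICS model is more general):

```python
import numpy as np

# Toy plant x[k+1] = a*x[k] + b*u[k - d_k], with a network-induced delay
# d_k that varies randomly between 0 and d_max sampling periods.
a, b = 0.9, 0.5
d_max = 2
gain = -0.8                      # static state feedback u[k] = gain * x[k]

# Augment the state with the last d_max inputs, z = [x, u[k-1], u[k-2]],
# so the delayed loop becomes a finite-dimensional time-varying model.
def step(z, u, d):
    x, u_hist = z[0], z[1:]                    # u_hist[i] = u[k-1-i]
    u_applied = u if d == 0 else u_hist[d - 1]
    x_next = a * x + b * u_applied
    return np.concatenate(([x_next], [u], u_hist[:-1]))

rng = np.random.default_rng(1)
z = np.zeros(1 + d_max)
z[0] = 1.0                       # initial plant state
traj = []
for k in range(60):
    d_k = int(rng.integers(0, d_max + 1))      # delay drawn each sample
    u = gain * z[0]
    z = step(z, u, d_k)
    traj.append(z[0])

print(f"|x| after 60 steps: {abs(traj[-1]):.4f}")
```

Each fixed delay in {0, 1, 2} yields a stable closed loop with these mild values, but as the abstract notes, randomly varying delays degrade dynamic performance and can destabilize less forgiving designs.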

  15. Influence of Joint Angle on EMG-Torque Model During Constant-Posture, Torque-Varying Contractions.

    PubMed

    Liu, Pu; Liu, Lukai; Clancy, Edward A

    2015-11-01

    Relating the electromyogram (EMG) to joint torque is useful in various application areas, including prosthesis control, ergonomics and clinical biomechanics. Few studies have related EMG to torque across varied joint angles, particularly when subjects performed force-varying contractions or when optimized modeling methods were utilized. We related the biceps-triceps surface EMG of 22 subjects to elbow torque at six joint angles (spanning 60° to 135°) during constant-posture, torque-varying contractions. Three nonlinear EMGσ-torque models, advanced EMG amplitude (EMGσ) estimation processors (i.e., whitened, multiple-channel) and the duration of data used to train models were investigated. When EMG-torque models were formed separately for each of the six distinct joint angles, a minimum "gold standard" error of 4.01±1.2% MVC(F90) resulted (i.e., error relative to maximum voluntary contraction at 90° flexion). This model structure, however, did not directly facilitate interpolation across angles. The best model which did so achieved a statistically equivalent error of 4.06±1.2% MVC(F90). Results demonstrated that advanced EMGσ processors lead to improved joint torque estimation, as do longer model training durations.
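
As a rough illustration of the basic EMG-to-torque pipeline (not the study's whitened, multiple-channel EMGσ processors), the sketch below rectifies and smooths a synthetic amplitude-modulated signal and then fits a polynomial amplitude-to-torque model; all signal parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "EMG": zero-mean noise whose standard deviation tracks a slow,
# torque-varying profile (a crude stand-in for a real surface EMG recording)
fs = 1000                                            # sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
torque = 0.5 + 0.4 * np.sin(2 * np.pi * 0.3 * t)     # normalized torque profile
emg = rng.normal(0, 1, t.size) * (0.2 + torque)      # amplitude-modulated noise

# EMG amplitude estimate: rectify, then smooth with a moving average
win = 250                                            # 0.25 s smoothing window
kernel = np.ones(win) / win
emg_amp = np.convolve(np.abs(emg), kernel, mode="same")

# Fit a polynomial EMG-amplitude -> torque model on the first half of the
# record and evaluate it on the second half
half = t.size // 2
coef = np.polyfit(emg_amp[:half], torque[:half], deg=2)
pred = np.polyval(coef, emg_amp[half:])
rmse = np.sqrt(np.mean((pred - torque[half:]) ** 2))
print(f"validation RMSE: {rmse:.3f} (torque range 0.1-0.9)")
```

Whitening and combining multiple channels, as in the study, reduces the variance of the amplitude estimate and hence the torque-estimation error relative to this single-channel moving-average sketch.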

  16. A varying-coefficient method for analyzing longitudinal clinical trials data with nonignorable dropout

    PubMed Central

    Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane

    2011-01-01

    Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223

  17. A robust operational model for predicting where tropical cyclone waves damage coral reefs

    NASA Astrophysics Data System (ADS)

    Puotinen, Marji; Maynard, Jeffrey A.; Beeden, Roger; Radford, Ben; Williams, Gareth J.

    2016-05-01

    Tropical cyclone (TC) waves can severely damage coral reefs. Models that predict where to find such damage (the ‘damage zone’) enable reef managers to: 1) target management responses after major TCs in near-real time to promote recovery at severely damaged sites; and 2) identify spatial patterns in historic TC exposure to explain habitat condition trajectories. For damage models to meet these needs, they must be valid for TCs of varying intensity, circulation size and duration. Here, we map damage zones for 46 TCs that crossed Australia’s Great Barrier Reef from 1985-2015 using three models - including one we develop which extends the capability of the others. We ground truth model performance with field data of wave damage from seven TCs of varying characteristics. The model we develop (4MW) out-performed the other models at capturing all incidences of known damage. The next best performing model (AHF) both under-predicted and over-predicted damage for TCs of various types. 4MW and AHF produce strikingly different spatial and temporal patterns of damage potential when used to reconstruct past TCs from 1985-2015. The 4MW model greatly enhances both of the main capabilities TC damage models provide to managers, and is useful wherever TCs and coral reefs co-occur.

  18. A robust operational model for predicting where tropical cyclone waves damage coral reefs.

    PubMed

    Puotinen, Marji; Maynard, Jeffrey A; Beeden, Roger; Radford, Ben; Williams, Gareth J

    2016-05-17

    Tropical cyclone (TC) waves can severely damage coral reefs. Models that predict where to find such damage (the 'damage zone') enable reef managers to: 1) target management responses after major TCs in near-real time to promote recovery at severely damaged sites; and 2) identify spatial patterns in historic TC exposure to explain habitat condition trajectories. For damage models to meet these needs, they must be valid for TCs of varying intensity, circulation size and duration. Here, we map damage zones for 46 TCs that crossed Australia's Great Barrier Reef from 1985-2015 using three models - including one we develop which extends the capability of the others. We ground truth model performance with field data of wave damage from seven TCs of varying characteristics. The model we develop (4MW) out-performed the other models at capturing all incidences of known damage. The next best performing model (AHF) both under-predicted and over-predicted damage for TCs of various types. 4MW and AHF produce strikingly different spatial and temporal patterns of damage potential when used to reconstruct past TCs from 1985-2015. The 4MW model greatly enhances both of the main capabilities TC damage models provide to managers, and is useful wherever TCs and coral reefs co-occur.

  19. Flexible margin kinematics and vortex formation of Aurelia aurita and Robojelly.

    PubMed

    Villanueva, Alex; Vlachos, Pavlos; Priya, Shashank

    2014-01-01

    The development of a rowing jellyfish biomimetic robot termed "Robojelly" has led to the discovery of a passive flexible flap located between the flexion point and bell margin on Aurelia aurita. A comparative analysis of biomimetic robots showed that the presence of a passive flexible flap results in a significant increase in the swimming performance. In this work we further investigate this concept by developing varying flap geometries and comparing their kinematics with A. aurita. It was shown that the animal flap kinematics can be replicated with high fidelity using a passive structure, and that a flap with curved and tapered geometry gave the most biomimetic performance. A method for identifying the flap location was established by utilizing the bell curvature and the variation of curvature as a function of time. Flaps of constant cross-section and varying lengths were incorporated on the Robojelly to conduct a systematic study of the starting vortex circulation. Circulation was quantified using velocity field measurements obtained from planar Time Resolved Digital Particle Image Velocimetry (TRDPIV). The starting vortex circulation was scaled using a varying orifice model and a pitching panel model. The varying orifice model, which has been traditionally considered the better representation of jellyfish propulsion, did not appear to capture the scaling of the starting vortex. In contrast, the pitching panel representation appeared to better scale the governing flow physics and revealed a strong dependence on the flap kinematics and geometry. The results suggest that an alternative description should be considered for rowing jellyfish propulsion, using a pitching panel method instead of the traditional varying orifice model. Finally, the results show the importance of incorporating the entire bell geometry as a function of time in modeling rowing jellyfish propulsion.

  20. Benchmarking novel approaches for modelling species range dynamics

    PubMed Central

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.

    2016-01-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results confirm the clear merit of using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. 
We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305

  1. Benchmarking novel approaches for modelling species range dynamics.

    PubMed

    Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E

    2016-08-01

    Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results confirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. 
We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. © 2016 John Wiley & Sons Ltd.

  2. Infrared Radiography: Modeling X-ray Imaging without Harmful Radiation

    ERIC Educational Resources Information Center

    Zietz, Otto; Mylott, Elliot; Widenhorn, Ralf

    2015-01-01

    Planar x-ray imaging is a ubiquitous diagnostic tool and is routinely performed to diagnose conditions as varied as bone fractures and pneumonia. The underlying principle is that the varying attenuation coefficients of air, water, tissue, bone, or metal implants within the body result in non-uniform transmission of x-ray radiation. Through the…
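
The underlying attenuation physics is the Beer-Lambert law, I = I0·exp(−μx). A minimal numerical illustration, using rough assumed attenuation coefficients rather than tabulated reference values:

```python
import numpy as np

# Beer-Lambert law: I = I0 * exp(-mu * x). Contrast in a planar radiograph
# arises because materials attenuate differently. The mu values below are
# rough illustrative numbers (1/cm) at diagnostic energies, not reference data.
mu = {"air": 0.0001, "soft tissue": 0.2, "bone": 0.5}
thickness_cm = 2.0

transmission = {m: float(np.exp(-v * thickness_cm)) for m, v in mu.items()}
for material, frac in transmission.items():
    print(f"{material:12s} transmits {frac:.1%} of the beam through {thickness_cm} cm")
```

The ordering air > soft tissue > bone in transmitted intensity is exactly the non-uniform transmission that produces the familiar bright-bone radiographic image.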

  3. State Space Modeling of Time-Varying Contemporaneous and Lagged Relations in Connectivity Maps

    PubMed Central

    Molenaar, Peter C. M.; Beltz, Adriene M.; Gates, Kathleen M.; Wilson, Stephen J.

    2017-01-01

    Most connectivity mapping techniques for neuroimaging data assume stationarity (i.e., network parameters are constant across time), but this assumption does not always hold true. The authors provide a description of a new approach for simultaneously detecting time-varying (or dynamic) contemporaneous and lagged relations in brain connectivity maps. Specifically, they use a novel raw data likelihood estimation technique (involving a second-order extended Kalman filter/smoother embedded in a nonlinear optimizer) to determine the variances of the random walks associated with state space model parameters and their autoregressive components. The authors illustrate their approach with simulated and blood oxygen level-dependent functional magnetic resonance imaging data from 30 daily cigarette smokers performing a verbal working memory task, focusing on seven regions of interest (ROIs). Twelve participants had dynamic directed functional connectivity maps: Eleven had one or more time-varying contemporaneous ROI state loadings, and one had a time-varying autoregressive parameter. Compared to smokers without dynamic maps, smokers with dynamic maps performed the task with greater accuracy. Thus, accurate detection of dynamic brain processes is meaningfully related to behavior in a clinical sample. PMID:26546863
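
The core idea of tracking a time-varying relation with a random-walk state model can be illustrated with a plain scalar Kalman filter (a much-simplified stand-in for the authors' second-order extended Kalman filter/smoother; all noise variances and data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: y[t] = beta[t] * x[t] + noise, where the coefficient beta
# follows a random walk -- the time-varying relation to be recovered.
n = 300
x = rng.normal(0, 1, n)
beta = 1.0 + np.cumsum(rng.normal(0, 0.05, n))
y = beta * x + rng.normal(0, 0.3, n)

# Scalar Kalman filter: state = beta[t], observation "matrix" = x[t]
q, r = 0.05 ** 2, 0.3 ** 2       # random-walk / observation variances (assumed known)
b_hat, p = 0.0, 1.0
estimates = []
for i in range(n):
    p = p + q                                  # predict: random-walk state
    k_gain = p * x[i] / (x[i] ** 2 * p + r)    # Kalman gain for a scalar observation
    b_hat = b_hat + k_gain * (y[i] - x[i] * b_hat)
    p = (1 - k_gain * x[i]) * p
    estimates.append(b_hat)

rmse = np.sqrt(np.mean((np.array(estimates[50:]) - beta[50:]) ** 2))
print(f"tracking RMSE after burn-in: {rmse:.3f}")
```

In the study's setting the analogous random-walk variances are themselves estimated from the raw data likelihood, which is what flags which connectivity parameters are genuinely time-varying.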

  4. Stability over Time of Different Methods of Estimating School Performance

    ERIC Educational Resources Information Center

    Dumay, Xavier; Coe, Rob; Anumendem, Dickson Nkafu

    2014-01-01

    This paper aims to investigate how stability varies with the approach used in estimating school performance in a large sample of English primary schools. The results show that (a) raw performance is considerably more stable than adjusted performance, which in turn is slightly more stable than growth model estimates; (b) schools' performance…

  5. Combined risk assessment of nonstationary monthly water quality based on Markov chain and time-varying copula.

    PubMed

    Shi, Wei; Xia, Jun

    2017-02-01

    Water quality risk management is a topic of global research interest closely tied to sustainable water resource development. Ammonium nitrogen (NH3-N) and the permanganate index (CODMn), the focus indicators in the Huai River Basin, are selected to reveal their joint transition laws based on Markov theory. The time-varying moments model, with either time or a land cover index as the explanatory variable, is applied to build the time-varying marginal distributions of the water quality time series. A time-varying copula model, which takes into consideration the non-stationarity in the marginal distributions and/or the time variation in the dependence structure between the water quality series, is constructed to describe a bivariate frequency analysis for the NH3-N and CODMn series at the same monitoring gauge. The larger first-order Markov joint transition probability indicates that the water quality states Class Vw, Class IV and Class III will occur easily in the water body of Bengbu Sluice. Both the marginal distribution and copula models are nonstationary, and the explanatory variable time yields better performance than the land cover index in describing the non-stationarities in the marginal distributions. In modelling the dependence structure changes, the time-varying copula has a better fitting performance than a copula with a constant or time-trend dependence parameter. The largest synchronous encounter risk probability of NH3-N and CODMn simultaneously reaching Class V is 50.61%, while the asynchronous encounter risk probability is largest when NH3-N and CODMn are inferior to the Class V and Class IV water quality standards, respectively.
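
The first-order Markov ingredient above amounts to estimating a class-to-class transition probability matrix from a monthly sequence of water-quality classes. A self-contained sketch on synthetic class data (not the Huai River series; the matrix below is invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monthly water-quality classes (e.g. III, IV, V, worse-than-V)
# encoded 0..3, with an invented "true" transition matrix to simulate from.
true_P = np.array([
    [0.6, 0.3, 0.1, 0.0],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.3, 0.4, 0.2],
    [0.0, 0.2, 0.3, 0.5],
])

# Simulate a class sequence from the chain, then re-estimate P by counting
states = [0]
for _ in range(5000):
    states.append(int(rng.choice(4, p=true_P[states[-1]])))

counts = np.zeros((4, 4))
for s, s_next in zip(states[:-1], states[1:]):
    counts[s, s_next] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

print(np.round(P_hat, 2))
```

Each row of the estimated matrix sums to one; large diagonal or near-diagonal entries (as in the study's Bengbu Sluice results) mean poor water-quality states persist or recur easily.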

  6. CSMP (Continuous System Modeling Program) modeling of brushless DC motors

    NASA Astrophysics Data System (ADS)

    Thomas, S. M.

    1984-09-01

    Recent improvements in rare earth magnets have made it possible to construct strong, lightweight, high horsepower DC motors. This has occasioned a reassessment of electromechanical actuators as alternatives to comparable pneumatic and hydraulic systems for use in flight control actuators for tactical missiles. This thesis develops a low-order mathematical model for the simulation and analysis of brushless DC motor performance. This model is implemented in CSMP language. It is used to predict such motor performance curves as speed, current and power versus torque. Electronic commutation based on Hall effect sensor positional feedback is simulated. Steady state motor behavior is studied under both constant and variable air gap flux conditions. The variable flux takes two different forms. In the first case, the flux is varied as a simple sinusoid. In the second case, the flux is varied as the sum of a sinusoid and one of its harmonics.
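
The steady-state performance curves the thesis predicts (speed, current and power versus torque) follow from the standard DC motor relations V = I·R + k_e·ω and T = k_t·I. A minimal sketch with assumed motor parameters (not the thesis's CSMP model or its motor data):

```python
import numpy as np

# Standard steady-state brushless DC motor relations, with assumed parameters:
#   V = I*R + k_e*omega   (voltage balance)
#   T = k_t*I             (torque from current)
V, R = 24.0, 0.5            # supply voltage [V], winding resistance [ohm]
k_t = 0.05                  # torque constant [N*m/A]
k_e = 0.05                  # back-EMF constant [V*s/rad] (equals k_t in SI units)

torque = np.linspace(0.0, k_t * V / R, 50)   # from no load to stall
current = torque / k_t                       # current rises linearly with torque
omega = (V - current * R) / k_e              # speed falls linearly with torque
p_out = torque * omega                       # mechanical output power

# Peak output power occurs at half the stall torque
i_peak = int(np.argmax(p_out))
print(f"stall torque {torque[-1]:.2f} N*m, "
      f"peak power {p_out[i_peak]:.0f} W near T = {torque[i_peak]:.2f} N*m")
```

These constant-flux curves are the baseline case; the thesis's variable air-gap flux cases effectively make k_t and k_e vary with rotor position, bending the curves away from these straight lines.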

  7. Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.

    PubMed

    Dai, Tianjiao; Shete, Sanjay

    2016-08-30

    In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.

  8. Using finite element modelling and experimental methods to investigate planar coil sensor topologies for inductive measurement of displacement

    NASA Astrophysics Data System (ADS)

    Moreton, Gregory; Meydan, Turgut; Williams, Paul

    2018-04-01

Planar sensors are widely used because of their non-contact nature and small size profiles; however, only a few basic design types are generally considered. To develop planar coil designs, we performed extensive finite element modelling (FEM) and experimentation to understand the performance of different planar sensor topologies when used in inductive sensing, and we applied this approach to develop a novel displacement sensor. Models of different topologies with varying pitch values were analysed using the ANSYS Maxwell FEM package; furthermore, the models incorporated a movable soft magnetic amorphous ribbon element. The different models used in the FEM were then constructed and experimentally tested with topologies that included mesh, meander, square coil, and circular coil configurations. The sensors were used to detect the displacement of the amorphous ribbon. A LabView program controlled both the displacement stage and the impedance analyser, the latter capturing the varying inductance values with ribbon displacement. There was good correlation between the FEM models and the experimental data, confirming that the methodology described here offers an effective way to develop planar coil based sensors with improved performance.

  9. Advanced ion thruster research

    NASA Technical Reports Server (NTRS)

    Wilbur, P. J.

    1984-01-01

    A simple model describing the discharge chamber performance of high strength, cusped magnetic field ion thrusters is developed. The model is formulated in terms of the energy cost of producing ions in the discharge chamber and the fraction of ions produced in the discharge chamber that are extracted to form the ion beam. The accuracy of the model is verified experimentally in a series of tests wherein the discharge voltage, propellant, grid transparency to neutral atoms, beam diameter and discharge chamber wall temperature are varied. The model is exercised to demonstrate what variations in performance might be expected by varying discharge chamber parameters. The results of a study of xenon and argon orificed hollow cathodes are reported. These results suggest that a hollow cathode model developed from research conducted on mercury cathodes can also be applied to xenon and argon. Primary electron mean free paths observed in argon and xenon cathodes that are larger than those found in mercury cathodes are identified as a cause of performance differences between mercury and inert gas cathodes. Data required as inputs to the inert gas cathode model are presented so it can be used as an aid in cathode design.

  10. Development of a nonlocal convective mixing scheme with varying upward mixing rates for use in air quality and chemical transport models.

    PubMed

    Mihailović, Dragutin T; Alapaty, Kiran; Sakradzija, Mirjana

    2008-06-01

An asymmetrical convective non-local scheme (CON) with varying upward mixing rates is developed for simulating vertical turbulent mixing in the convective boundary layer in air quality and chemical transport models. The upward mixing rate from the surface layer is parameterized using the sensible heat flux and the friction and convective velocities. Upward mixing rates varying with height are scaled with the amount of turbulent kinetic energy in each layer, while the downward mixing rates are derived from mass conservation. This scheme transports mass out of the surface layer into the other layers less rapidly than other asymmetrical convective mixing schemes. In this paper, we studied the performance of the nonlocal convective mixing scheme with varying upward mixing rates in the atmospheric boundary layer and its impact on the concentrations of pollutants calculated with chemical and air-quality models. The scheme was additionally compared against a local eddy-diffusivity scheme (KSC). To examine the performance of the scheme, simulated and measured concentrations of NO(2) and nitrate wet deposition were compared for the year 2002, over the whole simulation domain of the chemical European Monitoring and Evaluation Programme Unified model (version UNI-ACID, rv2.0), in which both schemes were incorporated. Concentrations of NO(2) calculated with the CON scheme are in general higher (of the order of 15-20%) and closer to the observations than those obtained with the KSC scheme, and the same holds for nitrate wet deposition.
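The mass-conservation constraint on the downward rates can be illustrated with a toy column model: nonlocal upward fluxes leave the surface layer, and a local layer-by-layer downward cascade returns exactly what arrives from below and from above. This is only a sketch of the asymmetric-mixing idea, not the published CON parameterization; the layer masses and upward-rate profile are invented.

```python
import numpy as np

# Hypothetical column: equal layer air masses [kg m^-2] and an upward
# mixing profile out of the surface layer that decreases with height
m = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
n = m.size
up = np.array([0.0, 0.05, 0.04, 0.02, 0.01])   # flux layer 1 -> j [kg m^-2 s^-1]

# Downward rates from mass conservation: layer j passes down to j-1
# everything it receives from the surface layer and from layers above
down = np.zeros(n)                              # down[j]: flux layer j -> j-1
for j in range(n - 1, 0, -1):
    down[j] = up[j] + (down[j + 1] if j + 1 < n else 0.0)

def step(c, dt):
    """Advance tracer mixing ratios c by one forward-Euler time step."""
    flux_in = np.zeros(n)
    flux_out = np.zeros(n)
    for j in range(1, n):
        flux_out[0] += up[j] * c[0]     # nonlocal upward from surface layer
        flux_in[j] += up[j] * c[0]
        flux_out[j] += down[j] * c[j]   # local downward cascade
        flux_in[j - 1] += down[j] * c[j]
    return c + dt * (flux_in - flux_out) / m

c = np.array([1.0, 0.0, 0.0, 0.0, 0.0])         # tracer starts at the surface
mass0 = np.sum(m * c)
for _ in range(2000):
    c = step(c, dt=1.0)
mass_end = np.sum(m * c)
```

Because every outgoing flux reappears as an incoming flux somewhere, total tracer mass is conserved exactly, and the column relaxes toward a well-mixed state.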

  11. Estimation of Logistic Regression Models in Small Samples. A Simulation Study Using a Weakly Informative Default Prior Distribution

    ERIC Educational Resources Information Center

    Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel

    2012-01-01

    In this paper, we used simulations to compare the performance of classical and Bayesian estimations in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…

  12. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    PubMed

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.
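The brute-force estimation step can be sketched as a grid search over a hypothetical two-blur-parameter edge model, here a blurred step whose dark and light sides carry separate Gaussian blur scales. The functional form and all numbers below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
from math import erf, sqrt

def edge_profile(x, b_dark, b_light, s_dark, s_light):
    # Hypothetical two-blur edge model: the dark side (x < 0) and the
    # light side (x >= 0) of the step use separate Gaussian blur scales.
    sigma = np.where(x < 0.0, s_dark, s_light)
    cdf = np.array([0.5 * (1.0 + erf(xi / (si * sqrt(2.0))))
                    for xi, si in zip(x, sigma)])
    return b_dark + (b_light - b_dark) * cdf

# Synthetic "observed" edge with ground-truth blur scales 0.8 and 1.6
x = np.linspace(-5.0, 5.0, 101)
observed = edge_profile(x, 20.0, 180.0, 0.8, 1.6)

# Brute-force search for the blur pair producing the global-minimum RMSE
grid = np.arange(0.2, 2.6, 0.2)
rmse, s_dark_hat, s_light_hat = min(
    (np.sqrt(np.mean((edge_profile(x, 20.0, 180.0, sd, sl) - observed) ** 2)),
     sd, sl)
    for sd in grid for sl in grid)
```

The search recovers the generating pair because the error surface has its global minimum at the true parameters; in practice the estimated pairs over many brightness combinations would then feed the kriging interpolation step.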

  13. Mathematical Models of the Common-Source and Common-Gate Amplifiers using a Metal-Ferroelectric-Semiconductor Field effect Transistor

    NASA Technical Reports Server (NTRS)

    Hunt, Mitchell; Sayyah, Rana; Mitchell, Cody; Laws, Crystal; MacLeod, Todd C.; Ho, Fat D.

    2013-01-01

Mathematical models of the common-source and common-gate amplifiers using metal-ferroelectric-semiconductor field-effect transistors (MFSFETs) are developed in this paper. The models are compared against data collected with MFSFETs of varying channel lengths and widths, and circuit parameters such as biasing conditions are varied as well. Considerations are made for the capacitance formed by the ferroelectric layer present between the gate and substrate of the transistors. Comparisons between the modeled and measured data are presented in depth, as well as differences and advantages relative to the performance of each circuit using a conventional MOSFET.

  14. Optimization strategies for molecular dynamics programs on Cray computers and scalar work stations

    NASA Astrophysics Data System (ADS)

    Unekis, Michael J.; Rice, Betsy M.

    1994-12-01

We present results of timing runs and different optimization strategies for a prototype molecular dynamics program that simulates shock waves in a two-dimensional (2-D) model of a reactive energetic solid. The performance of the program may be improved substantially by simple changes to the Fortran or by employing various vendor-supplied compiler optimizations. The optimum strategy varies among the machines used and will vary depending upon the details of the program. The effect of various compiler options and vendor-supplied subroutine calls is demonstrated. Comparison is made between two scalar workstations (IBM RS/6000 Model 370 and Model 530) and several Cray supercomputers (X-MP/48, Y-MP8/128, and C-90/16256). We find that for a scientific application program dominated by sequential, scalar statements, a relatively inexpensive high-end workstation such as the IBM RS/6000 RISC series will outperform single-processor performance of the Cray X-MP/48 and perform competitively with single-processor performance of the Y-MP8/128 and C-90/16256.

  15. Ball Aerospace Advances in 35 K Cooling-The SB235E Cryocooler

    NASA Astrophysics Data System (ADS)

    Lock, J. S.; Glaister, D. S.; Gully, W.; Hendershott, P.; Marquardt, E.

    2008-03-01

This paper describes the design, development, testing, and performance of the Ball Aerospace & Technologies Corp. SB235E, a two-stage long-life space cryocooler optimized for two cooling loads. The SB235E model is designed to provide simultaneous cooling at 35 K (typically for HgCdTe detectors) and 85 K (typically for optics). The SB235E is a higher capacity derivative of the SB235. Initial testing of the SB235E has shown performance of 2.13 W at 35 K and 8.14 W at 85 K for 200 W of input power at a 289 K rejection temperature. These data equate to a Carnot efficiency of 0.175, nearly twice that of other published space cryocooler data. Qualification testing has been completed, including full performance mapping and vibration export. Performance maps with the cold-stage temperature varying from 20 K to 80 K and the mid-stage temperature varying from 85 K to 175 K are presented. Two engineering models of the SB235E are currently in build.

  16. The contribution of attentional lapses to individual differences in visual working memory capacity.

    PubMed

    Adam, Kirsten C S; Mance, Irida; Fukuda, Keisuke; Vogel, Edward K

    2015-08-01

    Attentional control and working memory capacity are important cognitive abilities that substantially vary between individuals. Although much is known about how attentional control and working memory capacity relate to each other and to constructs like fluid intelligence, little is known about how trial-by-trial fluctuations in attentional engagement impact trial-by-trial working memory performance. Here, we employ a novel whole-report memory task that allowed us to distinguish between varying levels of attentional engagement in humans performing a working memory task. By characterizing low-performance trials, we can distinguish between models in which working memory performance failures are caused by either (1) complete lapses of attention or (2) variations in attentional control. We found that performance failures increase with set-size and strongly predict working memory capacity. Performance variability was best modeled by an attentional control model of attention, not a lapse model. We examined neural signatures of performance failures by measuring EEG activity while participants performed the whole-report task. The number of items correctly recalled in the memory task was predicted by frontal theta power, with decreased frontal theta power associated with poor performance on the task. In addition, we found that poor performance was not explained by failures of sensory encoding; the P1/N1 response and ocular artifact rates were equivalent for high- and low-performance trials. In all, we propose that attentional lapses alone cannot explain individual differences in working memory performance. Instead, we find that graded fluctuations in attentional control better explain the trial-by-trial differences in working memory that we observe.

  17. The Contribution of Attentional Lapses to Individual Differences in Visual Working Memory Capacity

    PubMed Central

    Adam, Kirsten C. S.; Mance, Irida; Fukuda, Keisuke; Vogel, Edward K.

    2015-01-01

    Attentional control and working memory capacity are important cognitive abilities that substantially vary between individuals. Although much is known about how attentional control and working memory capacity relate to each other and to constructs like fluid intelligence, little is known about how trial-by-trial fluctuations in attentional engagement impact trial-by-trial working memory performance. Here, we employ a novel whole-report memory task that allowed us to distinguish between varying levels of attentional engagement in humans performing a working memory task. By characterizing low-performance trials, we can distinguish between models in which working memory performance failures are caused by either (1) complete lapses of attention or (2) variations in attentional control. We found that performance failures increase with set-size and strongly predict working memory capacity. Performance variability was best modeled by an attentional control model of attention, not a lapse model. We examined neural signatures of performance failures by measuring EEG activity while participants performed the whole-report task. The number of items correctly recalled in the memory task was predicted by frontal theta power, with decreased frontal theta power associated with poor performance on the task. In addition, we found that poor performance was not explained by failures of sensory encoding; the P1/N1 response and ocular artifact rates were equivalent for high- and low-performance trials. In all, we propose that attentional lapses alone cannot explain individual differences in working memory performance. Instead, we find that graded fluctuations in attentional control better explain the trial-by-trial differences in working memory that we observe. PMID:25811710

  18. Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.

    PubMed

    Ouyang, Yicun; Yin, Hujun

    2018-05-01

Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step models, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat each step of the multi-step task as an independent modeling problem. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
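The two baseline strategies the abstract contrasts, recursing a one-step model versus fitting an independent (direct) model per horizon, can be sketched on synthetic AR(2) data. The data-generating process, AR order, and horizon are assumptions for illustration; the VLM model itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(2) series: x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + noise
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + 0.1 * rng.standard_normal()

p, H = 2, 5                       # AR order, forecast horizon
train, test_start = x[:400], 400

def lagged(series, order, lead):
    # Rows: [x_{t}, ..., x_{t-order+1}] predicting x_{t+lead}
    X = np.column_stack([series[order - k - 1: len(series) - k - lead]
                         for k in range(order)])
    y = series[order + lead - 1:]
    return X, y

# Iterative: fit a one-step model, then recurse H times on its own output
X1, y1 = lagged(train, p, 1)
w_iter = np.linalg.lstsq(X1, y1, rcond=None)[0]
hist = list(train[-p:][::-1])     # most recent value first
iter_fc = []
for _ in range(H):
    nxt = w_iter @ np.array(hist[:p])
    iter_fc.append(nxt)
    hist.insert(0, nxt)

# Direct ("independent"): fit a separate linear model for each horizon h
direct_fc = []
for h in range(1, H + 1):
    Xh, yh = lagged(train, p, h)
    wh = np.linalg.lstsq(Xh, yh, rcond=None)[0]
    direct_fc.append(wh @ train[-p:][::-1])

truth = x[test_start:test_start + H]
mse_iter = np.mean((np.array(iter_fc) - truth) ** 2)
mse_direct = np.mean((np.array(direct_fc) - truth) ** 2)
```

The iterative forecaster feeds its own predictions back in (error accumulation), while the direct forecaster loses the dependency structure between horizons; the VLM mixture is motivated as a way to keep that within-horizon dependency.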

  19. Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.

    PubMed

    Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai

    2018-03-09

    Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December, 2017, for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, and assessed whether each study met the pre-defined inclusion criteria and also independently extracted information about model development and performance. We evaluated validation of the models by medians of the area under the receiver operating characteristic curve (AUC) or c-statistic and calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed from elderly patients with a moderately severe ischemic stroke, mainly in three high income countries. Sample sizes for each study ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of various predictors varied from one to 11. Internal validation was performed in 12 models with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two validation models of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation. 
Further models should be developed with internal and external validation in low and middle income countries.
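The pooling step can be sketched with a standard DerSimonian-Laird random-effects meta-analysis. The AUC values and standard errors below are hypothetical stand-ins, not numbers from the review.

```python
import numpy as np

def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling of study-level estimates;
    returns the pooled estimate and its 95% CI half-width."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    w = 1.0 / var                                   # fixed-effect weights
    mu_fe = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - mu_fe) ** 2)              # Cochran's Q
    df = est.size - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_re = 1.0 / (var + tau2)                       # random-effects weights
    mu_re = np.sum(w_re * est) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, 1.96 * se

# Hypothetical pair of external-validation AUCs with assumed standard errors
aucs = [0.74, 0.82]
ses = [0.03, 0.04]
pooled, half_width = pool_random_effects(aucs, np.square(ses))
```

The between-study variance tau2 widens the interval relative to a fixed-effect pool whenever the validations disagree by more than their sampling error.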

  20. Opinion formation in time-varying social networks: The case of the naming game

    NASA Astrophysics Data System (ADS)

    Maity, Suman Kalyan; Manoj, T. Venkat; Mukherjee, Animesh

    2012-09-01

    We study the dynamics of the naming game as an opinion formation model on time-varying social networks. This agent-based model captures the essential features of the agreement dynamics by means of a memory-based negotiation process. Our study focuses on the impact of time-varying properties of the social network of the agents on the naming game dynamics. In particular, we perform a computational exploration of this model using simulations on top of real networks. We investigate the outcomes of the dynamics on two different types of time-varying data: (1) the networks vary on a day-to-day basis and (2) the networks vary within very short intervals of time (20 sec). In the first case, we find that networks with strong community structure hinder the system from reaching global agreement; the evolution of the naming game in these networks maintains clusters of coexisting opinions indefinitely leading to metastability. In the second case, we investigate the evolution of the naming game in perfect synchronization with the time evolution of the underlying social network shedding new light on the traditional emergent properties of the game that differ largely from what has been reported in the existing literature.
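A minimal memory-based naming game illustrates the negotiation process the abstract refers to. The update rules below are the standard ones (speaker utters a word, success collapses both inventories, failure adds the word to the hearer); the static complete graph is an illustrative simplification of the time-varying networks studied in the paper.

```python
import random

def naming_game(adjacency, steps, seed=0):
    """Minimal naming game: a random speaker utters a random word from
    its inventory (inventing one if empty); on success both agents
    collapse to that word, on failure the hearer adds it."""
    rng = random.Random(seed)
    inventories = {node: set() for node in adjacency}
    next_word = 0
    for _ in range(steps):
        speaker = rng.choice(list(adjacency))
        hearer = rng.choice(adjacency[speaker])
        if not inventories[speaker]:
            inventories[speaker].add(next_word)   # invent a new name
            next_word += 1
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:
            inventories[speaker] = {word}         # success: both collapse
            inventories[hearer] = {word}
        else:
            inventories[hearer].add(word)         # failure: hearer learns it
    return inventories

# Complete graph of 10 agents: global agreement is reached quickly
nodes = list(range(10))
adj = {i: [j for j in nodes if j != i] for i in nodes}
final = naming_game(adj, steps=2000)
distinct = {frozenset(inv) for inv in final.values()}
```

On a well-mixed graph the dynamics collapse to a single shared name; the paper's point is that community structure or fast network rewiring can trap the system in coexisting-opinion states instead.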

  1. Using the power balance model to simulate cross-country skiing on varying terrain.

    PubMed

    Moxnes, John F; Sandbakk, Oyvind; Hausken, Kjell

    2014-01-01

The current study adapts the power balance model to simulate cross-country skiing on varying terrain. We assumed that the skier's locomotive power at a self-chosen pace is a function of speed, which is impacted by friction, incline, air drag, and mass. An elite male skier's position along the track during ski skating was simulated and compared with his experimental data. As input values in the model, air drag and friction were estimated from the literature based on the skier's mass, snow conditions, and speed. We regard the fit as good, since the difference in racing time between simulations and measurements was 2 seconds out of the 815-second racing time, with acceptable fit in both uphill and downhill terrain. Using this model, we estimated the influence of changes in various factors such as air drag, friction, and body mass on performance. In conclusion, the power balance model with locomotive power as a function of speed was found to be a valid tool for analyzing performance in cross-country skiing.
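A power-balance simulation of this kind can be sketched by integrating m·dv/dt = P(v)/v − gravity − friction − drag with forward Euler. All parameter values and the speed-dependent power curve below are invented for illustration, not the paper's fitted inputs.

```python
import numpy as np

# Hypothetical parameters (not the paper's fitted values)
m, g = 80.0, 9.81            # skier + equipment mass [kg], gravity [m/s^2]
mu = 0.04                    # ski-snow friction coefficient
cda = 0.55                   # drag area Cd*A [m^2]
rho = 1.2                    # air density [kg/m^3]

def power(v):
    # Assumed self-chosen locomotive power, tapering with speed [W]
    return max(0.0, 300.0 - 15.0 * v)

def simulate(incline_rad, v0=2.0, dt=0.05, t_end=60.0):
    """Forward-Euler integration of the power balance equation."""
    v = v0
    for _ in range(int(t_end / dt)):
        f_prop = power(v) / max(v, 0.1)              # propulsive force
        f_grav = m * g * np.sin(incline_rad)         # gravity along slope
        f_fric = mu * m * g * np.cos(incline_rad)    # snow friction
        f_drag = 0.5 * rho * cda * v * v             # aerodynamic drag
        v = max(0.1, v + dt * (f_prop - f_grav - f_fric - f_drag) / m)
    return v

v_flat = simulate(0.0)                # steady speed on flat terrain
v_uphill = simulate(np.radians(5.0))  # steady speed on a 5-degree climb
```

Integrating speed over the course profile gives position versus time, which is what the study compares against the skier's measured split times; sensitivity studies then just perturb mu, cda, or m.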

  2. Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization

    NASA Astrophysics Data System (ADS)

    Liu, Chuanming; Yao, Huajian

    2017-03-01

Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by the Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In the synthetic tests, where the input model has anomalies at different scales and the ray path distribution is uneven, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show features similar to the results of the traditional method. However, the new results contain more detailed structures and appear to better resolve the amplitude of anomalies. From both synthetic and real data tests we demonstrate that our new approach is useful for achieving spatially varying resolution in regions with heterogeneous ray path distribution.
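A smoothing prior of this flavor can be sketched by tying a node-dependent correlation length to ray density and assembling a nonstationary Gaussian covariance. The density-to-length mapping and all values are assumptions, and the Paciorek-Schervish covariance form is used here because it is positive semi-definite by construction; it is not necessarily the paper's exact choice.

```python
import numpy as np

# Hypothetical 1-D grid of model nodes with an uneven ray-density profile
x = np.linspace(0.0, 100.0, 21)             # node positions [km]
ray_density = 1.0 + 9.0 * np.exp(-((x - 50.0) / 15.0) ** 2)

# Assumed mapping: densely sampled regions get a short correlation length
# (weak smoothing), sparsely sampled regions a long one (strong smoothing)
L_max, L_min = 30.0, 5.0
scale = (ray_density - ray_density.min()) / np.ptp(ray_density)
L = L_max - (L_max - L_min) * scale

# Nonstationary squared-exponential prior covariance (Paciorek-Schervish)
sigma = 0.1                                  # prior model std [km/s]
d2 = (x[:, None] - x[None, :]) ** 2
s2 = L[:, None] ** 2 + L[None, :] ** 2
C = sigma ** 2 * np.sqrt(2 * L[:, None] * L[None, :] / s2) * np.exp(-d2 / s2)

eigmin = np.linalg.eigvalsh(C).min()         # should be (numerically) >= 0
```

The resulting C plays the role of the a priori model covariance in the regionalized inversion: long off-diagonal correlations smooth poorly sampled regions while well-sampled regions keep short-wavelength structure.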

  3. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    NASA Astrophysics Data System (ADS)

    Du, Bing; Ruan, Chun

With the increasing importance and pervasiveness of Internet services, providing performance guarantees under extreme overload has become a challenge for electronic commerce services. This paper describes a real-time optimization modeling and scheduling approach for performance guarantees on electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. A robust adaptive server model is subject to unknown additive load disturbances and uncertain model matching. Overload control techniques are based on adaptive admission control to achieve timing guarantees. We evaluate the performance of the model using a complex simulation that is subjected to varying model parameters and massive overload.

  4. Performance of DPSK with convolutional encoding on time-varying fading channels

    NASA Technical Reports Server (NTRS)

    Mui, S. Y.; Modestino, J. W.

    1977-01-01

    The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
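For the non-fading AWGN baseline, binary DPSK has the closed-form bit error probability P_b = ½·exp(−E_b/N_0), which a short Monte Carlo check reproduces. This sketch covers only the uncoded AWGN case, not the Rician and lognormal fading channels or the convolutional coding analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def dpsk_ber(ebn0_db, nbits=200_000):
    """Monte Carlo BER of binary DPSK over AWGN with differentially
    coherent detection (phase difference of adjacent received symbols)."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    bits = rng.integers(0, 2, nbits)
    # Differential encoding: the carrier phase flips when the bit is 1
    sym = np.cumprod(np.where(bits == 1, -1.0, 1.0))
    sym = np.concatenate(([1.0], sym))             # initial reference symbol
    n0 = 1.0 / ebn0                                # unit symbol energy: Eb = 1
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(sym.size)
                               + 1j * rng.standard_normal(sym.size))
    r = sym + noise
    # Decide from the sign of Re{r_k * conj(r_{k-1})}; negative -> bit 1
    metric = np.real(r[1:] * np.conj(r[:-1]))
    detected = (metric < 0).astype(int)
    return np.mean(detected != bits)

ber_sim = dpsk_ber(7.0)
ber_theory = 0.5 * np.exp(-10.0 ** 0.7)            # Eb/N0 = 7 dB
```

Fading analyses like the paper's then average this conditional error probability over the channel's amplitude distribution.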

  5. Principles of appendage design in robots and animals determining terradynamic performance on flowable ground.

    PubMed

    Qian, Feifei; Zhang, Tingnan; Korff, Wyatt; Umbanhowar, Paul B; Full, Robert J; Goldman, Daniel I

    2015-10-08

Natural substrates like sand, soil, leaf litter and snow vary widely in penetration resistance. To search for principles of appendage design in robots and animals that permit high performance on such flowable ground, we developed a ground control technique by which the penetration resistance of a dry granular substrate could be widely and rapidly varied. The approach was embodied in a device consisting of an air fluidized bed trackway in which a gentle upward flow of air through the granular material resulted in a decreased penetration resistance. As the volumetric air flow, Q, increased to the fluidization transition, the penetration resistance decreased to zero. Using a bio-inspired hexapedal robot as a physical model, we systematically studied how locomotor performance (average forward speed, v(x)) varied with ground penetration resistance and robot leg frequency. Average robot speed decreased with increasing Q, and decreased more rapidly for increasing leg frequency, ω. A universal scaling model revealed that the leg penetration ratio (foot pressure relative to penetration force per unit area per depth and leg length) determined v(x) for all ground penetration resistances and robot leg frequencies. To extend our result to include continuous variation of locomotor foot pressure, we used a resistive force theory based terradynamic approach to perform numerical simulations. The terradynamic model successfully predicted locomotor performance for low resistance granular states. Despite variation in morphology and gait, the performance of running lizards, geckos and crabs on flowable ground was also influenced by the leg penetration ratio. In summary, appendage designs that reduce foot pressure can passively maintain a minimal leg penetration ratio as the ground weakens, and consequently permit effective locomotion over a range of terradynamically challenging surfaces.

  6. Wax and wane of the cross-sectional momentum and contrarian effects: Evidence from the Chinese stock markets

    NASA Astrophysics Data System (ADS)

    Shi, Huai-Long; Zhou, Wei-Xing

    2017-11-01

This paper investigates the time-varying risk-premium relation of the Chinese stock markets within the framework of cross-sectional momentum and contrarian effects by adopting the Capital Asset Pricing Model and the Fama-French three-factor model. The evolving arbitrage opportunities are also studied by quantifying the performance of time-varying cross-sectional momentum and contrarian effects in the Chinese stock markets. The relation between the contrarian profitability and market condition factors that could characterize the investment context is also investigated. The results reveal that the risk-premium relation varies over time, and the arbitrage opportunities based on the contrarian portfolios wax and wane over time. The performance of contrarian portfolios is highly dependent on several market conditions. Periods with an upward trend of the market state, higher market volatility and liquidity, and lower macroeconomic uncertainty are related to higher contrarian profitability. These findings are consistent with the Adaptive Markets Hypothesis and have practical implications for market participants.
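The cross-sectional contrarian construction can be sketched on synthetic data: each period, rank assets by the previous period's return, go long the losers and short the winners. The mean-reverting return model and all parameters below are illustrative assumptions, not the paper's factor-model methodology.

```python
import numpy as np

rng = np.random.default_rng(3)

def contrarian_profit(returns, k):
    """Zero-cost contrarian portfolio: each period, long the k worst
    performers of the previous period and short the k best; returns the
    average next-period long-minus-short spread."""
    n_periods, _ = returns.shape
    spreads = []
    for t in range(1, n_periods):
        order = np.argsort(returns[t - 1])       # ascending past returns
        losers, winners = order[:k], order[-k:]
        spreads.append(returns[t, losers].mean()
                       - returns[t, winners].mean())
    return float(np.mean(spreads))

# Synthetic market with cross-sectional mean reversion (AR(1), rho < 0),
# a regime in which a contrarian strategy should profit on average
n_periods, n_stocks, rho = 400, 50, -0.3
r = np.zeros((n_periods, n_stocks))
for t in range(1, n_periods):
    r[t] = rho * r[t - 1] + 0.02 * rng.standard_normal(n_stocks)

profit = contrarian_profit(r, k=5)
```

Conditioning this spread on market-state variables (trend, volatility, liquidity) is then the kind of analysis the paper performs on real Chinese market data.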

  7. Investigating the effects of the fixed and varying dispersion parameters of Poisson-gamma models on empirical Bayes estimates.

    PubMed

    Lord, Dominique; Park, Peter Young-Jin

    2008-07-01

Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework due to the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate. This factor is estimated using the mean and variance functions of the NB model. Given recent findings that the dispersion parameter can depend upon the covariates of NB models, especially for traffic flow-only models, and can vary as a function of different time periods, there is a need to determine how these models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms as well as fixed and time-varying dispersion parameters affect the EB estimates. To accomplish the study objectives, several traffic flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter and (2) the generalized NB model (GNB) with a time-varying dispersion parameter, which is also dependent upon the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms. The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites using the EB method can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
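The EB combination itself is a simple weighted average. In the common Poisson-gamma formulation (a textbook sketch with invented numbers, not the paper's fitted models), the weight on the model prediction is w = φ/(φ + μ), so a larger dispersion (smaller φ) shifts weight toward the observed count; this is exactly why a covariate- or time-dependent φ changes which sites get flagged.

```python
def eb_estimate(mu, phi, observed):
    """Empirical Bayes long-term mean under a Poisson-gamma (NB) model.

    mu:       crash-prediction-model estimate (expected crashes)
    phi:      NB inverse dispersion parameter (Var = mu + mu^2 / phi)
    observed: observed crash count at the site
    """
    w = phi / (phi + mu)          # weight on the model estimate
    return w * mu + (1.0 - w) * observed, w

# Hypothetical site: the model predicts 2 crashes/yr, 9 were observed
eb_high_disp, w_high_disp = eb_estimate(mu=2.0, phi=3.0, observed=9.0)
eb_low_disp, w_low_disp = eb_estimate(mu=2.0, phi=10.0, observed=9.0)
```

With φ = 3 the EB estimate is 4.8 crashes/yr (w = 0.6); raising φ to 10 pulls the estimate toward the model prediction, illustrating how the estimated dispersion directly drives the weight factor the abstract discusses.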

  8. Finite-element simulation of ground-water flow in the vicinity of Yucca Mountain, Nevada-California

    USGS Publications Warehouse

    Czarnecki, J.B.; Waddell, R.K.

    1984-01-01

A finite-element model of the groundwater flow system in the vicinity of Yucca Mountain at the Nevada Test Site was developed using parameter estimation techniques. The model simulated steady-state ground-water flow occurring in tuffaceous, volcanic, and carbonate rocks, and alluvial aquifers. Hydraulic gradients in the modeled area range from 0.00001 for carbonate aquifers to 0.19 for barriers in tuffaceous rocks. Three model parameters were used in estimating transmissivity in six zones. Simulated hydraulic-head values range from about 1,200 m near Timber Mountain to about 300 m near Furnace Creek Ranch. Model residuals for simulated versus measured hydraulic heads range from -28.6 to 21.4 m; most are less than +/-7 m, indicating an acceptable representation of the hydrologic system by the model. Sensitivity analyses of the model's flux boundary condition variables were performed to assess the effect of varying boundary fluxes on the calculation of estimated model transmissivities. Varying the flux variables representing discharge at Franklin Lake and Furnace Creek Ranch has a greater effect than varying other flux variables. (Author's abstract)

  9. Wavelet analysis techniques applied to removing varying spectroscopic background in calibration model for pear sugar content

    NASA Astrophysics Data System (ADS)

    Liu, Yande; Ying, Yibin; Lu, Huishan; Fu, Xiaping

    2005-11-01

    A new method is proposed to eliminate varying background and noise simultaneously in multivariate calibration of Fourier transform near-infrared (FT-NIR) spectral signals. An ideal spectrum signal prototype was constructed based on the FT-NIR spectrum of fruit sugar content measurement. The performances of wavelet-based threshold de-noising approaches with different combinations of wavelet base functions were compared. Three families of wavelet base functions (Daubechies, Symlets, and Coiflets) were applied to evaluate the performance of those wavelet bases and threshold selection rules in a series of experiments. The experimental results show that the best de-noising performance is reached with the Daubechies 4 or Symlet 4 wavelet base functions. Based on the optimized parameters, wavelet regression models for the sugar content of pear were also developed and resulted in a smaller prediction error than a traditional Partial Least Squares Regression (PLSR) model.
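
The threshold de-noising step described above can be illustrated with a one-level wavelet transform plus soft thresholding; for brevity this sketch uses the Haar basis rather than the paper's Daubechies 4 or Symlet 4 bases:

```python
import math

def haar_denoise(signal, thresh):
    """One-level Haar wavelet soft-threshold de-noising.

    The Haar basis stands in for the Daubechies/Symlet bases used in
    the paper; the signal length must be even.
    """
    s2 = math.sqrt(2.0)
    approx = [(a + b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    # soft-threshold the detail coefficients (shrinks background/noise)
    detail = [math.copysign(max(abs(d) - thresh, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s2, (a - d) / s2])   # inverse Haar transform
    return out
```

In practice the transform is applied over several decomposition levels, and the choice of basis and threshold rule is what the paper's experiments compare.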

  10. Performance of Bootstrapping Approaches To Model Test Statistics and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Jonathan; Hancock, Gregory R.

    2001-01-01

    Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…

  11. Development of a hybrid modeling approach for predicting intensively managed Douglas-fir growth at multiple scales.

    Treesearch

    A. Weiskittel; D. Maguire; R. Monserud

    2007-01-01

    Hybrid models offer the opportunity to improve future growth projections by combining advantages of both empirical and process-based modeling approaches. Hybrid models have been constructed in several regions and their performance relative to a purely empirical approach has varied. A hybrid model was constructed for intensively managed Douglas-fir plantations in the...

  12. State space modeling of time-varying contemporaneous and lagged relations in connectivity maps.

    PubMed

    Molenaar, Peter C M; Beltz, Adriene M; Gates, Kathleen M; Wilson, Stephen J

    2016-01-15

    Most connectivity mapping techniques for neuroimaging data assume stationarity (i.e., network parameters are constant across time), but this assumption does not always hold true. The authors provide a description of a new approach for simultaneously detecting time-varying (or dynamic) contemporaneous and lagged relations in brain connectivity maps. Specifically, they use a novel raw data likelihood estimation technique (involving a second-order extended Kalman filter/smoother embedded in a nonlinear optimizer) to determine the variances of the random walks associated with state space model parameters and their autoregressive components. The authors illustrate their approach with simulated and blood oxygen level-dependent functional magnetic resonance imaging data from 30 daily cigarette smokers performing a verbal working memory task, focusing on seven regions of interest (ROIs). Twelve participants had dynamic directed functional connectivity maps: Eleven had one or more time-varying contemporaneous ROI state loadings, and one had a time-varying autoregressive parameter. Compared to smokers without dynamic maps, smokers with dynamic maps performed the task with greater accuracy. Thus, accurate detection of dynamic brain processes is meaningfully related to behavior in a clinical sample. Published by Elsevier Inc.

  13. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    NASA Astrophysics Data System (ADS)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture the smooth trends as well as track the abrupt changes of time-varying parameters simultaneously. A forward orthogonal least squares (FOLS) algorithm aided by mutual information criteria is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach provides highly time-dependent spectral resolution.

  14. Assessing effects of variation in global climate data sets on spatial predictions from climate envelope models

    USGS Publications Warehouse

    Romañach, Stephanie; Watling, James I.; Fletcher, Robert J.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.

    2014-01-01

    Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on the role of uncertainty in future conditions on model predictions, models also may be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.
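
The Cohen's kappa criterion used above to flag discrepant models can be computed directly from two binary presence/absence predictions; this is the generic textbook formula, not the study's code:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary (0/1) presence/absence maps."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    pa, pb = sum(a) / n, sum(b) / n                  # presence rates
    pe = pa * pb + (1 - pa) * (1 - pb)               # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa near 1 indicates agreement well beyond chance; the study found that models with low kappa produced the most discrepant spatial predictions between climate data sets.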

  15. Swimming Performance of Toy Robotic Fish

    NASA Astrophysics Data System (ADS)

    Petelina, Nina; Mendelson, Leah; Techet, Alexandra

    2015-11-01

    HEXBUG AquaBots™ are commercially available small robotic fish that come in a variety of ``species''. These models have varying caudal fin shapes and randomly varied modes of swimming including forward locomotion, diving, and turning. In this study, we assess the repeatability and performance of the HEXBUG swimming behaviors and discuss the use of these toys to develop experimental techniques and analysis methods to study live fish swimming. In order to determine whether these simple, affordable model fish can be a valid representation of live fish movement, two models, an angelfish and a shark, were studied using 2D Particle Image Velocimetry (PIV) and 3D Synthetic Aperture PIV. In a series of experiments, the robotic fish were either allowed to swim freely or towed in one direction at a constant speed. The resultant measurements of the caudal fin wake are compared to data from previous studies of a real fish and simplified flapping propulsors.

  16. Optical laboratory solution and error model simulation of a linear time-varying finite element equation

    NASA Technical Reports Server (NTRS)

    Taylor, B. K.; Casasent, D. P.

    1989-01-01

    The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.

  17. A simulation of cross-country skiing on varying terrain by using a mathematical power balance model

    PubMed Central

    Moxnes, John F; Sandbakk, Øyvind; Hausken, Kjell

    2013-01-01

    The current study simulated cross-country skiing on varying terrain by using a power balance model. By applying the hypothetical inductive deductive method, we compared the simulated position along the track with actual skiing on snow, and calculated the theoretical effect of friction and air drag on skiing performance. As input values in the model, air drag and friction were estimated from the literature, whereas the model included relationships between heart rate, metabolic rate, and work rate based on the treadmill roller-ski testing of an elite cross-country skier. We verified this procedure by testing four models of metabolic rate against experimental data on the treadmill. The experimental data corresponded well with the simulations, with the best fit when work rate was increased on uphill and decreased on downhill terrain. The simulations predicted that skiing time increases by 3%–4% when either friction or air drag increases by 10%. In conclusion, the power balance model was found to be a useful tool for predicting how various factors influence racing performance in cross-country skiing. PMID:24379718
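
The power balance in this study can be sketched by solving P = resistive force × v for the steady-state speed on a given incline; the parameter values below are generic placeholders, not the paper's fitted estimates:

```python
import math

def steady_speed(power, mass, incline, mu=0.037, cda=0.55, rho=1.2, g=9.81):
    """Steady-state speed (m/s) from the power balance
    P = m*g*(mu*cos(theta) + sin(theta))*v + 0.5*rho*CdA*v**3.

    mu (friction), cda (drag area), rho (air density) are illustrative.
    """
    theta = math.atan(incline)
    resist = mass * g * (mu * math.cos(theta) + math.sin(theta))
    lo, hi = 0.0, 30.0
    for _ in range(60):                 # bisection: P(v) is monotone in v
        v = 0.5 * (lo + hi)
        if resist * v + 0.5 * rho * cda * v ** 3 < power:
            lo = v
        else:
            hi = v
    return 0.5 * (lo + hi)
```

Raising mu or cda by 10% in such a model slows the predicted speed, consistent with the 3%-4% increase in skiing time reported above.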

  18. A simulation of cross-country skiing on varying terrain by using a mathematical power balance model.

    PubMed

    Moxnes, John F; Sandbakk, Oyvind; Hausken, Kjell

    2013-01-01

    The current study simulated cross-country skiing on varying terrain by using a power balance model. By applying the hypothetical inductive deductive method, we compared the simulated position along the track with actual skiing on snow, and calculated the theoretical effect of friction and air drag on skiing performance. As input values in the model, air drag and friction were estimated from the literature, whereas the model included relationships between heart rate, metabolic rate, and work rate based on the treadmill roller-ski testing of an elite cross-country skier. We verified this procedure by testing four models of metabolic rate against experimental data on the treadmill. The experimental data corresponded well with the simulations, with the best fit when work rate was increased on uphill and decreased on downhill terrain. The simulations predicted that skiing time increases by 3%-4% when either friction or air drag increases by 10%. In conclusion, the power balance model was found to be a useful tool for predicting how various factors influence racing performance in cross-country skiing.

  19. Disposable Electronic Cigarettes and Electronic Hookahs: Evaluation of Performance

    PubMed Central

    Williams, Monique; Ghai, Sanjay

    2015-01-01

    Introduction: The purpose of this study was to characterize the performance of disposable button-activated and disposable airflow-activated electronic cigarettes (EC) and electronic hookahs (EH). Methods: The airflow rate required to produce aerosol, pressure drop, and the aerosol absorbance at 420nm were measured during smoke-outs of 9 disposable products. Three units of each product were tested in these experiments. Results: The airflow rates required to produce aerosol and the aerosol absorbances were lower for button-activated models (3mL/s; 0.41–0.55 absorbance) than for airflow-activated models (7–17mL/s; 0.48–0.84 absorbance). Pressure drop was also lower across button-activated products (range = 6–12mm H2O) than airflow-activated products (range = 15–67mm H2O). For 25 of 27 units tested, airflow did not have to be increased during smoke-out to maintain aerosol production, unlike earlier generation models. Two brands had uniform performance characteristics for all parameters, while 3 had at least 1 product that did not function normally. While button-activated models lasted 200 puffs or fewer and EH airflow-activated models often lasted 400 puffs, none of the models produced as many puffs as advertised. Puff number was limited by battery life, which was shorter in button-activated models. Conclusion: The performance of disposable products was differentiated mainly by the way the aerosol was produced (button vs airflow-activated) rather than by product type (EC vs EH). Users needed to take harder drags on airflow-activated models. Performance varied within models, and battery life limited the number of puffs. Data suggest quality control in manufacturing varies among brands. PMID:25104117

  20. Performance Optimizing Multi-Objective Adaptive Control with Time-Varying Model Reference Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The problem is cast as a multi-objective optimal control. The control synthesis involves the design of a performance optimizing controller from a subset of control inputs. The effect of the performance optimizing controller is to introduce an uncertainty into the system that can degrade tracking of the reference model. An adaptive controller from the remaining control inputs is designed to reduce the effect of the uncertainty while maintaining a notion of performance optimization in the adaptive control system.

  1. A two-phase model for aluminized explosives on the ballistic and brisance performance

    NASA Astrophysics Data System (ADS)

    Kim, Wuhyun; Gwak, Min-cheol; Lee, Young-hun; Yoh, Jack J.

    2018-02-01

    The performance of aluminized high explosives is considered by varying the aluminum (Al) mass fraction in a heterogeneous mixture model. Since the time scales of the characteristic induction and combustion of high explosives and Al particles differ, the process of energy release behind the leading detonation wave front occurs over an extended period of time. For simulating the performance of aluminized explosives with varying Al mass fraction, HMX (octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) is considered as a base explosive when formulating the multiphase conservation laws of mass, momentum, and energy exchanges between the HMX product gases and Al particles. In the current study, a two-phase model is utilized in order to determine the effects of the Al mass fraction in a condensed phase explosive. First, two types of confined rate stick tests are considered to investigate the detonation velocity and the acceleration ability, which refers to the radial expansion velocity of the confinement shell. The simulation results of the confined rate stick test are compared with the experimental data for the Al mass fraction range of 0%-25%, and the optimal Al mass fraction is provided, which is consistent with the experimental observations. Additionally, a series of plate dent test simulations are conducted, the results of which show the same tendency as those of the experimental tests with varying Al mass fractions.

  2. Impact of multicollinearity on small sample hydrologic regression models

    NASA Astrophysics Data System (ADS)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
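
The VIF screening mentioned above has a closed form in the two-predictor case, VIF = 1/(1 - r^2); values above roughly 10 are a common multicollinearity warning sign (function name is ours):

```python
import math

def vif_two_predictors(x1, x2):
    """Variance inflation factor shared by two explanatory variables:
    VIF = 1 / (1 - r**2), where r is their sample correlation."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    v1 = sum((a - m1) ** 2 for a in x1)
    v2 = sum((b - m2) ** 2 for b in x2)
    r = cov / math.sqrt(v1 * v2)
    return 1.0 / (1.0 - r * r)
```

With more than two predictors, the VIF of each variable is computed from the R² of regressing it on all the others; the two-variable case above captures the inflation mechanism the study's Monte Carlo experiment varies.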

  3. The Effect of Visual Information on the Manual Approach and Landing

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1982-01-01

    The effect of visual information, in combination with basic display information, on approach performance was investigated. A pre-experimental model analysis was performed in terms of the optimal control model. The resulting aircraft approach performance predictions were compared with the results of a moving base simulator program. The results illustrate that the model provides a meaningful description of the visual (scene) perception process involved in the complex (multi-variable, time-varying) manual approach task with a useful predictive capability. The theoretical framework was shown to allow a straightforward investigation of the complex interaction of a variety of task variables.

  4. Modeling the Performance of Direct-Detection Doppler Lidar Systems in Real Atmospheres

    NASA Technical Reports Server (NTRS)

    McGill, Matthew J.; Hart, William D.; McKay, Jack A.; Spinhirne, James D.

    1999-01-01

    Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems has assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar systems: the double-edge and the multi-channel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only about 10-20% compared to nighttime performance, provided a proper solar filter is included in the instrument design.

  5. Modeling the performance of direct-detection Doppler lidar systems including cloud and solar background variability.

    PubMed

    McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D

    1999-10-20

    Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design.

  6. Frequency-scanning interferometry using a time-varying Kalman filter for dynamic tracking measurements.

    PubMed

    Jia, Xingyu; Liu, Zhigang; Tao, Long; Deng, Zhongwen

    2017-10-16

    Frequency scanning interferometry (FSI) with a single external cavity diode laser (ECDL) and time-invariant Kalman filtering is an effective technique for measuring the distance of a dynamic target. However, due to the hysteresis of the piezoelectric ceramic transducer (PZT) actuator in the ECDL, the optical frequency sweeps of the ECDL exhibit different behaviors, depending on whether the frequency is increasing or decreasing. Consequently, the model parameters of the Kalman filter are time varying in each iteration, which produces state estimation errors with time-invariant filtering. To address this, in this paper, a time-varying Kalman filter is proposed to model the instantaneous movement of a target relative to the different optical frequency tuning durations of the ECDL. The combination of the FSI method with the time-varying Kalman filter was theoretically analyzed, and the simulation and experimental results show that the proposed method greatly improves the performance of dynamic FSI measurements.
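
The time-varying filtering idea can be sketched with a scalar random-walk Kalman filter whose process-noise variance changes each step, e.g. alternating between up-sweep and down-sweep values; all names and values here are illustrative, not the paper's implementation:

```python
def kalman_track(measurements, q_schedule, r=0.25):
    """Scalar Kalman filter with a per-step process-noise variance.

    measurements -- distance observations z_k
    q_schedule   -- process-noise variance for each prediction step
                    (a time-varying model parameter)
    r            -- measurement-noise variance
    """
    x, p = measurements[0], 1.0         # initialize at the first measurement
    estimates = [x]
    for z, q in zip(measurements[1:], q_schedule):
        p = p + q                       # predict (random-walk motion model)
        k = p / (p + r)                 # Kalman gain
        x = x + k * (z - x)             # measurement update
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

A time-invariant filter fixes q for all steps; letting q follow the sweep schedule is the essence of the compensation the paper proposes.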

  7. GOCO05c: A New Combined Gravity Field Model Based on Full Normal Equations and Regionally Varying Weighting

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Pail, R.; Gruber, T.

    2017-05-01

    GOCO05c is a gravity field model computed as a combined solution of a satellite-only model and a global data set of gravity anomalies. It is resolved up to degree and order 720. It is the first model applying regionally varying weighting. Since this causes strong correlations among all gravity field parameters, the resulting full normal equation system with a size of 2 TB had to be solved rigorously by applying high-performance computing. GOCO05c is the first combined gravity field model independent of EGM2008 that contains GOCE data of the whole mission period. The performance of GOCO05c is externally validated by GNSS-levelling comparisons, orbit tests, and computation of the mean dynamic topography, achieving at least the quality of existing high-resolution models. Results show that the additional GOCE information is highly beneficial in insufficiently observed areas, and that due to the weighting scheme of individual data the spectral and spatial consistency of the model is significantly improved. Due to usage of fill-in data in specific regions, the model cannot be used for physical interpretations in these regions.

  8. Quantifying the effect of varying GHG's concentration in Regional Climate Models

    NASA Astrophysics Data System (ADS)

    López-Romero, Jose Maria; Jerez, Sonia; Palacios-Peña, Laura; José Gómez-Navarro, Juan; Jiménez-Guerrero, Pedro; Montavez, Juan Pedro

    2017-04-01

    Regional Climate Models (RCMs) are driven at the boundaries by Global Circulation Models (GCMs), and in the particular case of climate change projections, such simulations are forced by varying greenhouse gas (GHG) concentrations. In hindcast simulations driven by reanalysis products, the climate change signal is usually introduced in the assimilation process as well. An interesting question arising in this context is whether GHG concentrations have to be varied within the RCM itself, or whether they should be kept constant. Some groups keep the GHG concentrations constant under the assumption that information about the climate change signal is supplied through the boundaries; sometimes certain radiation parameterization schemes do not permit such changes. Other approaches vary these concentrations, arguing that this preserves physical coherence with respect to the driving conditions for the RCM. This work aims to shed light on this topic. To this end, several regional climate simulations with the WRF model for the 1954-2004 period were carried out using a Euro-CORDEX-compliant domain. A series of simulations with constant and variable GHGs was performed using both a GCM (ECHAM6-OM) and a reanalysis product (ERA-20C). Results indicate that there are noticeable differences when varying GHG concentrations are introduced within the RCM domain. The differences in 2-m temperature series between the experiments with varying and constant GHG concentrations strongly depend on the atmospheric conditions, with strong interannual variability; this suggests that short-term experiments are not recommended if the aim is to assess the role of varying GHGs. In addition, and consistently in both GCM- and reanalysis-driven experiments, the magnitude of the temperature trends, as well as the spatial pattern represented by the varying-GHGs experiment, is closer to the driving dataset than in experiments keeping the GHG concentrations constant. These results point towards the need to include varying GHG concentrations within the RCM itself when dynamically downscaling global datasets, in both GCM and hindcast simulations.

  9. Time-varying delays compensation algorithm for powertrain active damping of an electrified vehicle equipped with an axle motor during regenerative braking

    NASA Astrophysics Data System (ADS)

    Zhang, Junzhi; Li, Yutong; Lv, Chen; Gou, Jinfang; Yuan, Ye

    2017-03-01

    The flexibility of the electrified powertrain system has a negative effect on the cooperative control performance between regenerative and hydraulic braking and on the active damping control performance. Meanwhile, the connections among sensors, controllers, and actuators are realized via network communication, i.e., a controller area network (CAN), which introduces time-varying delays and deteriorates the performance of the closed-loop control systems. As such, the goal of this paper is to develop a control algorithm to cope with all these challenges. To this end, models of the stochastic network-induced time-varying delays, based on a real in-vehicle network topology, and of a flexible electrified powertrain were first built. To further enhance the performance of active damping and of the cooperative control of regenerative and hydraulic braking, a time-varying delay compensation algorithm for electrified powertrain active damping during regenerative braking was developed based on a predictive scheme. The augmented system is constructed and the H∞ performance is analyzed. Based on this analysis, the control gains are derived by solving a nonlinear minimization problem. Simulations and hardware-in-the-loop (HIL) tests were carried out to validate the effectiveness of the developed algorithm. The test results show that the active damping and cooperative control performances are enhanced significantly.

  10. Visual Predictive Check in Models with Time-Varying Input Function.

    PubMed

    Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio

    2015-11-01

    Nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research, as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that can evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example, when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC by taking into consideration a correlation term (the Mahalanobis or normalized Euclidean distance) that helps associate the correct IF with the individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
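
The pairing step of the refined VPC can be sketched as a nearest-neighbour search under a normalized Euclidean distance; the structure and names below are illustrative, not the authors' implementation:

```python
def nearest_input_function(sim_params, library):
    """Index of the library parameter vector closest to sim_params
    in normalized Euclidean distance (per-dimension SD scaling)."""
    dims, n = len(sim_params), len(library)
    mean = [sum(v[d] for v in library) / n for d in range(dims)]
    sd = [max((sum((v[d] - mean[d]) ** 2 for v in library) / n) ** 0.5, 1e-12)
          for d in range(dims)]

    def dist(v):
        return sum(((sim_params[d] - v[d]) / sd[d]) ** 2 for d in range(dims))

    return min(range(n), key=lambda i: dist(library[i]))
```

Each simulated parameter set is then paired with the individual IF whose parameters it most resembles, instead of with an arbitrarily drawn IF.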

  11. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  12. BIPAD: A web server for modeling bipartite sequence elements

    PubMed Central

    Bi, Chengpeng; Rogan, Peter K

    2006-01-01

    Background Many dimeric protein complexes bind cooperatively to families of bipartite nucleic acid sequence elements, which consist of pairs of conserved half-site sequences separated by intervening distances that vary among individual sites. Results We introduce the Bipad Server [1], a web interface to predict sequence elements embedded within unaligned sequences. Either a bipartite model, consisting of a pair of one-block position weight matrices (PWMs) with a gap distribution, or a single PWM matrix for contiguous single-block motifs may be produced. The Bipad program performs multiple local alignment by entropy minimization and cyclic refinement using a stochastic greedy search strategy. The best models are refined by maximizing incremental information contents among a set of potential models with varying half-site and gap lengths. Conclusion The web service generates positional weight matrices, identifies binding site motifs, graphically represents the set of discovered elements as a sequence logo, and depicts the gap distribution as a histogram. Server performance was evaluated by generating a collection of bipartite models for distinct DNA binding proteins. PMID:16503993
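
    A minimal sketch of scoring a bipartite element with two PWMs and a gap distribution; the additive log-odds scoring scheme and names here are an assumption for illustration, whereas BIPAD's actual search uses entropy minimization with stochastic greedy refinement.

```python
def pwm_score(pwm, seq):
    # pwm: one dict per position, mapping base -> log-odds score
    return sum(col[base] for col, base in zip(pwm, seq))

def best_bipartite_site(seq, pwm1, pwm2, gap_logp):
    """Scan a sequence for the best pair of half-sites separated by a
    variable-length gap; gap_logp maps allowed gap lengths to
    log-probabilities from the gap distribution."""
    w1, w2 = len(pwm1), len(pwm2)
    best = (float("-inf"), None)
    for i in range(len(seq) - w1 + 1):
        for gap, logp in gap_logp.items():
            j = i + w1 + gap
            if j + w2 <= len(seq):
                score = (pwm_score(pwm1, seq[i:i + w1]) + logp
                         + pwm_score(pwm2, seq[j:j + w2]))
                best = max(best, (score, (i, gap)))
    return best
```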

  13. Growth and food consumption by tiger muskellunge: Effects of temperature and ration level on bioenergetic model predictions

    USGS Publications Warehouse

    Chipps, S.R.; Einfalt, L.M.; Wahl, David H.

    2000-01-01

    We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max)) and water temperature (7.5-25°C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5°C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25°C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with feeding and growth rate of tiger muskellunge. The model performed poorly for fish fed low rations compared with estimates based on fish fed ad libitum rations and was attributed, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, thereby implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.

  14. Assessment of a surface-layer parameterization scheme in an atmospheric model for varying meteorological conditions

    NASA Astrophysics Data System (ADS)

    Anurose, T. J.; Bala Subrahamanyam, D.

    2014-06-01

    The performance of a surface-layer parameterization scheme in a high-resolution regional model (HRM) is evaluated by comparing the model-simulated sensible heat flux (H) with concurrent in situ measurements recorded at Thiruvananthapuram (8.5° N, 76.9° E), a coastal station in India. With a view to examining the role of atmospheric stability in conjunction with the roughness lengths in the determination of the heat exchange coefficient (CH) and H under varying meteorological conditions, the model simulations are repeated by assigning different values to the ratio of momentum and thermal roughness lengths (i.e. z0m/z0h) in three distinct configurations of the surface-layer scheme designed for the present study. These three configurations show differential behaviour under the varying meteorological conditions, which is attributed to the sensitivity of CH to the bulk Richardson number (RiB) under extremely unstable, near-neutral and stable stratification of the atmosphere.

  15. Sensor trustworthiness in uncertain time varying stochastic environments

    NASA Astrophysics Data System (ADS)

    Verma, Ajay; Fernandes, Ronald; Vadakkeveedu, Kalyan

    2011-06-01

    Persistent surveillance applications require unattended sensors deployed in remote regions to track and monitor some physical stimulus of interest that can be modeled as the output of a time-varying stochastic process. However, the accuracy or trustworthiness of the information received through a remote, unattended sensor or sensor network cannot be readily assumed, since sensors may become disabled, corrupted, or even compromised, resulting in unreliable information. The aim of this paper is to develop an information-theory-based metric to determine sensor trustworthiness from the sensor data in an uncertain and time-varying stochastic environment. We show an information-theory-based determination of sensor data trustworthiness using an adaptive stochastic reference sensor model that tracks the sensor performance for the time-varying physical feature and provides a baseline model used to compare and analyze the observed sensor output. We present an approach in which relative entropy is used for reference model adaptation and for determining the divergence of the sensor signal from the estimated reference baseline. We show that KL-divergence is a useful metric that can be successfully applied to the detection of sensor failures or sensor malice of various types.
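
    The divergence test described above can be sketched as follows, assuming discrete histograms for the reference and observed sensor outputs; the threshold rule and function names are illustrative.

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(P||Q) between two discrete distributions
    given as equal-length lists of probabilities."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def is_trustworthy(reference, observed, threshold):
    # Flag the sensor as untrustworthy when its observed output histogram
    # diverges too far from the adaptive reference baseline.
    return kl_divergence(observed, reference) <= threshold
```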

  16. Neural network submodel as an abstraction tool: relating network performance to combat outcome

    NASA Astrophysics Data System (ADS)

    Jablunovsky, Greg; Dorman, Clark; Yaworsky, Paul S.

    2000-06-01

    Simulation of Command and Control (C2) networks has historically emphasized individual system performance with little architectural context or credible linkage to 'bottom-line' measures of combat outcomes. Renewed interest in modeling C2 effects and relationships stems from emerging network intensive operational concepts. This demands improved methods to span the analytical hierarchy between C2 system performance models and theater-level models. Neural network technology offers a modeling approach that can abstract the essential behavior of higher resolution C2 models within a campaign simulation. The proposed methodology uses off-line learning of the relationships between network state and campaign-impacting performance of a complex C2 architecture and then approximation of that performance as a time-varying parameter in an aggregated simulation. Ultimately, this abstraction tool offers an increased fidelity of C2 system simulation that captures dynamic network dependencies within a campaign context.

  17. Estimating thermal performance curves from repeated field observations

    USGS Publications Warehouse

    Childress, Evan; Letcher, Benjamin H.

    2017-01-01

    Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
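
    As a sketch of what a thermal performance curve looks like, here is one common Gaussian-shaped parameterization; the paper's fitted functional form, parameter names, and hierarchical structure may well differ.

```python
import math

def thermal_performance(temp, p_max, t_opt, width):
    # Gaussian-shaped thermal performance curve: performance peaks at the
    # thermal optimum t_opt and falls off symmetrically with temperature.
    # (An illustrative parameterization, not necessarily the authors'.)
    return p_max * math.exp(-((temp - t_opt) / width) ** 2)
```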

  18. Electrochemical energy storage subsystems study, volume 1

    NASA Technical Reports Server (NTRS)

    Miller, F. Q.; Richardson, P. W.; Graff, C. L.; Jordan, M. V.; Patterson, V. L.

    1981-01-01

    The effects on life cycle costs (LCC) of major design and performance technology parameters for multi kW LEO and GEO energy storage subsystems using NiCd and NiH2 batteries and fuel cell/electrolysis cell devices were examined. Design, performance and LCC dynamic models are developed based on mission and system/subsystem requirements and existing or derived physical and cost data relationships. The models define baseline designs and costs. The major design and performance parameters are each varied to determine their influence on LCC around the baseline values.

  19. Electrochemical Energy Storage Subsystems Study, Volume 2

    NASA Technical Reports Server (NTRS)

    Miller, F. Q.; Richardson, P. W.; Graff, C. L.; Jordan, M. V.; Patterson, V. L.

    1981-01-01

    The effects on life cycle costs (LCC) of major design and performance technology parameters for multi kW LEO and GEO energy storage subsystems using NiCd and NiH2 batteries and fuel cell/electrolysis cell devices were examined. Design, performance and LCC dynamic models are developed based on mission and system/subsystem requirements and existing or derived physical and cost data relationships. The models are exercised to define baseline designs and costs. Then the major design and performance parameters are each varied to determine their influence on LCC around the baseline values.

  20. SEIR Model of Rumor Spreading in Online Social Network with Varying Total Population Size

    NASA Astrophysics Data System (ADS)

    Dong, Suyalatu; Deng, Yan-Bin; Huang, Yong-Chang

    2017-10-01

    Based on the infectious disease model with disease latency, this paper proposes a new model for the rumor spreading process in online social networks. We establish an SEIR rumor spreading model to describe an online social network with a varying total number of users and a user deactivation rate. We calculate the exact equilibrium points and the reproduction number for this model. Furthermore, we simulate the rumor spreading process in an online social network with increasing population size based on the original real-world Facebook network. The simulation results indicate that the SEIR model of rumor spreading in an online social network with a changing total number of users can accurately reveal the inherent characteristics of the rumor spreading process. Supported by National Natural Science Foundation of China under Grant Nos. 11275017 and 11173028
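
    SEIR dynamics with user influx and deactivation can be sketched with a simple Euler step; the rate names and functional forms below are generic textbook assumptions, not the paper's exact equations.

```python
def seir_step(s, e, i, r, beta, sigma, gamma, recruit, mu, dt):
    """One Euler step of an SEIR rumor model with a varying total population:
    `recruit` new users join per unit time and every compartment loses users
    at deactivation rate mu. beta = contact rate, sigma = latency exit rate,
    gamma = stifling (recovery) rate."""
    n = s + e + i + r
    ds = recruit - beta * s * i / n - mu * s
    de = beta * s * i / n - sigma * e - mu * e
    di = sigma * e - gamma * i - mu * i
    dr = gamma * i - mu * r
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt
```

    Iterating this step traces the rumor through a growing user base whenever recruitment outpaces deactivation.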

  1. The manual control of vehicles undergoing slow transitions in dynamic characteristics

    NASA Technical Reports Server (NTRS)

    Moriarty, T. E.

    1974-01-01

    The manual control of a vehicle with slowly time-varying dynamics was studied to develop the analytic and computer techniques necessary for the study of time-varying systems. The human operator is considered as he controls a time-varying plant in which the changes are neither abrupt nor so slow that the time variations are unimportant. An experiment in which pilots controlled the longitudinal mode of a simulated time-varying aircraft is described. The vehicle changed from a pure double integrator to a damped second-order system, either instantaneously or smoothly over time intervals of 30, 75, or 120 seconds. The regulator task consisted of trying to null the error term resulting from injected random disturbances with bandwidths of 0.8, 1.4, and 2.0 radians per second. Each of the twelve experimental conditions was replicated ten times. It is shown that the pilot's performance in the time-varying task is essentially equivalent to his performance in stationary tasks which correspond to various points in the transition. A rudimentary model for the pilot-vehicle-regulator is presented.
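
    The smooth transition between plant dynamics can be sketched as a time-varying damping coefficient; the linear ramp here is an illustrative assumption, since the abstract does not specify the exact transition profile.

```python
def transition_damping(t, t0, duration, c_final):
    """Damping coefficient ramping from 0 (pure double integrator) to
    c_final (damped second-order plant) over `duration` seconds,
    starting at t0. duration = 0 would model the instantaneous change."""
    if t <= t0:
        return 0.0
    if t >= t0 + duration:
        return c_final
    return c_final * (t - t0) / duration
```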

  2. Microeconomics of advanced process window control for 50-nm gates

    NASA Astrophysics Data System (ADS)

    Monahan, Kevin M.; Chen, Xuemei; Falessi, Georges; Garvin, Craig; Hankinson, Matt; Lev, Amir; Levy, Ady; Slessor, Michael D.

    2002-07-01

    Fundamentally, advanced process control enables accelerated design-rule reduction, but simple microeconomic models that directly link the effects of advanced process control to profitability are rare or non-existent. In this work, we derive these links using a simplified model for the rate of profit generated by the semiconductor manufacturing process. We use it to explain why and how microprocessor manufacturers strive to avoid commoditization by producing only the number of dies required to satisfy the time-varying demand in each performance segment. This strategy is realized using the tactic known as speed binning, the deliberate creation of an unnatural distribution of microprocessor performance that varies according to market demand. We show that the ability of APC to achieve these economic objectives may be limited by variability in the larger manufacturing context, including measurement delays and process window variation.

  3. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.
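
    The idea of representing a smooth time-varying coefficient as a basis expansion can be sketched as below; a plain polynomial basis stands in for the spline-plus-random-effects representation actually used by DPQL, and all names are illustrative.

```python
def varying_coefficient(t, coefs):
    """Evaluate a smooth coefficient function beta(t) represented as a
    basis expansion (here a simple polynomial basis: coefs[k] * t**k)."""
    return sum(c * t ** k for k, c in enumerate(coefs))

def linear_predictor_term(x, t, coefs):
    # Contribution of covariate x to the linear predictor when its effect
    # varies with time: beta(t) * x.
    return varying_coefficient(t, coefs) * x
```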

  4. A Comparison of Evaluation Metrics for Biomedical Journals, Articles, and Websites in Terms of Sensitivity to Topic

    PubMed Central

    Fu, Lawrence D.; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F.

    2011-01-01

    Evaluating the biomedical literature and health-related websites for quality are challenging information retrieval tasks. Current commonly used methods include impact factor for journals, PubMed’s clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic adjusted metrics and other topic robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations. PMID:21419864

  5. Using Video Modeling with Substitutable Loops to Teach Varied Play to Children with Autism

    ERIC Educational Resources Information Center

    Dupere, Sally; MacDonald, Rebecca P. F.; Ahearn, William H.

    2013-01-01

    Children with autism often engage in repetitive play with little variation in the actions performed or items used. This study examined the use of video modeling with scripted substitutable loops on children's pretend play with trained and untrained characters. Three young children with autism were shown a video model of scripted toy play that…

  6. Use of single-well simulators and economic performance criteria to optimize fracturing treatment design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, R.W.; Phillips, A.M.

    1990-02-01

    Low-permeability reservoirs are currently being propped with sand, resin-coated sand, intermediate-density proppants, and bauxite. This wide range of proppant cost and performance has resulted in the proliferation of proppant selection models. Initially, a rather vague relationship between well depth and proppant strength dictated the choice of proppant. More recently, computerized models of varying complexity that use net-present-value (NPV) calculations have become available. The input is based on the operator's performance goals for each well and specific reservoir properties. Simpler, noncomputerized approaches include cost/performance comparisons and nomographs. Each type of model, including several of the computerized models, is examined here. By use of these models and NPV calculations, optimum fracturing treatment designs have been developed for such low-permeability reservoirs as the Prue in Oklahoma. Typical well conditions are used in each of the selection models, and the results are compared.

  7. United States Air Force Summer Faculty Research Program (1983). Technical Report. Volume 2

    DTIC Science & Technology

    1983-12-01

    filters are given below: (1) Inverse filter - based on the model given in Eq. (2) and the criterion of minimizing the norm (i.e., power) of the...and compared based on their performances in machine classification under a variety of blur and noise conditions. These filters are analyzed to...criteria based on various assumptions of the image models. In practice, filter performance varies with the type of image, the blur, and the noise conditions

  8. Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs

    NASA Astrophysics Data System (ADS)

    Harvey, David Benjamin Paul

    A one-dimensional multi-scale coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the 5 layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically-based experimental performance data; this model represents the first stochastic input driven unit cell performance model. The stochastic input driven performance model was used to identify optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potential low performing MEA materials, provide explanation for the performance of low-Pt loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.

  9. Robust Control Systems.

    DTIC Science & Technology

    1981-12-01

    time control system algorithms that will perform adequately (i.e., at least maintain closed-loop system stability) when uncertain parameters in the...system design models vary significantly. Such a control algorithm is said to have stability robustness, or more simply is said to be "robust". This...cases above, the performance is analyzed using a covariance analysis. The development of all the controllers and the performance analysis algorithms is

  10. The effects of klapskate hinge position on push-off performance: a simulation study.

    PubMed

    Houdijk, Han; Bobbert, Maarten F; De Koning, Jos J; De Groot, Gert

    2003-12-01

    The introduction of the klapskate in speed skating confronts skaters with the question of how to adjust the position of the hinge in order to maximize performance. The purpose of this study was to reveal the constraint that klapskate hinge position imposes on push-off performance in speed skating. For this purpose, a model of the musculoskeletal system was designed to simulate a simplified, two-dimensional skating push-off. To capture the essence of a skating push-off, this model performed a one-leg vertical jump, from a frictionless surface, while keeping its trunk horizontal. In this model, klapskate hinge position was varied by varying the length of the foot segment between 115 and 300 mm. With each foot length, an optimal control solution was found that resulted in the maximal amount of vertical kinetic and potential energy of the body's center of mass at take-off (Weff). Foot length was shown to considerably affect push-off performance. Maximal Weff was obtained with a foot length of 185 mm and decreased by approximately 25% at foot lengths of either 115 mm or 300 mm. The reason for this decrease was that foot length affected the onset and control of foot rotation. This resulted in a distortion of the pattern of leg segment rotations and affected muscle work (Wmus) and the efficacy ratio (Weff/Wmus) of the entire leg system. Despite its simplicity, the model described and explained very well the effects of klapskate hinge position on push-off performance that have been observed in speed-skating experiments. The simplicity of the model, however, does not allow quantitative analyses of optimal klapskate hinge position for speed-skating practice.

  11. Does a pneumotach accurately characterize voice function?

    NASA Astrophysics Data System (ADS)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, in order to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted. The authors acknowledge the support of NIH Grant 2R01DC005642-10A1.

  12. LPV Modeling and Control for Active Flutter Suppression of a Smart Airfoil

    NASA Technical Reports Server (NTRS)

    Al-Hajjar, Ali M. H.; Al-Jiboory, Ali Khudhair; Swei, Sean Shan-Min; Zhu, Guoming

    2018-01-01

    In this paper, a novel technique for linear parameter-varying (LPV) modeling and control of a smart airfoil for active flutter suppression is proposed, where the smart airfoil has a groove along its chord and contains a moving mass that is used to control the airfoil pitching and plunging motions. The new LPV modeling technique uses the mass position as a scheduling parameter to describe the physical constraint of the moving mass; in addition, the hard constraint at the boundaries is realized by proper selection of the parameter-varying function. Therefore, the position of the moving mass and the free-stream airspeed are the scheduling parameters in this study. A state-feedback LPV gain-scheduling controller with guaranteed H-infinity performance is presented, utilizing the dynamics of the moving mass as a scheduling parameter at a given airspeed. Numerical simulations demonstrate the effectiveness of the proposed LPV control architecture, which significantly improves performance while reducing control effort.
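
    Gain scheduling on a bounded parameter can be sketched by interpolating between vertex gains, with clamping at the parameter boundaries; this is a deliberate simplification of the LMI-based H-infinity synthesis such papers typically use, and the names are illustrative.

```python
def scheduled_gain(rho, rho_min, rho_max, k_min, k_max):
    """State-feedback gain scheduled on parameter rho: linear interpolation
    between vertex gains at the scheduling-parameter extremes, with the
    hard constraint at the boundaries enforced by clamping."""
    a = (rho - rho_min) / (rho_max - rho_min)
    a = min(max(a, 0.0), 1.0)  # keep the parameter inside its physical range
    return [(1 - a) * km + a * kM for km, kM in zip(k_min, k_max)]
```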

  13. Evaluating Curriculum-Based Measurement from a Behavioral Assessment Perspective

    ERIC Educational Resources Information Center

    Ardoin, Scott P.; Roof, Claire M.; Klubnick, Cynthia; Carfolite, Jessica

    2008-01-01

    Curriculum-based measurement Reading (CBM-R) is an assessment procedure used to evaluate students' relative performance compared to peers and to evaluate their growth in reading. Within the response to intervention (RtI) model, CBM-R data are plotted in time series fashion as a means of modeling individual students' response to varying levels of…

  14. Impact of a Flexible Evaluation System on Effort and Timing of Study

    ERIC Educational Resources Information Center

    Pacharn, Parunchana; Bay, Darlene; Felton, Sandra

    2012-01-01

    This paper examines results of a flexible grading system that allows each student to influence the weight allocated to each performance measure. We construct a stylized model to determine students' optimal responses. Our analytical model predicts different optimal strategies for students with varying academic abilities: a frontloading strategy for…

  15. Music performance and the perception of key.

    PubMed

    Thompson, W F; Cuddy, L L

    1997-02-01

    The effect of music performance on perceived key movement was examined. Listeners judged key movement in sequences presented without performance expression (mechanical) in Experiment 1 and with performance expression in Experiment 2. Modulation distance varied. Judgments corresponded to predictions based on the cycle of fifths and toroidal models of key relatedness, with the highest correspondence for performed versions with the toroidal model. In Experiment 3, listeners compared mechanical sequences with either performed sequences or modifications of performed sequences. Modifications preserved expressive differences between chords, but not between voices. Predictions from Experiments 1 and 2 held only for performed sequences, suggesting that differences between voices are informative of key movement. Experiment 4 confirmed that modifications did not disrupt musicality. Analyses of performances further suggested a link between performance expression and key.

  16. Comparison of five modelling techniques to predict the spatial distribution and abundance of seabirds

    USGS Publications Warehouse

    O'Connell, Allan F.; Gardner, Beth; Oppel, Steffen; Meirinho, Ana; Ramírez, Iván; Miller, Peter I.; Louzao, Maite

    2012-01-01

    Knowledge about the spatial distribution of seabirds at sea is important for conservation. During marine conservation planning, logistical constraints preclude seabird surveys covering the complete area of interest and spatial distribution of seabirds is frequently inferred from predictive statistical models. Increasingly complex models are available to relate the distribution and abundance of pelagic seabirds to environmental variables, but a comparison of their usefulness for delineating protected areas for seabirds is lacking. Here we compare the performance of five modelling techniques (generalised linear models, generalised additive models, Random Forest, boosted regression trees, and maximum entropy) to predict the distribution of Balearic Shearwaters (Puffinus mauretanicus) along the coast of the western Iberian Peninsula. We used ship transect data from 2004 to 2009 and 13 environmental variables to predict occurrence and density, and evaluated predictive performance of all models using spatially segregated test data. Predicted distribution varied among the different models, although predictive performance varied little. An ensemble prediction that combined results from all five techniques was robust and confirmed the existence of marine important bird areas for Balearic Shearwaters in Portugal and Spain. Our predictions suggested additional areas that would be of high priority for conservation and could be proposed as protected areas. Abundance data were extremely difficult to predict, and none of the five modelling techniques provided a reliable prediction of spatial patterns. We advocate the use of ensemble modelling that combines the output of several methods to predict the spatial distribution of seabirds, and use these predictions to target separate surveys assessing the abundance of seabirds in areas of regular use.
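
    The ensemble combination can be sketched as a weighted average of per-model occurrence probabilities; the equal-weight default is an assumption, since the abstract does not specify how the five techniques were weighted.

```python
def ensemble_predict(predictions, weights=None):
    """Combine per-model occurrence probabilities into an ensemble mean.
    predictions: list of equal-length probability lists, one per modelling
    technique; weights defaults to equal weighting across techniques."""
    k = len(predictions)
    w = weights or [1.0 / k] * k
    return [sum(wi * p[j] for wi, p in zip(w, predictions))
            for j in range(len(predictions[0]))]
```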

  17. Lean Information Management: Criteria For Selecting Key Performance Indicators At Shop Floor

    NASA Astrophysics Data System (ADS)

    Iuga, Maria Virginia; Kifor, Claudiu Vasile; Rosca, Liviu-Ion

    2015-07-01

    Most successful organizations worldwide use key performance indicators as an important part of their corporate strategy in order to forecast, measure and plan their businesses. Performance metrics vary in their purpose, definition and content. Therefore, the way organizations select what they think are the optimal indicators for their businesses varies from company to company, sometimes even from department to department. This study aims to answer the question of what is the most suitable way to define and select key performance indicators. More than that, it identifies the right criteria for selecting key performance indicators at shop-floor level. This paper contributes to prior research by analysing and comparing previously researched selection criteria and proposes an original six-criteria model, which caters towards choosing the most adequate KPIs. Furthermore, the authors take the research a step further by proposing further steps to close research gaps within this field of study.

  18. A Flight Dynamics Model for a Small Glider in Ambient Winds

    NASA Technical Reports Server (NTRS)

    Beeler, Scott C.; Moerder, Daniel D.; Cox, David E.

    2003-01-01

    In this paper we describe the equations of motion developed for a point-mass zero-thrust (gliding) aircraft model operating in an environment of spatially varying atmospheric winds. The wind effects are included as an integral part of the flight dynamics equations, and the model is controlled through the three aerodynamic control angles. Formulas for the aerodynamic coefficients for this model are constructed to include the effects of several different aspects contributing to the aerodynamic performance of the vehicle. Characteristic parameter values of the model are compared with those found in a different set of small glider simulations. We execute a set of example problems which solve the glider dynamics equations to find the aircraft trajectory given specified control inputs. The ambient wind conditions and glider characteristics are varied to compare the simulation results under these different circumstances.

  20. Institutional and Economic Determinants of Public Health System Performance

    PubMed Central

    Mays, Glen P.; McHugh, Megan C.; Shim, Kyumin; Perry, Natalie; Lenaway, Dennis; Halverson, Paul K.; Moonesinghe, Ramal

    2006-01-01

    Objectives. Although a growing body of evidence demonstrates that availability and quality of essential public health services vary widely across communities, relatively little is known about the factors that give rise to these variations. We examined the association of institutional, financial, and community characteristics of local public health delivery systems and the performance of essential services. Methods. Performance measures were collected from local public health systems in 7 states and combined with secondary data sources. Multivariate, linear, and nonlinear regression models were used to estimate associations between system characteristics and the performance of essential services. Results. Performance varied significantly with the size, financial resources, and organizational structure of local public health systems, with some public health services appearing more sensitive to these characteristics than others. Staffing levels and community characteristics also appeared to be related to the performance of selected services. Conclusions. Reconfiguring the organization and financing of public health systems in some communities—such as through consolidation and enhanced intergovernmental coordination—may hold promise for improving the performance of essential services. PMID:16449584

  1. Particle-size distribution models for the conversion of Chinese data to FAO/USDA system.

    PubMed

    Shangguan, Wei; Dai, YongJiu; García-Gutiérrez, Carlos; Yuan, Hua

    2014-01-01

    We investigated eleven particle-size distribution (PSD) models to determine the appropriate models for describing the PSDs of 16349 Chinese soil samples. These data are based on three soil texture classification schemes, including one ISSS (International Society of Soil Science) scheme with four data points and two Katschinski schemes with five and six data points, respectively. The adjusted coefficient of determination (r²), Akaike's information criterion (AIC), and the geometric mean error ratio (GMER) were used to evaluate model performance. The soil data were converted to the USDA (United States Department of Agriculture) standard using PSD models and the fractal concept. The performance of the PSD models was affected by soil texture and the fraction classification scheme. The performance of the PSD models also varied with the clay content of the soils. The Anderson, Fredlund, modified logistic growth, Skaggs, and Weibull models were the best.
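
    The Weibull model named above can be sketched as a two-parameter cumulative mass-fraction curve fitted to a handful of sieve points. Below is a minimal Python illustration, assuming a generic form F(d) = 1 − exp(−(d/a)^b) and made-up four-point data; the paper's exact parameterizations and the 16349-sample data set are not reproduced here.

```python
import math

def weibull_psd(d, a, b):
    """Cumulative mass fraction of particles finer than diameter d (mm),
    using a two-parameter Weibull form: F(d) = 1 - exp(-(d/a)**b)."""
    return 1.0 - math.exp(-((d / a) ** b))

def fit_weibull(diameters, fractions, a_grid, b_grid):
    """Coarse grid search minimizing the sum of squared errors between
    measured cumulative fractions and the Weibull model."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((weibull_psd(d, a, b) - f) ** 2
                      for d, f in zip(diameters, fractions))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best  # (sse, a, b)

# Hypothetical four-point ISSS-style measurement (clay/silt/sand limits)
diameters = [0.002, 0.02, 0.2, 2.0]    # mm
fractions = [0.18, 0.45, 0.80, 1.00]   # cumulative mass fraction finer

sse, a, b = fit_weibull(diameters, fractions,
                        a_grid=[0.01 * i for i in range(1, 101)],
                        b_grid=[0.05 * i for i in range(1, 41)])
print(f"a={a:.2f}, b={b:.2f}, sse={sse:.4f}")
```

    A real evaluation would then compute r², AIC, and GMER on the fitted curve rather than raw SSE.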

  2. Two Unipolar Terminal-Attractor-Based Associative Memories

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang; Wu, Chwan-Hwa

    1995-01-01

    Two unipolar mathematical models of an electronic neural network functioning as a terminal-attractor-based associative memory (TABAM) were developed. The models comprise sets of equations describing interactions between the time-varying inputs and outputs of the neural-network memory, regarded as a dynamical system. This simplifies the design and operation of an optoelectronic processor that implements a TABAM performing associative recall of images. The TABAM concept is described in "Optoelectronic Terminal-Attractor-Based Associative Memory" (NPO-18790). An experimental optoelectronic apparatus that performed associative recall of binary images is described in "Optoelectronic Inner-Product Neural Associative Memory" (NPO-18491).

  3. Performance prediction of high Tc superconducting small antennas using a two-fluid-moment method model

    NASA Astrophysics Data System (ADS)

    Cook, G. G.; Khamas, S. K.; Kingsley, S. P.; Woods, R. C.

    1992-01-01

    The radar cross section and Q factors of electrically small dipole and loop antennas made with a YBCO high Tc superconductor are predicted using a two-fluid-moment method model, in order to determine the effects of finite conductivity on the performances of such antennas. The results compare the useful operating bandwidths of YBCO antennas exhibiting varying degrees of impurity with their copper counterparts at 77 K, showing a linear relationship between bandwidth and impurity level.

  4. Problem reporting management system performance simulation

    NASA Technical Reports Server (NTRS)

    Vannatta, David S.

    1993-01-01

    This paper proposes the Problem Reporting Management System (PRMS) model as an effective discrete simulation tool for determining the risks involved during the development phase of a Trouble Tracking Reporting Data Base replacement system. The model considers the type of equipment and networks which will be used in the replacement system as well as varying user loads, size of the database, and expected operational availability. The paper discusses the dynamics, stability, and application of the PRMS and addresses suggested concepts for enhancing service performance.

  5. Modeling the Effects of Transmission Type, Gear Count and Ratio Spread on Fuel Economy and Performance Using ALPHA (SAE 2016-01-1143)

    EPA Science Inventory

    This paper presents an analysis of the effects of varying the absolute and relative gear ratios of a given transmission on fuel economy and performance, considers alternative methods of selecting absolute gear ratios, examines the effect of alternative engines on the selections o...

  6. Effects of Cluster Location on Human Performance on the Traveling Salesperson Problem

    ERIC Educational Resources Information Center

    MacGregor, James N.

    2013-01-01

    Most models of human performance on the traveling salesperson problem involve clustering of nodes, but few empirical studies have examined effects of clustering in the stimulus array. A recent exception varied degree of clustering and concluded that the more clustered a stimulus array, the easier a TSP is to solve (Dry, Preiss, & Wagemans,…

  7. Structural nested mean models for assessing time-varying effect moderation.

    PubMed

    Almirall, Daniel; Ten Have, Thomas; Murphy, Susan A

    2010-03-01

    This article considers the problem of assessing causal effect moderation in longitudinal settings in which treatment (or exposure) is time varying and so are the covariates said to moderate its effect. Intermediate causal effects that describe time-varying causal effects of treatment conditional on past covariate history are introduced and considered as part of Robins' structural nested mean model. Two estimators of the intermediate causal effects, and their standard errors, are presented and discussed: The first is a proposed two-stage regression estimator. The second is Robins' G-estimator. The results of a small simulation study that begins to shed light on the small versus large sample performance of the estimators, and on the bias-variance trade-off between the two estimators are presented. The methodology is illustrated using longitudinal data from a depression study.

  8. Drop Hammer Tests with Three Oleo Strut Models and Three Different Shock Strut Oils at Low Temperatures

    NASA Technical Reports Server (NTRS)

    Kranz, M

    1954-01-01

    Drop hammer tests with different shock strut models and shock strut oils were performed at temperatures ranging to -40 C. The various shock strut models do not differ essentially regarding their springing and damping properties at low temperatures; however, the influence of the different shock strut oils on the springing properties at low temperatures varies greatly.

  9. The effect of model resolution in predicting meteorological parameters used in fire danger rating.

    Treesearch

    Jeanne L. Hoadley; Ken Westrick; Sue A. Ferguson; Scott L. Goodrick; Larry Bradshaw; Paul Werth

    2004-01-01

    Previous studies of model performance at varying resolutions have focused on winter storms or isolated convective events. Little attention has been given to the static high pressure situations that may lead to severe wildfire outbreaks. This study focuses on such an event so as to evaluate the value of increased model resolution for prediction of fire danger. The...

  10. A data-centric approach to understanding the pricing of financial options

    NASA Astrophysics Data System (ADS)

    Healy, J.; Dixon, M.; Read, B.; Cai, F. F.

    2002-05-01

    We investigate what can be learned from a purely phenomenological study of options prices without modelling assumptions. We fitted neural net (NN) models to LIFFE "ESX" European style FTSE 100 index options using daily data from 1992 to 1997. These non-parametric models reproduce the Black-Scholes (BS) analytic model in terms of fit and performance measures using just the usual five inputs (S, X, t, r, IV). We found that adding transaction costs (bid-ask spread) to these standard five parameters gives a comparable fit and performance. Tests show that the bid-ask spread can be a statistically significant explanatory variable for option prices. The difference in option prices between the models with transaction costs and those without ranges from about -3.0 to +1.5 index points, varying with maturity date. However, the difference depends on the moneyness (S/X), being greatest in-the-money. This suggests that use of a five-factor model can result in a pricing difference of up to £10 to £30 per call option contract compared with modelling under transaction costs. We found that the influence of transaction costs varied between different yearly subsets of the data. Open interest is also a significant explanatory variable, but volume is not.
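
    The Black-Scholes benchmark that the neural nets reproduce takes the same five inputs (S, X, t, r, IV). A minimal sketch of the analytic European call price, with illustrative at-the-money inputs rather than values from the LIFFE data:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, X, t, r, iv):
    """Black-Scholes price of a European call: spot S, strike X,
    time to maturity t (years), risk-free rate r, implied volatility iv."""
    d1 = (math.log(S / X) + (r + 0.5 * iv ** 2) * t) / (iv * math.sqrt(t))
    d2 = d1 - iv * math.sqrt(t)
    return S * norm_cdf(d1) - X * math.exp(-r * t) * norm_cdf(d2)

# At-the-money example with illustrative inputs
print(round(bs_call(S=100.0, X=100.0, t=1.0, r=0.0, iv=0.2), 2))  # → 7.97
```

    A sixth input such as the bid-ask spread would enter a fitted NN model as an extra feature, not this closed form.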

  11. Factors influencing visual search in complex driving environments.

    DOT National Transportation Integrated Search

    2016-10-01

    The objective of this study was to describe and model the effects of varied roadway environment factors on drivers' perceived complexity, with the goal of further understanding conditions for optimal driver behavior and performance. This was invest...

  12. Analyses of ACPL thermal/fluid conditioning system

    NASA Technical Reports Server (NTRS)

    Stephen, L. A.; Usher, L. H.

    1976-01-01

    Results of engineering analyses are reported. Initial computations were made using a modified control transfer function where the systems performance was characterized parametrically using an analytical model. The analytical model was revised to represent the latest expansion chamber fluid manifold design, and systems performance predictions were made. Parameters which were independently varied in these computations are listed. Systems predictions which were used to characterize performance are primarily transient computer plots comparing the deviation between average chamber temperature and the chamber temperature requirement. Additional computer plots were prepared. Results of parametric computations with the latest fluid manifold design are included.

  13. Comparison of different Kalman filter approaches in deriving time varying connectivity from EEG data.

    PubMed

    Ghumare, Eshwar; Schrooten, Maarten; Vandenberghe, Rik; Dupont, Patrick

    2015-08-01

    Kalman filter approaches are widely applied to derive time varying effective connectivity from electroencephalographic (EEG) data. For multi-trial data, a classical Kalman filter (CKF), designed for the estimation of single-trial data, can be implemented by trial-averaging the data or by averaging single trial estimates. A general linear Kalman filter (GLKF) provides an extension for multi-trial data. In this work, we studied the performance of the different Kalman filtering approaches for different values of signal-to-noise ratio (SNR), number of trials and number of EEG channels. We used a simulated model from which we calculated scalp recordings. From these recordings, we estimated cortical sources. Multivariate autoregressive model parameters and partial directed coherence were calculated for these estimated sources and compared with the ground truth. The results showed an overall superior performance of GLKF except for low levels of SNR and number of trials.
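
    The classical Kalman filter idea — tracking time-varying autoregressive coefficients through a random-walk state model — can be sketched in the scalar case. This is a simplified single-channel AR(1) analogue, not the multivariate CKF/GLKF used in the study; the noise variances q and r and the synthetic data are illustrative assumptions.

```python
import random

def kalman_tvar(y, q=1e-3, r=1.0):
    """Track a time-varying AR(1) coefficient a_t in y_t = a_t*y_{t-1} + noise
    with a scalar Kalman filter: the state follows a random walk with
    variance q, and the observation noise has variance r."""
    a_est, p = 0.0, 1.0            # state estimate and its variance
    estimates = []
    for t in range(1, len(y)):
        p += q                      # predict: random-walk state model
        h = y[t - 1]                # time-varying observation "matrix"
        k = p * h / (h * h * p + r)         # Kalman gain
        a_est += k * (y[t] - h * a_est)     # update with innovation
        p *= (1.0 - k * h)
        estimates.append(a_est)
    return estimates

random.seed(0)
# Synthetic AR(1) data with a constant true coefficient of 0.5
y = [0.0]
for _ in range(2000):
    y.append(0.5 * y[-1] + random.gauss(0.0, 1.0))
est = kalman_tvar(y)
print(round(est[-1], 2))
```

    The multivariate versions replace the scalars by coefficient matrices and, in the GLKF, stack all trials into one observation equation.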

  14. Simulation of Cold Flow in a Truncated Ideal Nozzle with Film Cooling

    NASA Technical Reports Server (NTRS)

    Braman, K. E.; Ruf, J. H.

    2015-01-01

    Flow transients during rocket start-up and shut-down can lead to significant side loads on rocket nozzles. The capability to estimate these side loads computationally can streamline the nozzle design process. Towards this goal, the flow in a truncated ideal contour (TIC) nozzle has been simulated using RANS and URANS for a range of nozzle pressure ratios (NPRs) aimed to match a series of cold flow experiments performed at the NASA MSFC Nozzle Test Facility. These simulations were performed with varying turbulence model choices and for four approximations of the supersonic film injection geometry, each of which was created with a different simplification of the test article geometry. The results show that although a reasonable match to experiment can be obtained with varying levels of geometric fidelity, the modeling choices made do not fully represent the physics of flow separation in a TIC nozzle with film cooling.

  15. Aspect Ratio of Receiver Node Geometry based Indoor WLAN Propagation Model

    NASA Astrophysics Data System (ADS)

    Naik, Udaykumar; Bapat, Vishram N.

    2017-08-01

    This paper presents the validation of an indoor wireless local area network (WLAN) propagation model for varying rectangular receiver node geometry. The rectangular client node configuration is a standard node arrangement in the computer laboratories of academic institutes and research organizations. The model assists in placing network nodes for better signal coverage. The proposed model is backed by wide-ranging real-time received-signal-strength measurements at 2.4 GHz. The shadow fading component of signal propagation under a realistic indoor environment is modelled with a dependency on the varying aspect ratio of the client node geometry. The developed model is useful in predicting indoor path loss for IEEE 802.11b/g WLAN. The new model provides better performance in comparison to the well-known International Telecommunication Union and free-space propagation models. It is shown that the proposed model is simple and can be a useful tool for indoor WLAN node deployment planning and a quick method for the best utilisation of office space.
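
    The shadow-fading path-loss structure underlying such models can be sketched with the standard log-distance form. The reference loss, path-loss exponent, and fading variance below are illustrative placeholders, not the parameters fitted from the 2.4 GHz measurements or the paper's aspect-ratio dependency.

```python
import math
import random

def path_loss_db(d, d0=1.0, pl0=40.0, n=3.0, sigma=0.0, rng=random):
    """Log-distance indoor path-loss sketch:
    PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma,
    where X_sigma is zero-mean Gaussian shadow fading (dB).
    pl0, n, sigma are illustrative values only."""
    return pl0 + 10.0 * n * math.log10(d / d0) + rng.gauss(0.0, sigma)

# Without shadowing, doubling the distance adds ~9 dB at n = 3
print(round(path_loss_db(2.0) - path_loss_db(1.0), 2))  # → 9.03
```

    A geometry-aware model would make sigma (or n) a function of the client node arrangement's aspect ratio instead of a constant.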

  16. A Comparative Study of High and Low Fidelity Fan Models for Turbofan Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Afjeh, Abdollah A.

    1991-01-01

    In this paper, a heterogeneous propulsion system simulation method is presented. The method is based on the formulation of a cycle model of a gas turbine engine. The model includes the nonlinear characteristics of the engine components via use of empirical data. The potential to simulate the entire engine operation on a computer without the aid of data is demonstrated by numerically generating "performance maps" for a fan component using two flow models of varying fidelity. The suitability of the fan models was evaluated by comparing the computed performance with experimental data. A discussion of the potential benefits and/or difficulties in connecting simulation solutions of differing fidelity is given.

  17. Generating survival times to simulate Cox proportional hazards models with time-varying covariates.

    PubMed

    Austin, Peter C

    2012-12-20

    Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate. Copyright © 2012 John Wiley & Sons, Ltd.
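
    For the first covariate type (a single switch from untreated to treated), the inverse-cumulative-hazard construction can be sketched for an exponential baseline hazard. The function below is a minimal illustration of that idea with made-up parameter values; it is not the paper's full set of closed-form expressions, which also cover Weibull and Gompertz distributions and fixed covariates.

```python
import math
import random

def sim_time_tv_binary(lam, beta, t0, u):
    """Simulated event time for an exponential baseline hazard lam and a
    dichotomous time-varying covariate that switches from 0 to 1 at time t0
    (e.g., organ transplant), with log hazard ratio beta. Inverts the
    cumulative hazard at -log(u), u ~ Uniform(0, 1)."""
    target = -math.log(u)
    if target < lam * t0:                  # event occurs before the switch
        return target / lam
    return t0 + (target - lam * t0) / (lam * math.exp(beta))

random.seed(1)
lam, beta, t0 = 0.1, math.log(2.0), 5.0   # hazard doubles after the switch
times = [sim_time_tv_binary(lam, beta, t0, random.random())
         for _ in range(20000)]
print(round(sum(times) / len(times), 2))
```

    With beta = 0 the construction collapses to an ordinary exponential draw, which makes a convenient sanity check.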

  18. Adaptive Missile Flight Control for Complex Aerodynamic Phenomena

    DTIC Science & Technology

    2017-08-09

    at high maneuvering conditions motivate guidance approaches that can accommodate uncertainty. Flight control algorithms are one component...performance, but system uncertainty is not directly addressed. Linear, parameter-varying approaches for munitions expand on optimal control by... post-canard stall. We propose to model these complex aerodynamic mechanisms and use these models in formulating flight controllers within the

  19. The Impact of Five Missing Data Treatments on a Cross-Classified Random Effects Model

    ERIC Educational Resources Information Center

    Hoelzle, Braden R.

    2012-01-01

    The present study compared the performance of five missing data treatment methods within a Cross-Classified Random Effects Model environment under various levels and patterns of missing data given a specified sample size. Prior research has shown the varying effect of missing data treatment options within the context of numerous statistical…

  20. Female Faculty Role Models and Student Outcomes: A Caveat about Aggregation

    ERIC Educational Resources Information Center

    Johnson, Iryna Y.

    2014-01-01

    The idea that female faculty might serve as role models for female students has led to studies of the effect of female faculty on female student performance. Due to varying levels of aggregation of the measure of student exposure to female faculty--percentage of female faculty at an institution or department, percentage of classes taught by…

  1. Comparing observer models and feature selection methods for a task-based statistical assessment of digital breast tomosynthesis in reconstruction space

    NASA Astrophysics Data System (ADS)

    Park, Subok; Zhang, George Z.; Zeng, Rongping; Myers, Kyle J.

    2014-03-01

    A task-based assessment of image quality for digital breast tomosynthesis (DBT) can be done in either the projected or reconstructed data space. As the choice of observer models and feature selection methods can vary depending on the type of task and data statistics, we previously investigated the performance of two channelized-Hotelling observer models in conjunction with 2D Laguerre-Gauss (LG) and two implementations of partial least squares (PLS) channels along with that of the Hotelling observer in binary detection tasks involving DBT projections. The difference in these observers lies in how the spatial correlation in DBT angular projections is incorporated in the observer's strategy to perform the given task. In the current work, we extend our method to the reconstructed data space of DBT. We investigate how various model observers including the aforementioned compare for performing the binary detection of a spherical signal embedded in structured breast phantoms with the use of DBT slices reconstructed via filtered back projection. We explore how well the model observers incorporate the spatial correlation between different numbers of reconstructed DBT slices while varying the number of projections. For this, relatively small and large scan angles (24° and 96°) are used for comparison. Our results indicate that 1) given a particular scan angle, the number of projections needed to achieve the best performance for each observer is similar across all observer/channel combinations, i.e., Np = 25 for scan angle 96° and Np = 13 for scan angle 24°, and 2) given these sufficient numbers of projections, the number of slices for each observer to achieve the best performance differs depending on the channel/observer types, which is more pronounced in the narrow scan angle case.

  2. Experimental study: Underwater propagation of polarized flat top partially coherent laser beams with a varying degree of spatial coherence

    NASA Astrophysics Data System (ADS)

    Avramov-Zamurovic, S.; Nelson, C.

    2018-10-01

    We report on experiments where spatially partially coherent laser beams with flat top intensity profiles were propagated underwater. Two scenarios were explored: still water and mechanically moved entrained salt scatterers. Gaussian, fully spatially coherent beams, and Multi-Gaussian Schell model beams with varying degrees of spatial coherence were used in the experiments. The main objective of our study was the exploration of the scintillation performance of scalar beams, with both vertical and horizontal polarizations, and the comparison with electromagnetic beams that have a randomly varying polarization. The results from our investigation show up to a 50% scintillation index reduction for the case with electromagnetic beams. In addition, we observed that the fully coherent beam performance deteriorates significantly relative to the spatially partially coherent beams when the conditions become more complex, changing from still water conditions to the propagation through mechanically moved entrained salt scatterers.

  3. Effects of aquatic exercises in a rat model of brainstem demyelination with ethidium bromide on the beam walking test.

    PubMed

    Nassar, Cíntia Cristina Souza; Bondan, Eduardo Fernandes; Alouche, Sandra Regina

    2009-09-01

    Multiple sclerosis is a demyelinating disease of the central nervous system associated with varied levels of disability. The impact of early physiotherapeutic interventions in the disease progression is unknown. We used an experimental model of demyelination with the gliotoxic agent ethidium bromide and early aquatic exercises to evaluate the motor performance of the animals. We quantified the number of footsteps and errors during the beam walking test. The demyelinated animals walked fewer steps with a greater number of errors than the control group. The demyelinated animals that performed aquatic exercises presented a better motor performance than those that did not exercise. Therefore, aquatic exercise was beneficial to the motor performance of rats in this experimental model of demyelination.

  4. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. And for a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.
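
    The selection-then-average step of the ASA can be sketched as follows. The median-spread threshold and the toy per-gridpoint values are illustrative assumptions, not the algorithm's exact selection criterion.

```python
def asa_posterior(estimates, spreads):
    """Adaptive-spatial-average sketch: keep the spatially varying posterior
    parameter values whose ensemble spread is small ("good" values, here
    at or below the median spread), then average them into a single global
    uniform posterior parameter."""
    threshold = sorted(spreads)[len(spreads) // 2]      # median spread
    good = [e for e, s in zip(estimates, spreads) if s <= threshold]
    return sum(good) / len(good)

# Hypothetical per-gridpoint parameter estimates and ensemble spreads;
# the outlier estimate 3.0 has a large spread and is screened out.
estimates = [1.2, 0.9, 1.1, 3.0, 1.0, 0.8]
spreads   = [0.1, 0.2, 0.15, 2.0, 0.1, 0.9]
print(round(asa_posterior(estimates, spreads), 3))  # → 1.05
```

    Screening by spread is what gives the method its enhanced signal-to-noise ratio relative to a plain spatial average.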

  5. Why We Should Not Be Indifferent to Specification Choices for Difference-in-Differences.

    PubMed

    Ryan, Andrew M; Burgess, James F; Dimick, Justin B

    2015-08-01

    To evaluate the effects of specification choices on the accuracy of estimates in difference-in-differences (DID) models. Process-of-care quality data from Hospital Compare between 2003 and 2009. We performed a Monte Carlo simulation experiment to estimate the effect of an imaginary policy on quality. The experiment was performed for three different scenarios in which the probability of treatment was (1) unrelated to pre-intervention performance; (2) positively correlated with pre-intervention levels of performance; and (3) positively correlated with pre-intervention trends in performance. We estimated alternative DID models that varied with respect to the choice of data intervals, the comparison group, and the method of obtaining inference. We assessed estimator bias as the mean absolute deviation between estimated program effects and their true value. We evaluated the accuracy of inferences through statistical power and rates of false rejection of the null hypothesis. Performance of alternative specifications varied dramatically when the probability of treatment was correlated with pre-intervention levels or trends. In these cases, propensity score matching resulted in much more accurate point estimates. The use of permutation tests resulted in lower false rejection rates for the highly biased estimators, but the use of clustered standard errors resulted in slightly lower false rejection rates for the matching estimators. When treatment and comparison groups differed on pre-intervention levels or trends, our results supported specifications for DID models that include matching for more accurate point estimates and models using clustered standard errors or permutation tests for better inference. Based on our findings, we propose a checklist for DID analysis. © Health Research and Educational Trust.
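
    The core DID contrast behind all of these specifications is the change in the treated group minus the change in the comparison group. A minimal sketch with hypothetical quality scores; a credible analysis would add propensity score matching and clustered standard errors or permutation tests, as the abstract recommends.

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences from group-period means: the pre-to-post
    change in the treated group minus the change in the comparison group."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical scores: both groups improve; the treated group improves 3 points more
print(did_estimate(treat_pre=[70, 72], treat_post=[78, 80],
                   ctrl_pre=[68, 70], ctrl_post=[73, 75]))  # → 3.0
```

    The simulation's warning applies here: if treatment correlates with pre-period levels or trends, this raw contrast is biased without matching.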

  6. Numerical Performance Prediction of a Miniature Ramjet at Mach 4

    DTIC Science & Technology

    2012-09-01

    with the computational fluid dynamics (CFD) code ANSYS CFX. The nozzle-throat area was varied to increase the backpressure, and this pushed the...normal shock that was sitting within the inlet, out to the lip of the inlet cowl. Using the eddy dissipation combustion model in ANSYS CFX, a...improved accuracy in turbulence modeling.

  7. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

    ERIC Educational Resources Information Center

    Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.

    2018-01-01

    Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…

  8. A comparison of evaluation metrics for biomedical journals, articles, and websites in terms of sensitivity to topic.

    PubMed

    Fu, Lawrence D; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F

    2011-08-01

    Evaluating the biomedical literature and health-related websites for quality are challenging information retrieval tasks. Current commonly used methods include impact factor for journals, PubMed's clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic-adjusted metrics and other topic robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Understanding Lymphatic Valve Function via Computational Modeling

    NASA Astrophysics Data System (ADS)

    Wolf, Ki; Nepiyushchikh, Zhanna; Razavi, Mohammad; Dixon, Brandon; Alexeev, Alexander

    2017-11-01

    The lymphatic system is a crucial part of the circulatory system with many important functions, such as the transport of interstitial fluid, fatty acids, and immune cells. Lymphatic vessels' contractile walls and valves allow lymph flow against adverse pressure gradients and prevent backflow. Yet, the effect of lymphatic valves' geometric and mechanical properties on pumping performance and lymphatic dysfunctions like lymphedema is not well understood. Our coupled fluid-solid computational model, based on a lattice Boltzmann model and a lattice spring model, investigates the dynamics and effectiveness of lymphatic valves in resistance minimization, backflow prevention, and viscoelastic response under different geometric and mechanical properties, suggesting the range of lymphatic valve parameters with effective pumping performance. Our model also provides more physiologically relevant relations of the valve response under varied conditions to a lumped-parameter model of the lymphatic system, giving an integrative insight into lymphatic system performance, including its failure due to disease. NSF CMMI-1635133.

  10. On improving the performance of nonphotochemical quenching in CP29 light-harvesting antenna complex

    NASA Astrophysics Data System (ADS)

    Berman, Gennady P.; Nesterov, Alexander I.; Sayre, Richard T.; Still, Susanne

    2016-03-01

    We model and simulate the performance of charge-transfer in nonphotochemical quenching (NPQ) in the CP29 light-harvesting antenna-complex associated with photosystem II (PSII). The model consists of five discrete excitonic energy states and two sinks, responsible for the potentially damaging processes and charge-transfer channels, respectively. We demonstrate that by varying (i) the parameters of the chlorophyll-based dimer, (ii) the resonant properties of the protein-solvent environment interaction, and (iii) the energy transfer rates to the sinks, one can significantly improve the performance of the NPQ. Our analysis suggests strategies for improving the performance of the NPQ in response to environmental changes, and may stimulate experimental verification.

  11. Transferable Discharge Permit Trading Under Varying Stream Conditions: A Simulation of Multiperiod Permit Market Performance on the Fox River, Wisconsin

    NASA Astrophysics Data System (ADS)

    O'Neil, William B.

    1983-06-01

    The state of Wisconsin has recently established the legislative basis for what may be the first, operating water-pollution permit market in the United States. The efficient properties of such markets have been discussed widely in the theoretical literature, but little empirical work has been published regarding the potential cost savings attainable in specific situations. This paper describes part of the empirical analysis that supported the creation of a transferable discharge permit (TDP) market on the Fox River in Wisconsin. A multiperiod water quality planning model is developed to illustrate the performance of a TDP market under conditions of varying stream flow and temperature. The model is applied to the case of the Fox River and is used to compare the cost of achieving target water quality levels under conventional regulatory rules with the cost associated with operation of a TDP market. In addition to the cost estimates, the simulation of market performance yields information on the probable pattern of trading that may occur in the Fox River TDP market.

  12. Changing head model extent affects finite element predictions of transcranial direct current stimulation distributions

    NASA Astrophysics Data System (ADS)

    Indahlastari, Aprinda; Chauhan, Munish; Schwartz, Benjamin; Sadleir, Rosalind J.

    2016-12-01

    Objective. In this study, we determined efficient head model sizes relative to predicted current densities in transcranial direct current stimulation (tDCS). Approach. Efficiency measures were defined based on finite element (FE) simulations performed using nine human head models derived from a single MRI data set, with extents varying from 60% to 100% of the original axial range. Eleven tissue types, including anisotropic white matter, and three electrode montages (T7-T8, F3-right supraorbital, Cz-Oz) were used in the models. Main results. Reducing head volume extent from 100% to 60%, that is, reducing the model's axial range from one spanning the apex to the C3 vertebra to one encompassing only the apex to the superior cerebellum, was found to decrease the total modeling time by up to half. Differences between current density predictions in each model were quantified using a relative difference measure (RDM). Our simulation results showed that the RDM was least affected (a maximum of 10% error) for head volumes modeled from the apex to the base of the skull (60%-75% volume). Significance. This finding suggested that the bone could act as a bioelectric boundary, and thus performing FE simulations of tDCS on the human head with models extending beyond the inferior skull may not be necessary in most cases to obtain reasonable precision in current density results.
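    The relative difference measure used to compare current density predictions is commonly defined on unit-normalized field vectors (an assumption here; the paper's exact formula may differ). A minimal sketch:

    ```python
    import math

    def rdm(a, b):
        """Relative difference measure between two field vectors a and b,
        computed after normalizing each to unit Euclidean norm (a common
        definition; the paper's exact variant may differ)."""
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return math.sqrt(sum((x / na - y / nb) ** 2 for x, y in zip(a, b)))

    # Identical field patterns (even at different magnitudes) give ~0;
    # orthogonal unit fields give sqrt(2), the maximum.
    full_model = [1.0, 2.0, 3.0]
    reduced_model = [2.0, 4.0, 6.0]   # same pattern, different magnitude
    print(rdm(full_model, reduced_model))   # ~0: only the scale differs
    ```
    
    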

  13. Application of extremum seeking for time-varying systems to resonance control of RF cavities

    DOE PAGES

    Scheinker, Alexander

    2016-09-13

    A recently developed form of extremum seeking for time-varying systems is implemented in hardware for the resonance control of radio-frequency cavities without phase measurements. Normal conducting RF cavity resonance control is performed via a slug tuner, while superconducting TESLA-type cavity resonance control is performed via piezo actuators. The controller maintains resonance by minimizing reflected power using model-independent adaptive feedback. Unlike standard phase-measurement-based resonance control, the presented approach is not sensitive to arbitrary phase shifts of the RF signals due to temperature-dependent cable length or phase-measurement hardware changes. The phase independence of this method removes the common slowly varying drifts and the periodic recalibration required by phase-based methods. A general overview of the adaptive controller is presented along with proof-of-principle experimental results at room temperature. Lastly, this method allows us both to maintain a cavity at a desired resonance frequency and to dynamically modify its resonance frequency to track the unknown time-varying frequency of an RF source, thereby maintaining maximal cavity field strength, based only on power-level measurements.
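    The phase-free scheme above relies only on scalar power measurements. A toy sinusoidal-perturbation extremum-seeking loop can sketch the idea; the gains and the quadratic stand-in for reflected power below are illustrative, not the paper's controller:

    ```python
    import math

    def extremum_seek(cost, theta0, steps=4000, a=0.2, omega=0.8, gain=0.05):
        """Minimize cost(theta) using only scalar cost measurements:
        dither the tuning parameter sinusoidally and demodulate the
        measured response to drive theta toward the minimum."""
        theta = theta0
        for k in range(steps):
            dither = a * math.sin(omega * k)
            measured = cost(theta + dither)    # power-level measurement only
            theta -= gain * dither * measured  # demodulated gradient estimate
        return theta

    # Hypothetical "reflected power": minimal when the tuner setting is 2.0.
    reflected_power = lambda theta: (theta - 2.0) ** 2 + 1.0
    print(extremum_seek(reflected_power, theta0=0.0))   # settles near 2.0
    ```
    
    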

  14. Age-class separation of blue-winged ducks

    USGS Publications Warehouse

    Hohman, W.L.; Moore, J.L.; Twedt, D.J.; Mensik, John G.; Logerwell, E.

    1995-01-01

    Accurate determination of age is of fundamental importance to population and life history studies of waterfowl and their management. Therefore, we developed quantitative methods that separate adult and immature blue-winged teal (Anas discors), cinnamon teal (A. cyanoptera), and northern shovelers (A. clypeata) during spring and summer. To assess suitability of discriminant models using 9 remigial measurements, we compared model performance (% agreement between predicted age and age assigned to birds on the basis of definitive cloacal or rectral feather characteristics) in different flyways (Mississippi and Pacific) and between years (1990-91 and 1991-92). We also applied age-classification models to wings obtained from U.S. Fish and Wildlife Service harvest surveys in the Mississippi and Central-Pacific flyways (wing-bees) for which age had been determined using qualitative characteristics (i.e., remigial markings, shape, or wear). Except for male northern shovelers, models correctly aged less than 90% (range 70-86%) of blue-winged ducks. Model performance varied among species and differed between sexes and years. Proportions of individuals that were correctly aged were greater for males (range 63-86%) than females (range 39-69%). Models for northern shovelers performed better in flyway comparisons within year (1991-92, La. model applied to Calif. birds, and Calif. model applied to La. birds: 90 and 94% for M, and 89 and 76% for F, respectively) than in annual comparisons within the Mississippi Flyway (1991-92 model applied to 1990-91 data: 79% for M, 50% for F). Exclusion of measurements that varied by flyway or year did not improve model performance. Quantitative methods appear to be of limited value for age separation of female blue-winged ducks. Close agreement between predicted age and age assigned to wings from the wing-bees suggests that qualitative and quantitative methods may be equally accurate for age separation of male blue-winged ducks.
We interpret annual and flyway differences in remigial measurements and reduced performance of age classification models as evidence of high variability in size of blue-winged ducks' remiges. Variability in remigial size of these and other small-bodied waterfowl may be related to nutrition during molt.
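    The agreement metric behind these percentages can be sketched with a toy one-measurement discriminant on hypothetical remigial lengths; all numbers below are illustrative, not the study's data:

    ```python
    import random

    random.seed(1)

    # Hypothetical remigial lengths (mm): adults average slightly longer than
    # immatures, with heavy overlap -- mirroring the high remigial variability
    # the authors report. Illustrative values only.
    adults    = [random.gauss(190.0, 6.0) for _ in range(200)]
    immatures = [random.gauss(184.0, 6.0) for _ in range(200)]

    # "Train" a one-variable discriminant (midpoint rule) on half the birds...
    cut = (sum(adults[:100]) / 100 + sum(immatures[:100]) / 100) / 2

    # ...then score % agreement with known age on the held-out half.
    correct = (sum(x > cut for x in adults[100:])
               + sum(x <= cut for x in immatures[100:]))
    agreement = correct / 200
    print(f"{agreement:.0%} correctly aged")   # well below 90% given the overlap
    ```
    
    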

  15. Numerical investigations of rib fracture failure models in different dynamic loading conditions.

    PubMed

    Wang, Fang; Yang, Jikuang; Miller, Karol; Li, Guibing; Joldes, Grand R; Doyle, Barry; Wittek, Adam

    2016-01-01

    Rib fracture is one of the most common thoracic injuries in vehicle traffic accidents and can result in fatalities associated with seriously injured internal organs. A failure model is critical when modelling rib fracture to predict such injuries. Different rib failure models have been proposed for the prediction of thoracic injuries. However, the biofidelity of these failure models under varying loading conditions, and the effects of the choice of rib failure model on prediction of thoracic injuries, have been studied only to a limited extent. Therefore, this study aimed to investigate the effects of three rib failure models on prediction of thoracic injuries using a previously validated finite element model of the human thorax. The performance and biofidelity of each rib failure model were first evaluated by modelling rib responses to different loading conditions in two experimental configurations: (1) three-point bending of a specimen taken from a rib, and (2) anterior-posterior dynamic loading of an entire bony part of the rib. Furthermore, simulation of the rib failure behaviour in a frontal impact to an entire thorax was conducted at varying velocities, and the effects of the failure models were analysed with respect to the severity of rib cage damage. Simulation results demonstrated that the responses of the thorax model follow the general trends of the rib fracture responses reported in the experimental literature. However, they also indicated that the accuracy of rib fracture prediction using a given failure model varies across loading conditions.

  16. Initial Investigation of the Acoustics of a Counter-Rotating Open Rotor Model with Historical Baseline Blades in a Low-Speed Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Elliott, David M.

    2012-01-01

    A counter-rotating open rotor scale model was tested in the NASA Glenn Research Center 9- by 15-Foot Low-Speed Wind Tunnel (LSWT). This model used a historical baseline blade set against which modern blade designs will be compared on an acoustic and aerodynamic performance basis. Different blade pitch angles simulating approach and takeoff conditions were tested, along with angle-of-attack configurations. A configuration was also tested in order to determine the acoustic effects of a pylon. The shaft speed was varied for each configuration in order to obtain data over a range of operability. The freestream Mach number was also varied for some configurations. Sideline acoustic data were taken for each of these test configurations.

  17. Robust distributed model predictive control of linear systems with structured time-varying uncertainties

    NASA Astrophysics Data System (ADS)

    Zhang, Langwen; Xie, Wei; Wang, Jingcheng

    2017-11-01

    In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.

  18. East meets West: the influence of racial, ethnic and cultural risk factors on cardiac surgical risk model performance.

    PubMed

    Soo-Hoo, Sarah; Nemeth, Samantha; Baser, Onur; Argenziano, Michael; Kurlansky, Paul

    2018-01-01

    To explore the impact of racial and ethnic diversity on the performance of cardiac surgical risk models, the Chinese SinoSCORE was compared with the Society of Thoracic Surgeons (STS) risk model in a diverse American population. The SinoSCORE risk model was applied to 13 969 consecutive coronary artery bypass surgery patients from twelve American institutions. SinoSCORE risk factors were entered into a logistic regression to create a 'derived' SinoSCORE whose performance was compared with that of the STS risk model. Observed mortality was 1.51% (66% of that predicted by the STS model). The SinoSCORE 'low-risk' group had a mortality of 0.15%±0.04%, while the medium-risk and high-risk groups had mortalities of 0.35%±0.06% and 2.13%±0.14%, respectively. The derived SinoSCORE model had relatively good discrimination (area under the curve (AUC) = 0.785) compared with that of the STS risk score (AUC = 0.811; P = 0.18 comparing the two). However, specific factors that were significant in the original SinoSCORE but that lacked significance in our derived model included body mass index, preoperative atrial fibrillation and chronic obstructive pulmonary disease. SinoSCORE demonstrated limited discrimination when applied to an American population. The derived SinoSCORE had discrimination comparable with that of the STS, suggesting underlying similarities of the physiological substrate undergoing surgery. However, the differential influence of various risk factors suggests varying degrees of importance of, and interactions between, risk factors. Clinicians should exercise caution when applying risk models across varying populations due to potential differences that racial, ethnic and geographic factors may play in cardiac disease and surgical outcomes.
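    The discrimination statistic quoted here, the AUC, equals the Mann-Whitney probability that a randomly chosen event outranks a randomly chosen non-event; a minimal sketch with hypothetical risk scores:

    ```python
    def auc(scores_pos, scores_neg):
        """Area under the ROC curve via the Mann-Whitney statistic: the
        probability that a randomly chosen event case receives a higher
        risk score than a randomly chosen non-event (ties count half)."""
        wins = sum((p > n) + 0.5 * (p == n)
                   for p in scores_pos for n in scores_neg)
        return wins / (len(scores_pos) * len(scores_neg))

    # Hypothetical risk scores for patients who died (pos) vs. survived (neg).
    print(auc([0.9, 0.8, 0.7], [0.6, 0.5, 0.4]))  # -> 1.0, perfect discrimination
    ```
    
    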

  19. The Performance of ML, GLS, and WLS Estimation in Structural Equation Modeling under Conditions of Misspecification and Nonnormality.

    ERIC Educational Resources Information Center

    Olsson, Ulf Henning; Foss, Tron; Troye, Sigurd V.; Howell, Roy D.

    2000-01-01

    Used simulation to demonstrate how the choice of estimation method affects indexes of fit and parameter bias for different sample sizes when nested models vary in terms of specification error and the data demonstrate different levels of kurtosis. Discusses results for maximum likelihood (ML), generalized least squares (GLS), and weighted least…

  20. Using periodic line fires to gain a new perspective on multi-dimensional aspects of forward fire spread

    Treesearch

    R. R. Linn; J. M. Canfield; P. Cunningham; C. Edminster; J.-L. Dupuy; F. Pimont

    2012-01-01

    This study was conducted to increase understanding of the possible roles and importance of local three-dimensionality in the forward spread of wildfires. A suite of simulations was performed using a coupled atmosphere-fire model, HIGRAD/FIRETEC, consisting of different scenarios that varied in domain width and boundary condition implementation. A subset of the...

  1. Adaptive fuzzy logic controller with direct action type structures for InnoSAT attitude control system

    NASA Astrophysics Data System (ADS)

    Bakri, F. A.; Mashor, M. Y.; Sharun, S. M.; Bibi Sarpinah, S. N.; Abu Bakar, Z.

    2016-10-01

    This study proposes an adaptive fuzzy controller for the attitude control system (ACS) of the Innovative Satellite (InnoSAT) based on a direct action type structure. In order to study new methods for satellite attitude control, this paper presents three controller structures: Fuzzy PI, Fuzzy PD and conventional Fuzzy PID. The objective of this work is to compare the time response and tracking performance of the three different controller structures. The parameters of the controllers were tuned on-line by an adjustment mechanism, an approach, similar to PID error feedback, that minimizes the error between the actual output and the model reference output. This paper also presents Model Reference Adaptive Control (MRAC) as a scheme for controlling time-varying systems where the performance specifications are given in terms of the reference model. All the controllers were tested using the InnoSAT system under operating conditions such as disturbance, varying gain, measurement noise and time delay. In conclusion, among all considered direct-action-type structures, the adaptive Fuzzy PID (AFPID) controller was observed to be the best, since it outperformed the other controllers in most conditions.

  2. Determination of airway humidification in high-frequency oscillatory ventilation using an artificial neonatal lung model. Comparison of a heated humidifier and a heat and moisture exchanger.

    PubMed

    Schiffmann, H; Singer, S; Singer, D; von Richthofen, E; Rathgeber, J; Züchner, K

    1999-09-01

    Thus far, only a few data are available on airway humidification during high-frequency oscillatory ventilation (HFOV). Therefore, we studied the performance and efficiency of a heated humidifier (HH) and a heat and moisture exchanger (HME) in HFOV using an artificial lung model. Experiments were performed with a pediatric high-frequency oscillatory ventilator. The artificial lung contained a sponge saturated with water to simulate evaporation and was placed in an incubator heated to 37 degrees C to prevent condensation. The airway humidity was measured using a capacitive humidity sensor. The water loss of the lung model was determined gravimetrically. The water loss of the lung model varied between 2.14 and 3.1 g/h during active humidification; it was 2.85 g/h with passive humidification and 7.56 g/h without humidification. The humidity at the tube connector varied between 34.2 and 42.5 mg/l, depending on the temperature of the HH and the ventilator setting during active humidification, and between 37 and 39.9 mg/l with passive humidification. In general, HH and HME are suitable devices for airway humidification in HFOV. The performance of the ventilator was not significantly influenced by the mode of humidification. However, the adequacy of humidification and the safety of the HME remain to be demonstrated in clinical practice.

  3. Evaluation of annual, global seismicity forecasts, including ensemble models

    NASA Astrophysics Data System (ADS)

    Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner

    2013-04-01

    In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010 and 2011; each model forecast the number of earthquakes above magnitude 6 in 1x1 degree cells spanning the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages over time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature for characterizing the different forecasting performances of the models; iii) the interpretation of consistency tests may be misleading, because some good models may be rejected while trivial models pass; iv) proper ensemble modeling seems to be a valuable procedure for obtaining the best-performing model for practical purposes.
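    A weighted ensemble of rate forecasts of this kind can be sketched as a convex combination of expected counts, scored by the Poisson joint log-likelihood (toy rates below, not CSEP data). Because the Poisson log-likelihood is concave in the rate, the ensemble never scores worse than the weighted average of its members' scores:

    ```python
    import math

    def poisson_ll(rates, counts):
        """Joint log-likelihood of observed earthquake counts per cell under
        a forecast of expected rates (independent Poisson cells)."""
        return sum(-lam + n * math.log(lam) - math.lgamma(n + 1)
                   for lam, n in zip(rates, counts))

    # Two hypothetical 4-cell forecasts and one year's observed counts.
    model_a = [0.5, 1.0, 2.0, 0.2]
    model_b = [1.5, 0.5, 1.0, 0.5]
    observed = [1, 1, 2, 0]

    # Weighted ensemble: convex combination of the forecast rates.
    w = 0.5
    ensemble = [w * a + (1 - w) * b for a, b in zip(model_a, model_b)]

    lls = {"A": poisson_ll(model_a, observed),
           "B": poisson_ll(model_b, observed),
           "ensemble": poisson_ll(ensemble, observed)}
    print(lls)
    ```
    
    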

  4. Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery

    NASA Astrophysics Data System (ADS)

    Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.

    2017-05-01

    In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
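    The Poisson-Gaussian mixture used to degrade the imagery combines signal-dependent shot noise with signal-independent read noise; a minimal per-pixel sketch (parameters illustrative):

    ```python
    import math, random

    rng = random.Random(7)

    def sample_poisson(lam):
        """Knuth's Poisson sampler (adequate for modest photon counts)."""
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    def degrade(pixel, read_sigma=2.0):
        """Poisson shot noise on the signal plus Gaussian read noise."""
        return sample_poisson(pixel) + rng.gauss(0.0, read_sigma)

    clean = [50.0] * 1000                  # a flat 1000-pixel "image"
    noisy = [degrade(p) for p in clean]
    mean = sum(noisy) / len(noisy)
    print(round(mean, 1))                  # near 50: both noise terms are unbiased
    ```
    
    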

  5. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
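    The SIMEX procedure itself can be sketched for a single error-prone covariate in a toy linear model (not the MSM setting of the paper), assuming the measurement-error variance is known: refit under increasing added error, then extrapolate the fitted trend back to the no-error point lambda = -1:

    ```python
    import random

    rng = random.Random(42)
    n, beta, sigma_u = 2000, 1.0, 0.5 ** 0.5   # known measurement-error sd

    x = [rng.gauss(0, 1) for _ in range(n)]            # true covariate
    y = [beta * xi + rng.gauss(0, 0.5) for xi in x]    # outcome
    w = [xi + rng.gauss(0, sigma_u) for xi in x]       # error-prone covariate

    def slope(xs, ys):
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        return sxy / sxx

    naive = slope(w, y)   # attenuated toward 0 by measurement error

    def mean_slope_at(lam, B=100):
        """Average slope after adding extra error of variance lam*sigma_u^2."""
        extra = sigma_u * lam ** 0.5
        return sum(slope([wi + rng.gauss(0, extra) for wi in w], y)
                   for _ in range(B)) / B

    # Quadratic in lambda through lam = 0, 1, 2, extrapolated to lam = -1
    # (the "no measurement error" point): f(-1) = 3 f(0) - 3 f(1) + f(2).
    s0, s1, s2 = naive, mean_slope_at(1), mean_slope_at(2)
    corrected = 3 * s0 - 3 * s1 + s2
    print(f"naive={naive:.2f}, SIMEX-corrected={corrected:.2f} (true {beta})")
    ```

    The corrected slope moves most of the way from the attenuated naive estimate back toward the true value, illustrating the "practically unbiased under low-to-moderate error" behavior the abstract describes.
    
    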

  6. A new empirical model to estimate hourly diffuse photosynthetic photon flux density

    NASA Astrophysics Data System (ADS)

    Foyo-Moreno, I.; Alados, I.; Alados-Arboledas, L.

    2018-05-01

    Knowledge of the photosynthetic photon flux density (Qp) is critical in different applications dealing with climate change, plant physiology, biomass production, and natural illumination in greenhouses. This is particularly true of its diffuse component (Qpd), which can enhance canopy light-use efficiency and thereby boost carbon uptake. Diffuse photosynthetic photon flux density is therefore a key driving factor of ecosystem-productivity models. In this work, we propose a model to estimate this component, using a previous model to calculate Qp and then dividing it into its components. We used measurements of global solar radiation (Rs) in urban Granada (southern Spain) to study relationships between the ratio Qpd/Rs and different parameters accounting for solar position, water-vapour absorption and sky conditions. The model performance has been validated with experimental measurements from sites with varied climatic conditions. The model provides acceptable results, with the mean bias error varying between -0.3% and -8.8% and the root mean square error between 9.6% and 20.4%. Direct measurements of this flux are very scarce, so modelling is needed, particularly for the diffuse component. We propose a new parameterization to estimate this component using only measured data of global solar irradiance, which facilitates the construction of long-term data series of PAR in regions where continuous measurements of PAR are not yet performed.

  7. 3D engineered fiberboard : finite element analysis of a new building product

    Treesearch

    John F. Hunt

    2004-01-01

    This paper presents finite element analyses that are being used to analyze and estimate the structural performance of a new product called 3D engineered fiberboard in bending and flat-wise compression applications. A 3x3x2 split-plot experimental design was used to vary geometry configurations to determine their effect on performance properties. The models are based on...

  8. Tracking the visual focus of attention for a varying number of wandering people.

    PubMed

    Smith, Kevin; Ba, Sileye O; Odobez, Jean-Marc; Gatica-Perez, Daniel

    2008-07-01

    We define and address the problem of finding the visual focus of attention for a varying number of wandering people (VFOA-W), i.e., in settings where people's movement is unconstrained. VFOA-W estimation is a new and important problem with implications for behavior understanding and cognitive science, as well as real-world applications. One such application, which we present in this article, monitors the attention passers-by pay to an outdoor advertisement. Our approach to the VFOA-W problem proposes a multi-person tracking solution based on a dynamic Bayesian network that simultaneously infers the (variable) number of people in a scene, their body locations, their head locations, and their head pose. For efficient inference in the resulting large variable-dimensional state-space, we propose a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling scheme, as well as a novel global observation model which determines the number of people in the scene and localizes them. We also propose Gaussian Mixture Model (GMM)- and Hidden Markov Model (HMM)-based VFOA-W models which use head pose and location information to determine people's focus state. Our models are evaluated for tracking performance and for the ability to recognize people looking at an outdoor advertisement, with results indicating good performance on sequences where a moderate number of people pass in front of an advertisement.

  9. Automatic translation of digraph to fault-tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.

    1992-01-01

    The author presents a technique for converting digraph models, including those models containing cycles, to a fault-tree format. A computer program which automatically performs this translation using an object-oriented representation of the models has been developed. The fault-trees resulting from translations can be used for fault-tree analysis and diagnosis. Programs to calculate fault-tree and digraph cut sets and perform diagnosis with fault-tree models have also been developed. The digraph to fault-tree translation system has been successfully tested on several digraphs of varying size and complexity. Details of some representative translation problems are presented. Most of the computation performed by the program is dedicated to finding minimal cut sets for digraph nodes in order to break cycles in the digraph. Fault-trees produced by the translator have been successfully used with NASA's Fault-Tree Diagnosis System (FTDS) to produce automated diagnostic systems.

  10. Polarized-pixel performance model for DoFP polarimeter

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Liu, Haizheng; Liu, Li; Zhao, Yaohong; Zhang, Junchao

    2018-06-01

    A division-of-focal-plane (DoFP) polarimeter is manufactured by placing a micropolarizer array directly onto the focal plane array (FPA) of a detector. Each element of the DoFP polarimeter is a polarized pixel. This paper proposes a performance model for a polarized pixel. The proposed model characterizes the optical and electronic performance of a polarized pixel by three parameters: major polarization responsivity, minor polarization responsivity and polarization orientation. Each parameter corresponds to an intuitive physical feature of a polarized pixel. This paper further extends the model to calibrate polarization images from a DoFP polarimeter. The calibration is evaluated quantitatively with a developed DoFP polarimeter under varying illumination intensity and angle of linear polarization. The experiment shows that our model reduces nonuniformity in DoLP (degree of linear polarization) images to 6.79% of the uncalibrated level, and significantly improves the visual effect of the DoLP images.
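    A three-parameter polarized-pixel model of this kind is often written in Malus-law form, r(theta) = minor + (major - minor)·cos²(theta - orientation); under that assumption (which may differ in detail from the paper's model), the parameters can be recovered from a four-angle calibration sweep:

    ```python
    import math

    def response(theta_deg, major, minor, orient_deg):
        """Polarized-pixel response to linearly polarized light at angle
        theta: Malus-law form with major/minor responsivity and orientation."""
        d = math.radians(theta_deg - orient_deg)
        return minor + (major - minor) * math.cos(d) ** 2

    # Simulate a calibration sweep at the four canonical polarizer angles.
    M, m, phi = 0.9, 0.1, 30.0       # illustrative pixel parameters
    i0, i45, i90, i135 = (response(t, M, m, phi) for t in (0, 45, 90, 135))

    # Recover the three parameters from the four measurements.
    diff = math.hypot(i0 - i90, i45 - i135)       # equals major - minor
    major_est = ((i0 + i90) + diff) / 2
    minor_est = ((i0 + i90) - diff) / 2
    orient_est = 0.5 * math.degrees(math.atan2(i45 - i135, i0 - i90))
    print(major_est, minor_est, orient_est)       # recovers 0.9, 0.1, 30.0
    ```
    
    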

  11. Experimental verification of a real-time tuning method of a model-based controller by perturbations to its poles

    NASA Astrophysics Data System (ADS)

    Kajiwara, Itsuro; Furuya, Keiichiro; Ishizuka, Shinichi

    2018-07-01

    Model-based controllers with adaptive design variables are often used to control an object with time-dependent characteristics. However, the controller's performance is influenced by many factors such as modeling accuracy and fluctuations in the object's characteristics. One method to overcome these negative factors is to tune model-based controllers. Herein we propose an online tuning method to maintain control performance for an object that exhibits time-dependent variations. The proposed method employs the poles of the controller as design variables because the poles significantly impact performance. Specifically, we use the simultaneous perturbation stochastic approximation (SPSA) to optimize a model-based controller with multiple design variables. Moreover, a vibration control experiment of an object with time-dependent characteristics as the temperature is varied demonstrates that the proposed method allows adaptive control and stably maintains the closed-loop characteristics.
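    SPSA estimates the gradient from just two cost evaluations per step, regardless of how many design variables are tuned; a minimal sketch on a toy quadratic cost (illustrative, not the controller-tuning experiment):

    ```python
    import random

    rng = random.Random(3)

    def spsa(cost, theta, a=0.1, c=0.1, iters=600):
        """Simultaneous perturbation stochastic approximation: perturb all
        design variables at once with random signs and use two cost
        evaluations per step to form a gradient estimate."""
        theta = list(theta)
        for _ in range(iters):
            delta = [rng.choice((-1.0, 1.0)) for _ in theta]
            plus  = cost([t + c * d for t, d in zip(theta, delta)])
            minus = cost([t - c * d for t, d in zip(theta, delta)])
            g = (plus - minus) / (2.0 * c)
            theta = [t - a * g / d for t, d in zip(theta, delta)]  # d is +/-1
        return theta

    # Toy closed-loop cost with its optimum at poles (1.0, -2.0).
    cost = lambda th: (th[0] - 1.0) ** 2 + (th[1] + 2.0) ** 2
    print([round(t, 2) for t in spsa(cost, [0.0, 0.0])])  # close to [1.0, -2.0]
    ```
    
    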

  12. Improving causal inference with a doubly robust estimator that combines propensity score stratification and weighting.

    PubMed

    Linden, Ariel

    2017-08-01

    When a randomized controlled trial is not feasible, health researchers typically use observational data and rely on statistical methods to adjust for confounding when estimating treatment effects. These methods generally fall into 3 categories: (1) estimators based on a model for the outcome using conventional regression adjustment; (2) weighted estimators based on the propensity score (ie, a model for the treatment assignment); and (3) "doubly robust" (DR) estimators that model both the outcome and propensity score within the same framework. In this paper, we introduce a new DR estimator that utilizes marginal mean weighting through stratification (MMWS) as the basis for weighted adjustment. This estimator may prove more accurate than other treatment effect estimators because MMWS has been shown to be more accurate than other models when the propensity score is misspecified. We therefore compare the performance of this new estimator to other commonly used treatment effect estimators. Monte Carlo simulation is used to compare the DR-MMWS estimator to regression adjustment, 2 weighted estimators based on the propensity score and 2 other DR methods. To assess performance under varied conditions, we vary the level of misspecification of the propensity score model as well as misspecify the outcome model. Overall, DR estimators generally outperform methods that model only one of the components (eg, propensity score or outcome). The DR-MMWS estimator outperforms all other estimators when both the propensity score and outcome models are misspecified and performs as well as other DR estimators when only the propensity score is misspecified. Health researchers should consider using DR-MMWS as the principal evaluation strategy in observational studies, as this estimator appears to outperform other estimators in its class. © 2017 John Wiley & Sons, Ltd.
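    Marginal mean weighting through stratification can be sketched on toy confounded data (known propensity score, five strata; illustrative only, not the paper's simulation design):

    ```python
    import random

    rng = random.Random(9)
    n = 20000

    # Confounded synthetic data: x drives both treatment uptake and outcome.
    x = [rng.random() for _ in range(n)]
    t = [1 if rng.random() < xi else 0 for xi in x]   # true propensity = x
    y = [1.0 * ti + 2.0 * xi + rng.gauss(0, 1)        # true effect = 1.0
         for ti, xi in zip(t, x)]

    naive = (sum(yi for yi, ti in zip(y, t) if ti) / sum(t)
             - sum(yi for yi, ti in zip(y, t) if not ti) / (n - sum(t)))

    # MMWS: stratify on the propensity score (here the known score x) and
    # weight each unit by Pr(T=t) * n_stratum / n_{t,stratum}.
    S = 5
    p_treat = sum(t) / n
    stratum = [min(int(xi * S), S - 1) for xi in x]
    counts = {}
    for s, ti in zip(stratum, t):
        counts[(s, ti)] = counts.get((s, ti), 0) + 1
    n_strat = {s: counts.get((s, 0), 0) + counts.get((s, 1), 0)
               for s in range(S)}
    p = {1: p_treat, 0: 1 - p_treat}
    wgt = [p[ti] * n_strat[s] / counts[(s, ti)] for s, ti in zip(stratum, t)]

    def wmean(arm):
        num = sum(w * yi for w, yi, ti in zip(wgt, y, t) if ti == arm)
        den = sum(w for w, ti in zip(wgt, t) if ti == arm)
        return num / den

    mmws = wmean(1) - wmean(0)
    print(f"naive={naive:.2f}  MMWS={mmws:.2f}  (true effect 1.0)")
    ```

    The naive difference in means is badly biased by confounding, while the stratification weights recover an estimate close to the true effect.
    
    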

  13. High-resolution compact spectrometer based on a custom-printed varied-line-spacing concave blazed grating.

    PubMed

    Chen, Jianwei; Chen, Wang; Zhang, Guodong; Lin, Hui; Chen, Shih-Chi

    2017-05-29

    We present the modeling, design and characterization of a compact spectrometer achieving a resolution better than 1.5 nm throughout the visible spectrum (360-825 nm). The key component in the spectrometer is a custom-printed varied-line-space (VLS) concave blazed grating, where the groove density linearly decreases from the center of the grating (530 g/mm) to the edge (528 g/mm) at a rate of 0.58 nm/mm. Parametric models have been established to deterministically link the system performance with the VLS grating design parameters, e.g., groove density and line-space varying rate, and to minimize the system footprint. Simulations have been performed in ZEMAX to confirm the results, indicating a 15% enhancement in system resolution versus common constant line-space (CLS) gratings. Next, the VLS concave blazed grating is fabricated via our vacuum nanoimprinting system, where a polydimethylsiloxane (PDMS) stamp is non-uniformly expanded to form the varied-line-spacing pattern from a planar commercial grating master (600 g/mm) for precision imprinting. The concave blazed grating is measured to have an absolute diffraction efficiency of 43%, higher than the typical holographic gratings (~30%) used in commercial compact spectrometers. The completed compact spectrometer contains only one optical component, i.e., the VLS concave grating, plus an entrance slit and a linear photodetector array, achieving a footprint of 11 × 11 × 3 cm³, which makes it the most compact spectrometer of its kind at this resolution (1.46 nm).

  14. A comparative analysis of biclustering algorithms for gene expression data

    PubMed Central

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V.

    2013-01-01

    The need to analyze high-dimensional biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of alternatives. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem by evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance under varying conditions, such as different bicluster models, noise levels, numbers of biclusters, and degrees of bicluster overlap. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters. PMID:22772837
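    A runnable illustration of recovering planted biclusters from synthetic data, scored the same way such benchmarks typically are: scikit-learn's SpectralCoclustering (Dhillon's spectral co-clustering, which is not one of the 12 algorithms benchmarked in the article) against a known checkerboard of biclusters.

```python
import numpy as np
from sklearn.datasets import make_biclusters
from sklearn.cluster import SpectralCoclustering
from sklearn.metrics import consensus_score

# synthetic expression-like matrix with five planted biclusters plus noise
data, rows, cols = make_biclusters(shape=(300, 300), n_clusters=5,
                                   noise=5, shuffle=False, random_state=0)

model = SpectralCoclustering(n_clusters=5, random_state=0)
model.fit(data)

# consensus score in [0, 1]: 1.0 means the planted biclusters were
# recovered exactly
score = consensus_score(model.biclusters_, (rows, cols))
```

The same consensus-score comparison generalizes to any algorithm that reports row and column membership indicators for each bicluster.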

  15. Experimentally Identify the Effective Plume Chimney over a Natural Draft Chimney Model

    NASA Astrophysics Data System (ADS)

    Rahman, M. M.; Chu, C. M.; Tahir, A. M.; Ismail, M. A. bin; Misran, M. S. bin; Ling, L. S.

    2017-07-01

    The demand for energy is increasing due to rapid industrialization and urbanization, and researchers are working to improve industrial performance so that energy consumption can be reduced significantly. The performance of industries such as power plants, timber processing plants, and oil refineries depends largely on the performance of the cooling tower chimney, whether natural draft or forced draft. A chimney is used to create sufficient draft so that air can flow through it. Cold inflow, or flow reversal at the chimney exit, is one of the main identified problems that may affect overall plant performance. The presence of an Effective Plume Chimney (EPC) is an indication of cold-inflow-free operation of a natural draft chimney. Different mathematical model equations are used to estimate the EPC height over the heat exchanger or hot surface. This paper aims to identify the EPC experimentally. To do so, horizontal temperature profiling is performed at the exit of chimneys with face areas of 0.56 m², 1.00 m² and 2.25 m². A wire mesh screen is installed at the chimney exit to ensure cold-inflow-free operation. It is found that an EPC exists in all modified chimney models, with EPC heights varying from 1 cm to 9 cm, whereas the mathematical models estimate EPC heights of 1 cm to 2.3 cm. A smoke test is also conducted to verify the existence of the EPC and cold-inflow-free operation of the chimney; the smoke test results confirm both. The performance of the cold-inflow-free chimney is 50% to 90% higher than that of the normal chimney.

  16. Annoyance to Noise Produced by a Distributed Electric Propulsion High-Lift System

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Palumbo, Daniel L.; Rathsam, Jonathan; Christian, Andrew; Rafaelof, Menachem

    2017-01-01

    A psychoacoustic test was performed using simulated sounds from a distributed electric propulsion aircraft concept to help understand factors associated with human annoyance. A design space spanning the number of high-lift leading edge propellers and their relative operating speeds, inclusive of time varying effects associated with motor controller error and atmospheric turbulence, was considered. It was found that the mean annoyance response varies in a statistically significant manner with the number of propellers and with the inclusion of time varying effects, but does not differ significantly with the relative RPM between propellers. An annoyance model was developed, inclusive of confidence intervals, using the noise metrics of loudness, roughness, and tonality as predictors.
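    An annoyance model of the kind described, a linear regression on loudness, roughness, and tonality with confidence intervals, can be sketched with ordinary least squares. The metric values and coefficients below are made up for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# hypothetical per-sound noise metrics (placeholder ranges, not NASA data)
loudness = rng.uniform(4.0, 30.0, n)    # sones
roughness = rng.uniform(0.1, 1.5, n)    # asper
tonality = rng.uniform(0.0, 0.4, n)     # tonality units
# simulated annoyance ratings driven by the three metrics plus noise
annoyance = 0.25 * loudness + 2.0 * roughness + 5.0 * tonality \
    + rng.normal(0.0, 0.5, n)

# ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), loudness, roughness, tonality])
beta, *_ = np.linalg.lstsq(X, annoyance, rcond=None)

# standard errors and 95% confidence intervals for each coefficient
resid = annoyance - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
ci = np.column_stack([beta - 1.96 * se, beta + 1.96 * se])
```

With the true loudness coefficient set to 0.25, the fitted `beta[1]` recovers it to within a few standard errors, and `ci` gives the per-predictor confidence bounds.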

  17. Investigating the Effect of Damage Progression Model Choice on Prognostics Performance

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Narasimhan, Sriram; Saha, Sankalita; Saha, Bhaskar; Goebel, Kai

    2011-01-01

    The success of model-based approaches to systems health management depends largely on the quality of the underlying models. In model-based prognostics, it is especially the quality of the damage progression models, i.e., the models describing how damage evolves as the system operates, that determines the accuracy and precision of remaining useful life predictions. Several common forms of these models are generally assumed in the literature, but are often not supported by physical evidence or physics-based analysis. In this paper, using a centrifugal pump as a case study, we develop different damage progression models. In simulation, we investigate how model changes influence prognostics performance. Results demonstrate that, in some cases, simple damage progression models are sufficient. But, in general, the results show a clear need for damage progression models that are accurate over long time horizons under varied loading conditions.
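    The paper's central caution, that the assumed form of the damage progression model drives prediction error over long horizons, can be illustrated with a toy remaining-useful-life calculation: fit a linear damage model to early data from a truly exponential process and extrapolate to the failure threshold. All numbers are hypothetical, not the pump case study's values.

```python
import numpy as np

# hypothetical damage history: true growth is exponential, failure at d = 1.0
t = np.linspace(0.0, 100.0, 1001)
d_true = 0.01 * np.exp(0.05 * t)
eol_true = t[np.argmax(d_true >= 1.0)]   # true end of life (about 92)

# fit a linear damage model to early data only (t <= 40) and extrapolate
early = t <= 40.0
slope, intercept = np.polyfit(t[early], d_true[early], 1)
eol_linear = (1.0 - intercept) / slope   # linear prediction of end of life
```

Over a short horizon the linear fit tracks the data well, but extrapolated to the threshold it badly overestimates the end of life, mirroring the paper's finding that simple damage models suffice in some cases but fail over long horizons.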

  18. Comparison of exposure estimation methods for air pollutants: ambient monitoring data and regional air quality simulation.

    PubMed

    Bravo, Mercedes A; Fuentes, Montserrat; Zhang, Yang; Burr, Michael J; Bell, Michelle L

    2012-07-01

    Air quality modeling could potentially improve exposure estimates for use in epidemiological studies. We investigated this application of air quality modeling by estimating location-specific (point) and spatially-aggregated (county level) exposure concentrations of particulate matter with an aerodynamic diameter less than or equal to 2.5 μm (PM2.5) and ozone (O3) for the eastern U.S. in 2002 using the Community Multi-scale Air Quality (CMAQ) modeling system and a traditional approach using ambient monitors. The monitoring approach produced estimates for 370 and 454 counties for PM2.5 and O3, respectively. Modeled estimates included 1861 counties, covering 50% more population. The population uncovered by monitors differed from those near monitors (e.g., urbanicity, race, education, age, unemployment, income, modeled pollutant levels). CMAQ overestimated O3 (annual normalized mean bias = 4.30%), while modeled PM2.5 had an annual normalized mean bias of -2.09%, although bias varied seasonally, from 32% in November to -27% in July. Epidemiology may benefit from air quality modeling, with improved spatial and temporal resolution and the ability to study populations far from monitors that may differ from those near monitors. However, model performance varied by measure of performance, season, and location. Thus, the appropriateness of using such modeled exposures in health studies depends on the pollutant and metric of concern, acceptable level of uncertainty, population of interest, study design, and other factors. Copyright © 2012 Elsevier Inc. All rights reserved.
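    The normalized mean bias statistic quoted above has a simple definition; a minimal sketch:

```python
import numpy as np

def normalized_mean_bias(model_vals, obs_vals):
    """Normalized mean bias in percent: 100 * sum(model - obs) / sum(obs)."""
    m = np.asarray(model_vals, dtype=float)
    o = np.asarray(obs_vals, dtype=float)
    return 100.0 * np.sum(m - o) / np.sum(o)

# toy example: the model overestimates every observation by 10%
nmb = normalized_mean_bias([11.0, 22.0, 33.0], [10.0, 20.0, 30.0])  # → 10.0
```

A positive NMB indicates overall model overestimation (as with O3 above) and a negative NMB underestimation (as with annual PM2.5); computing it per month reproduces the kind of seasonal breakdown the abstract reports.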

  19. Revisiting crash spatial heterogeneity: A Bayesian spatially varying coefficients approach.

    PubMed

    Xu, Pengpeng; Huang, Helai; Dong, Ni; Wong, S C

    2017-01-01

    This study was performed to investigate the spatially varying relationships between crash frequency and related risk factors. A Bayesian spatially varying coefficients model was introduced as a methodological alternative to simultaneously account for the unstructured and spatially structured heterogeneity of the regression coefficients in predicting crash frequencies. The proposed method was appealing in that the parameters were modeled via a conditional autoregressive prior distribution, which involved a single set of random effects and a spatial correlation parameter with extreme values corresponding to pure unstructured or pure spatially correlated random effects. A case study using a three-year crash dataset from Hillsborough County, Florida, was conducted to illustrate the proposed model. Empirical analysis confirmed the presence of both unstructured and spatially correlated variations in the effects of contributory factors on severe crash occurrences. The findings also suggested that ignoring spatially structured heterogeneity may result in biased parameter estimates and incorrect inferences, while assuming the regression coefficients to be spatially clustered only is probably subject to the issue of over-smoothness. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Modeling the seasonal circulation in Massachusetts Bay

    USGS Publications Warehouse

    Signell, Richard P.; Jenter, Harry L.; Blumberg, Alan F.; ,

    1994-01-01

    An 18 month simulation of circulation was conducted in Massachusetts Bay, a roughly 35 m deep, 100 × 50 km embayment on the northeastern shelf of the United States. Using a variant of the Blumberg-Mellor (1987) model, it was found that a continuous 18 month run was only possible if the velocity field was Shapiro filtered to remove two-grid-length energy that developed along the open boundary due to a mismatch between locally generated and climatologically forced water properties. The seasonal development of temperature and salinity stratification was well represented by the model once σ-coordinate errors were reduced by subtracting domain-averaged vertical profiles of temperature, salinity and density before horizontal differencing was performed. Comparison of modeled and observed subtidal currents at fixed locations revealed that the model performance varies strongly with season and distance from the open boundaries. The model performs best during unstratified conditions and in the interior of the bay. The model performs poorest during stratified conditions and in the regions where the bay is driven predominantly by remote fluctuations from the Gulf of Maine.
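    The Shapiro filtering used above to suppress two-grid-length energy can be illustrated in one dimension: a second-order (1-2-1) filter annihilates a pure two-grid-length wave in the interior while leaving a constant field untouched. This is a sketch of the filtering principle only, not the model's actual boundary treatment.

```python
import numpy as np

def shapiro_filter(u):
    """One pass of a second-order (1-2-1) Shapiro filter on a 1-D field.

    Interior points become u_i + (1/4)(u_{i-1} - 2 u_i + u_{i+1});
    endpoints are left unchanged.
    """
    v = u.copy()
    v[1:-1] = u[1:-1] + 0.25 * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return v

# a pure two-grid-length wave (+1, -1, +1, ...) is removed entirely
# in the interior, which is exactly the energy the simulation had to damp
wave2dx = (-1.0) ** np.arange(10)
filtered = shapiro_filter(wave2dx)
```

Longer, well-resolved waves pass through with only weak damping, which is why repeated application stabilizes the run without destroying the resolved circulation.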

  1. How do we choose the best model? The impact of cross-validation design on model evaluation for buried threat detection in ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Malof, Jordan M.; Reichman, Daniël; Collins, Leslie M.

    2018-04-01

    A great deal of research has been focused on the development of computer algorithms for buried threat detection (BTD) in ground penetrating radar (GPR) data. Most recently proposed BTD algorithms are supervised, and therefore they employ machine learning models that infer their parameters using training data. Cross-validation (CV) is a popular method for evaluating the performance of such algorithms, in which the available data is systematically split into N disjoint subsets, and an algorithm is repeatedly trained on N-1 subsets and tested on the excluded subset. There are several common types of CV in BTD, which differ principally in the spatial criterion used to partition the data: site-based, lane-based, region-based, etc. The performance metrics obtained via CV are often used to suggest the superiority of one model over others; however, most studies utilize just one type of CV, and the impact of this choice is unclear. Here we employ several types of CV to evaluate algorithms from a recent large-scale BTD study. The results indicate that the rank-order of the performance of the algorithms varies substantially depending upon which type of CV is used. For example, the rank-1 algorithm for region-based CV is the lowest ranked algorithm for site-based CV. This suggests that any algorithm results should be interpreted carefully with respect to the type of CV employed. We discuss some potential interpretations of performance, given a particular type of CV.
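    The spatially partitioned CV variants described above (site-based, lane-based, region-based) can be implemented with a group-aware splitter such as scikit-learn's GroupKFold, treating e.g. the lane as the grouping unit. The features, labels, and lane IDs below are random stand-ins, not GPR data.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n = 60
X = rng.normal(size=(n, 4))          # stand-in features, not real GPR data
y = rng.integers(0, 2, size=n)       # stand-in threat/non-threat labels
lanes = np.repeat(np.arange(6), 10)  # hypothetical lane label per sample

# lane-based CV: every lane falls entirely in the train or the test split,
# so no spatial unit leaks across the partition boundary
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, y, groups=lanes):
    assert set(lanes[train_idx]).isdisjoint(lanes[test_idx])
```

Swapping the `groups` array for site or region labels yields the other CV types in the abstract, which is precisely the choice the paper shows can reorder algorithm rankings.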

  2. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of a polynomial linear parameter varying (PLPV) system, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is achieved, which satisfies both robustness and decoupling performance requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  3. Effect of flow-pressure phase on performance of regenerators in the range of 4 K to 20 K

    NASA Astrophysics Data System (ADS)

    Lewis, M. A.; Taylor, R. P.; Bradley, P. E.; Radebaugh, R.

    2014-01-01

    Modeling with REGEN3.3 has shown that the phase between flow and pressure at the cold end of 4 K regenerators has a large effect on their second-law efficiency. The use of inertance tubes in small 4 K pulse tube cryocoolers has limited phase-shifting ability, and their phase shift cannot be varied unless their dimensions are varied. We report here on the use of a miniature linear compressor, operating at the pulse tube warm-end temperature of about 30 K, as a controllable expander that can be used to vary the phase over 360°. We also use the back EMF of the linear motor to measure the acoustic power, flow rate amplitude, and phase between flow and pressure at the piston face. We discuss the measurements of the linear motor parameters that are required to determine the piston velocity from the back EMF, as well as the measurement procedures to determine the back EMF when the expander is operating at a temperature around 30 K. Our experimental results on the performance of a regenerator/pulse tube stage operating below 30 K show an optimum performance when the flow at the phase shifter lags the pressure by about 65° to 80°, which is close to the model results of about 60°. Temperatures below 10 K were achieved at the cold end in these measurements. The efficiency of the compressor operating as an expander is also discussed.
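    The reason the flow-pressure phase matters is that the time-averaged acoustic power carried by sinusoidal pressure and volume-flow oscillations scales with the cosine of the phase between them, W = (1/2)|p||U|cos(θ). A minimal sketch of that relation (amplitudes below are arbitrary illustrative values, not the paper's measurements):

```python
import numpy as np

def acoustic_power(p_amp, u_amp, phase_deg):
    """Time-averaged acoustic power of sinusoidal pressure and volume-flow
    oscillations: W = (1/2) * |p| * |U| * cos(theta).

    p_amp     : pressure oscillation amplitude (Pa)
    u_amp     : volume flow oscillation amplitude (m^3/s)
    phase_deg : phase of flow relative to pressure (degrees)
    """
    return 0.5 * p_amp * u_amp * np.cos(np.radians(phase_deg))

# in-phase flow and pressure carry maximum power; a 90 degree phase
# difference carries none, so intermediate lags trade power for phasing
w_inphase = acoustic_power(1.0e5, 1.0e-3, 0.0)    # 50 W
w_lag65 = acoustic_power(1.0e5, 1.0e-3, 65.0)     # reduced by cos(65°)
```

This is why a controllable expander that sets the phase directly, rather than an inertance tube of fixed dimensions, allows the regenerator to be driven at its optimum phase.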

  4. Formation of double front detonations of a condensed-phase explosive with powdered aluminium

    NASA Astrophysics Data System (ADS)

    Kim, Wuhyun; Gwak, Min-cheol; Yoh, Jack J.

    2018-03-01

    The performance characteristics of aluminised high explosive are considered by varying the aluminium (Al) mass fraction in a hybrid non-ideal detonation model. Since the time scales of the characteristic induction and combustion of high explosives and Al particles differ, the process of energy release behind the leading detonation wave front occurs over an extended period of time. Two cardinal observations are reported: a decrease in detonation velocity with an increase in Al mass fraction and a double front detonation (DFD) feature when anaerobic Al reaction occurs behind the front. In order to simulate the performance characteristics due to the varying Al mass fraction, the tetrahexamine tetranitramine (HMX) is considered as a base high explosive when formulating the multiphase conservation laws of mass, momentum, and energy exchanges between particles and HMX product gases. While experimental studies have been reported on the effect of Al mass fraction on both gas-phase and solid-phase detonations, the numerical investigations have been limited to only gas-phase detonation for the varying Al particles in the mixture. In the current study, a two-phase model is utilised for understanding the volumetric effects of Al mass fraction in condensed phase detonations. A series of unconfined and confined rate sticks are considered for characterising the performance of aluminised HMX with a maximum Al mass fraction of 50%. The simulated results are compared with the experimental data for 5-25% mass fractions, and the higher mass fraction behaviours are consistent with the experimental observations.

  5. The Importance of Specific Workplace Environment Characteristics for Maximum Health and Performance: Healthcare Workers' Perspective.

    PubMed

    Sagha Zadeh, Rana; Shepley, Mardelle M; Owora, Arthur Hamie; Dannenbaum, Martha C; Waggener, Laurie T; Chung, Susan Sung Eun

    2018-05-01

    To examine the importance of specific workplace environment characteristics for maximum health and performance, as rated by healthcare employees, and how these ratings relate to the nature of their work. A cross-sectional mixed-method study was conducted with content analysis and robust regression models to examine the relationship between workplace environment characteristics and perceived importance in promoting health and performance. Our findings suggest that perceptions of key environment characteristics that safeguard health and performance in healthcare workplaces may vary by employee sex, setting, and nature of healthcare work involved. Theme and model descriptions of the influence of these factors on participant perceptions are provided. Employee feedback on workplace characteristics that impact health and performance could be instrumental in determining the priorities of workplace design.

  6. Sixth Graders Investigate Models and Designs through Teacher-Directed and Student-Centered Inquiry Lessons: Effects on Performance and Attitudes

    ERIC Educational Resources Information Center

    Olsen, Benjamin D.; Rule, Audrey C.

    2016-01-01

    Science inquiry has been found to be effective with students from diverse backgrounds and varied academic abilities. This study compared student learning, enjoyment, motivation, perceived understanding, and creativity during a science unit on Models and Designs for 38 sixth grade students (20 male, 18 female; 1 Black, 1 Hispanic and 36 White). The…

  7. An Evaluation of Two English Language Learner (ELL) Instructional Models at School District ABC: Pull-In and Push-Out

    ERIC Educational Resources Information Center

    Lloyd, Sonya LaShawn

    2014-01-01

    Providing academic assistance to English Language Learners (ELLs) is varied and often ineffective. The purpose of this causal-comparative study was to determine if there was a relationship between 9th grade students' performance on the High School Graduation Exam (HSGE) in reading and language and the Push-in and Pull-out models of instruction.…

  8. On improving the performance of nonphotochemical quenching in CP29 light-harvesting antenna complex

    DOE PAGES

    Berman, Gennady Petrovich; Nesterov, Alexander I.; Sayre, Richard Thomas; ...

    2016-02-02

    In this study, we model and simulate the performance of charge-transfer in nonphotochemical quenching (NPQ) in the CP29 light-harvesting antenna-complex associated with photosystem II (PSII). The model consists of five discrete excitonic energy states and two sinks, responsible for the potentially damaging processes and charge-transfer channels, respectively. We demonstrate that by varying (i) the parameters of the chlorophyll-based dimer, (ii) the resonant properties of the protein-solvent environment interaction, and (iii) the energy transfer rates to the sinks, one can significantly improve the performance of the NPQ. In conclusion, our analysis suggests strategies for improving the performance of the NPQ in response to environmental changes, and may stimulate experimental verification.

  9. TEMPORALLY-RESOLVED AMMONIA EMISSION INVENTORIES: CURRENT ESTIMATES, EVALUATION TOOLS, AND MEASUREMENT NEEDS

    EPA Science Inventory

    In this study, we evaluate the suitability of a three-dimensional chemical transport model (CTM) as a tool for assessing ammonia emission inventories, calculate the improvement in CTM performance owing to recent advances in temporally-varying ammonia emission estimates, and ident...

  10. Intelligent Engine Systems: Acoustics

    NASA Technical Reports Server (NTRS)

    Wojno, John; Martens, Steve; Simpson, Benjamin

    2008-01-01

    An extensive study of new fan exhaust nozzle technologies was performed. Three new uniform chevron nozzles were designed, based on extensive CFD analysis. Two new azimuthally varying variants were defined. All five were tested, along with two existing nozzles, on a representative model-scale, medium BPR exhaust nozzle. Substantial acoustic benefits were obtained from the uniform chevron nozzle designs, the best benefit being provided by an existing design. However, one of the azimuthally varying nozzle designs exhibited even better performance than any of the uniform chevron nozzles. In addition to the fan chevron nozzles, a new technology was demonstrated, using devices that enhance mixing when applied to an exhaust nozzle. The acoustic benefits from these devices applied to medium BPR nozzles were similar, and in some cases superior to, those obtained from conventional uniform chevron nozzles. However, none of the low noise technologies provided equivalent acoustic benefits on a model-scale high BPR exhaust nozzle, similar to current large commercial applications. New technologies must be identified to improve the acoustics of state-of-the-art high BPR jet engines.

  11. Effects of two types of intra-team feedback on developing a shared mental model in Command & Control teams.

    PubMed

    Rasker, P C; Post, W M; Schraagen, J M

    2000-08-01

    In two studies, the effect of two types of intra-team feedback on developing a shared mental model in Command & Control teams was investigated. A distinction is made between performance monitoring and team self-correction. Performance monitoring is the ability of team members to monitor each other's task execution and give feedback during task execution. Team self-correction is the process in which team members engage in evaluating their performance and in determining their strategies after task execution. In two experiments, the opportunity to engage in performance monitoring and team self-correction, respectively, was varied systematically. Both performance monitoring and team self-correction appeared beneficial for improving team performance. Teams that had the opportunity to engage in performance monitoring, however, performed better than teams that had the opportunity to engage in team self-correction.

  12. Anion exchange membrane fuel cell modelling

    NASA Astrophysics Data System (ADS)

    Fragiacomo, P.; Astorino, E.; Chippari, G.; De Lorenzo, G.; Czarnetzki, W. T.; Schneider, W.

    2018-04-01

    A parametric model predicting the performance of a solid polymer electrolyte, anion exchange membrane fuel cell (AEMFC), has been developed, in Matlab environment, based on interrelated electrical and thermal models. The electrical model proposed is developed by modelling an AEMFC open-circuit output voltage, irreversible voltage losses along with a mass balance, while the thermal model is based on the energy balance. The proposed model of the AEMFC stack estimates its dynamic behaviour, in particular the operating temperature variation for different discharge current values. The results of the theoretical fuel cell (FC) stack are reported and analysed in order to highlight the FC performance and how it varies by changing the values of some parameters such as temperature and pressure. Both the electrical and thermal FC models were validated by comparing the model results with experimental data and the results of other models found in the literature.
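    The kinds of voltage losses the electrical model accounts for (open-circuit voltage minus activation, ohmic, and concentration losses) can be sketched as a textbook polarization curve. All parameter values below are hypothetical placeholders, not the paper's fitted AEMFC parameters.

```python
import numpy as np

def cell_voltage(i, e_oc=1.05, a=0.05, i0=1e-4, r=0.2, m=3e-5, n=8.0):
    """Illustrative fuel cell polarization curve (all parameters hypothetical).

    i    : current density (A/cm^2), must be > 0
    e_oc : open-circuit voltage (V)
    a    : Tafel coefficient (V); activation loss = a * ln(i / i0)
    r    : area-specific ohmic resistance (ohm cm^2)
    m, n : concentration-loss coefficients (V, cm^2/A)
    """
    activation = a * np.log(i / i0)
    ohmic = r * i
    concentration = m * np.exp(n * i)
    return e_oc - activation - ohmic - concentration

# sweep current density to trace the polarization curve
i = np.linspace(0.01, 1.0, 100)
v = cell_voltage(i)
```

Cell voltage falls monotonically with current density, with activation losses dominating at low current, ohmic losses in the middle of the curve, and concentration losses at high current; a thermal balance coupled to the same sweep would reproduce the stack's temperature dynamics described above.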

  13. Applications of psychophysical models to the study of auditory development

    NASA Astrophysics Data System (ADS)

    Werner, Lynne

    2003-04-01

    Psychophysical models of listening, such as the energy detector model, have provided a framework from which to characterize the function of the mature auditory system and to explore how mature listeners make use of auditory information in sound identification. The application of such models to the study of auditory development has similarly provided insight into the characteristics of infant hearing and listening. Infants' intensity, frequency, temporal and spatial resolution have been described at least grossly, and some contributions of immature listening strategies to infant hearing have been identified. Infants' psychoacoustic performance is typically poorer than adults' under identical stimulus conditions. However, the infant's performance typically varies with stimulus condition in a way that is qualitatively similar to the adult's performance. In some cases, though, infants perform in a qualitatively different way from adults in psychoacoustic experiments. Further, recent psychoacoustic studies of children suggest that the classic models of listening may be inadequate to describe children's performance. The characteristics of a model that might be appropriate for the immature listener will be outlined, and the implications for models of mature listening will be discussed. [Work supported by NIH grants DC00396 and DC04661.]

  14. Electro-thermal analysis of contact resistance

    NASA Astrophysics Data System (ADS)

    Pandey, Nitin; Jain, Ishant; Reddy, Sudhakar; Gulhane, Nitin P.

    2018-05-01

    Electro-mechanical characterization of copper samples is performed at the macroscopic level to understand the dependence of electrical contact resistance and temperature on surface roughness and contact pressure. For two different surface roughness levels of the samples, six load levels are selected and varied to capture the bulk temperature rise and electrical contact resistance. Accordingly, the copper samples are modelled and analysed using COMSOL™ as a simulation package, and the results are validated by the experiments. The interface temperature during simulation is obtained using the Mikic elastic correlation and by directly entering the experimental contact resistance value. The load values are varied and then reversed in a similar fashion to capture the hysteresis losses. The governing equations and assumptions underlying these models and their significance are examined, and possible justifications for the observed variations are discussed. An equivalent Greenwood model is also predicted by mapping the results of the experiment.

  15. Proportional hazards model with varying coefficients for length-biased data.

    PubMed

    Zhang, Feipeng; Chen, Xuerong; Zhou, Yong

    2014-01-01

    Length-biased data arise in many important applications including epidemiological cohort studies, cancer prevention trials and studies of labor economics. Such data are also often subject to right censoring due to loss of follow-up or the end of study. In this paper, we consider a proportional hazards model with varying coefficients for right-censored and length-biased data, which is used to study nonlinear interaction effects between covariates and an exposure variable. A local estimating equation method is proposed for the unknown coefficients and the intercept function in the model. The asymptotic properties of the proposed estimators are established by using martingale theory and kernel smoothing techniques. Our simulation studies demonstrate that the proposed estimators have excellent finite-sample performance. The Channing House data are analyzed to demonstrate the applications of the proposed method.

  16. Comparison of Colonoscopy Quality Measures Across Various Practice Settings and the Impact of Performance Scorecards.

    PubMed

    Inra, Jennifer A; Nayor, Jennifer; Rosenblatt, Margery; Mutinga, Muthoka; Reddy, Sarathchandra I; Syngal, Sapna; Kastrinos, Fay

    2017-04-01

    Quality performance measures for screening colonoscopy vary among endoscopists. The impact of practice setting is unknown. We aimed to (1) compare screening colonoscopy performance measures among three different US practice settings; (2) evaluate factors associated with adenoma detection; and (3) assess a scorecard intervention on performance metrics. This multi-center prospective study compared patient, endoscopist, and colonoscopy characteristics performed at a tertiary care hospital (TCH), community-based hospital (CBH), and private practice group (PPG). Withdrawal times (WT), cecal intubation, and adenoma detection rates (ADR) were compared by site at baseline and 12 weeks following scorecard distribution. Generalized linear mixed models identified factors associated with adenoma detection. Twenty-eight endoscopists performed colonoscopies on 1987 asymptomatic, average-risk individuals ≥50 years. Endoscopist and patient characteristics were similar across sites. The PPG screened more men (TCH: 42.8%, CBH: 45.0%, PPG: 54.2%; p < 0.0001). Preparation quality varied with good/excellent results in 70.6, 88.3, and 92% of TCH, CBH, and PPG cases, respectively (p < 0.0001). Male ADRs, cecal intubation, and WT exceeded recommended benchmarks despite variable results at each site; female ADRs were <15% at the PPG which screened the fewest females. Performance remained unchanged following scorecard distribution. Adenoma detection was associated with increasing patient age, male gender, WT, adequate preparation, but not practice setting. Each practice performed high-quality screening colonoscopy. Scorecards did not improve performance metrics. Preparation quality varies among practice settings and can be modified to improve adenoma detection.

  17. Coupling the Weather Research and Forecasting (WRF) model and Large Eddy Simulations with Actuator Disk Model: predictions of wind farm power production

    NASA Astrophysics Data System (ADS)

    Garcia Cartagena, Edgardo Javier; Santoni, Christian; Ciri, Umberto; Iungo, Giacomo Valerio; Leonardi, Stefano

    2015-11-01

    A large-scale wind farm operating under realistic atmospheric conditions is studied by coupling meso-scale and micro-scale models. For this purpose, the Weather Research and Forecasting (WRF) model is coupled with an in-house LES solver for wind farms. The code is based on a finite difference scheme with a Runge-Kutta, fractional-step method and the Actuator Disk Model. The WRF model has been configured using seven one-way nested domains, where each child domain has a mesh size one third of its parent domain. A horizontal resolution of 70 m is used in the innermost domain. A section from the smallest and finest nested domain, 7.5 diameters upwind of the wind farm, is used as the inlet boundary condition for the LES code. The wind farm consists of six turbines aligned with the mean wind direction, with a streamwise spacing of 10 rotor diameters (D) and a spanwise spacing of 2.75D. Three simulations were performed by varying the velocity fluctuations at the inlet: random perturbations, a precursor simulation, and the recycling perturbation method. Results are compared with a simulation of the same wind farm with an ideal uniform wind speed to assess the importance of the time-varying incoming wind velocity. Numerical simulations were performed at TACC (Grant CTS070066). This work was supported by NSF (Grant IIA-1243482 WINDINSPIRE).

  18. Lamtoro charcoal (L. leucocephala) as bioreductor in nickel laterite reduction: performance and kinetics study

    NASA Astrophysics Data System (ADS)

    Petrus, H. T. B. M.; Diga, A.; Rhamdani, A. R.; Warmada, I. W.; Yuliansyah, A. T.; Perdana, I.

    2017-04-01

    The performance and kinetics of nickel laterite reduction were studied. In this work, the reduction of nickel laterite ores by anthracite coal, representing the high-grade carbon content matter, and lamtoro charcoal, representing the bioreductor, was conducted in air and CO2 atmospheres, within the temperature range from 800°C to 1000°C. XRD analysis was applied to observe the performance of anthracite and lamtoro as reductants. Two models, a spherical particle geometry model and the Ginstling-Brounshtein diffusion model, were applied to study the kinetic parameters. The results indicated that the type of reductant and the reduction atmosphere used greatly influence the kinetic parameters. The obtained values of activation energy vary in the range of 13.42-18.12 kcal/mol.
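    Activation energies like those reported above are typically extracted from rate constants at several temperatures via an Arrhenius fit: ln k = ln A - Ea/(RT), so the slope of ln k versus 1/T gives -Ea/R. A minimal sketch under assumed values (the rate constants are synthetic, generated from an invented pre-exponential factor and a 15 kcal/mol activation energy chosen inside the paper's 13.42-18.12 kcal/mol range):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Synthetic rate constants k(T) at the study's temperature range,
# generated from an assumed Ea of 15 kcal/mol for illustration.
Ea_true = 15.0 * 4184.0          # J/mol
A = 2.0e3                        # assumed pre-exponential factor
T = np.array([800.0, 900.0, 1000.0]) + 273.15  # K
k = A * np.exp(-Ea_true / (R * T))

# Arrhenius fit: slope of ln k vs 1/T equals -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_kcal = -slope * R / 4184.0
print(round(Ea_kcal, 2))  # recovers ~15 kcal/mol from the synthetic data
```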

  19. Sensitivity of Space Station alpha joint robust controller to structural modal parameter variations

    NASA Technical Reports Server (NTRS)

    Kumar, Renjith R.; Cooper, Paul A.; Lim, Tae W.

    1991-01-01

    The photovoltaic array sun tracking control system of Space Station Freedom is described. A synthesis procedure for determining optimized values of the design variables of the control system is developed using a constrained optimization technique. The synthesis is performed to provide a given level of stability margin, to achieve the most responsive tracking performance, and to meet other design requirements. Performance of the baseline design, which is synthesized using predicted structural characteristics, is discussed, and the sensitivity of the stability margin is examined for variations of the frequencies, mode shapes and damping ratios of dominant structural modes. The design provides enough robustness to tolerate a sizeable error in the predicted modal parameters. A study was made of the sensitivity of performance indicators as the modal parameters of the dominant modes vary. The design variables are resynthesized for varying modal parameters in order to achieve the most responsive tracking performance while satisfying the design requirements. This procedure of reoptimizing design parameters would be useful in improving the control system performance if accurate model data are provided.

  20. Spatial variability of chlorophyll and nitrogen content of rice from hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Moharana, Shreedevi; Dutta, Subashisa

    2016-12-01

    Chlorophyll and nitrogen are the most essential parameters for paddy crop growth. Spectroradiometric measurements were collected at canopy level during the critical growth period of rice. Chemical analysis was performed to quantify the total leaf content. By exploiting the ground-based measurements, regression models were established relating chlorophyll- and nitrogen-aimed indices to their corresponding crop growth variables. Vegetation index models were developed for mapping these parameters from Hyperion imagery in an agriculture system. It was inferred that the present Simple Ratio (SR) and Leaf Nitrogen Concentration (LNC) indices, which followed linear and nonlinear relationships respectively, were completely different from the published indices of Tian et al. (2011). The nitrogen content varied widely from 1 to 4% using the present modified index models and only from 2 to 3% using those of Tian et al. (2011). The modified LNC index model performed better than the established Tian et al. (2011) model as far as nitrogen content estimated from Hyperion imagery was concerned. Furthermore, within the observed chlorophyll range obtained from the studied rice varieties grown in the rice agriculture system, the index models (LNC, OASVI, Gitelson, mSR and MTCI) performed well in mapping the spatial distribution of rice chlorophyll content from Hyperion imagery. The spatial distribution of total chlorophyll content varied widely from 1.77 to 5.81 mg/g (LNC), 3.0 to 13 mg/g (OASVI), 0.5 to 10.43 mg/g (Gitelson), 2.18 to 10.61 mg/g (mSR) and 2.90 to 5.40 mg/g (MTCI). The spatial information of these parameters will help in proper nutrient management and yield forecasting, and will serve as input for crop growth and forecasting models for a precision rice agriculture system.

  1. Time Varying Compensator Design for Reconfigurable Structures Using Non-Collocated Feedback

    NASA Technical Reports Server (NTRS)

    Scott, Michael A.

    1996-01-01

    Analysis and synthesis tools are developed to improve the dynamic performance of reconfigurable, nonminimum phase, non-strictly positive real, time-varying systems. A novel Spline Varying Optimal (SVO) controller is developed for the kinematic nonlinear system. There are several advantages to using the SVO controller, in which the spline function approximates the system model, observer, and controller gain: the spline function approximation is simply connected, so the SVO controller is more continuous than traditional gain-scheduled controllers when implemented on a time varying plant; it is easier for real-time implementation in storage and computational effort; where system identification is required, the spline function requires fewer experiments, namely four; and initial startup estimator transients are eliminated. The SVO compensator was evaluated on a high fidelity simulation of the Shuttle Remote Manipulator System. The SVO controller demonstrated significant improvement over the present arm performance: (1) the damping level was improved by a factor of 3; and (2) peak joint torque was reduced by a factor of 2 following Shuttle thruster firings.

  2. Keep it simple? Predicting primary health care costs with clinical morbidity measures

    PubMed Central

    Brilleman, Samuel L.; Gravelle, Hugh; Hollinghurst, Sandra; Purdy, Sarah; Salisbury, Chris; Windmeijer, Frank

    2014-01-01

    Models of the determinants of individuals’ primary care costs can be used to set capitation payments to providers and to test for horizontal equity. We compare the ability of eight measures of patient morbidity and multimorbidity to predict future primary care costs and examine capitation payments based on them. The measures were derived from four morbidity descriptive systems: 17 chronic diseases in the Quality and Outcomes Framework (QOF); 17 chronic diseases in the Charlson scheme; 114 Expanded Diagnosis Clusters (EDCs); and 68 Adjusted Clinical Groups (ACGs). These were applied to patient records of 86,100 individuals in 174 English practices. For a given disease description system, counts of diseases and sets of disease dummy variables had similar explanatory power. The EDC measures performed best followed by the QOF and ACG measures. The Charlson measures had the worst performance but still improved markedly on models containing only age, gender, deprivation and practice effects. Comparisons of predictive power for different morbidity measures were similar for linear and exponential models, but the relative predictive power of the models varied with the morbidity measure. Capitation payments for an individual patient vary considerably with the different morbidity measures included in the cost model. Even for the best fitting model large differences between expected cost and capitation for some types of patient suggest incentives for patient selection. Models with any of the morbidity measures show higher cost for more deprived patients but the positive effect of deprivation on cost was smaller in better fitting models. PMID:24657375

  3. Impact of rough potentials in rocked ratchet performance

    NASA Astrophysics Data System (ADS)

    Camargo, S.; Anteneodo, C.

    2018-04-01

    We consider thermal ratchets modeled by overdamped Brownian motion in a spatially periodic potential with a tilting process, both unbiased on average. We investigate the impact of introducing roughness into the potential profile on the flux and efficiency of the ratchet. Both the amplitude and the wavelength that characterize the roughness are varied. We show that, depending on the ratchet parameters, rugosity can either spoil or enhance the ratchet performance.
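    The setup above (overdamped Brownian particle, periodic potential plus rough perturbation, unbiased rocking force) can be sketched with an Euler-Maruyama integration of the Langevin equation. All parameter values, the potential shape, and the sinusoidal drive below are assumptions for illustration, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rocked-ratchet parameters (assumed, not from the paper).
V0, a, lam = 1.0, 0.3, 0.1         # potential scale; roughness amplitude, wavelength
A_drive, omega = 2.0, 1.0          # unbiased (zero-mean) sinusoidal tilt
D, dt, steps = 0.1, 1e-3, 100_000  # noise strength, time step, step count

def force(x, t):
    # -dV/dx for V(x) = V0[sin(2*pi*x) + 0.25 sin(4*pi*x)] + a sin(2*pi*x/lam),
    # an asymmetric ratchet potential plus a rough perturbation,
    # tilted by the rocking force A cos(omega*t).
    dV = V0 * (2 * np.pi * np.cos(2 * np.pi * x) + np.pi * np.cos(4 * np.pi * x))
    dV += a * (2 * np.pi / lam) * np.cos(2 * np.pi * x / lam)
    return -dV + A_drive * np.cos(omega * t)

x = 0.0
for n in range(steps):  # Euler-Maruyama: dx = F dt + sqrt(2D dt) * N(0,1)
    x += force(x, n * dt) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

flux = x / (steps * dt)  # mean drift velocity: the ratchet flux
print(flux)
```

    Sweeping `a` and `lam` in such a sketch is the kind of scan the abstract describes when it varies roughness amplitude and wavelength.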

  4. The kinetics of lactate production and removal during whole-body exercise

    PubMed Central

    2012-01-01

    Background Based on a literature review, the current study aimed to construct mathematical models of lactate production and removal in both muscles and blood, during steady state and at varying intensities, during whole-body exercise. In order to experimentally test the models in dynamic situations, a cross-country skier performed laboratory tests while treadmill roller skiing, from which work rate, aerobic power, and blood lactate concentration were measured. A two-compartment simulation model for blood lactate production and removal was constructed. Results The simulated and experimental data differed by less than 0.5 mmol/L both during steady state and at varying sub-maximal intensities. However, the simulation model for lactate removal after high exercise intensities seems to require further examination. Conclusions Overall, the simulation models of lactate production and removal provide useful insight into the parameters that affect the blood lactate response, and specifically into how blood lactate concentration during practical training and testing in dynamic situations should be interpreted. PMID:22413898
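    A two-compartment model of the kind described can be sketched as two coupled rate equations, production into a muscle compartment, exchange into blood, and removal from blood, integrated with Euler steps. The rate constants, volumes (taken equal here), and the intensity-dependent production term below are assumptions for illustration, not the paper's fitted values:

```python
# Minimal two-compartment (muscle M / blood B) lactate sketch.
# dM/dt = production - k_ex*(M - B);  dB/dt = k_ex*(M - B) - k_rem*(B - B0)
# Equal compartment volumes are assumed; all parameters are illustrative.
def simulate(intensity, minutes=30.0, dt=0.01):
    M = B = 1.0                      # mmol/L resting concentrations
    k_ex, k_rem, B0 = 0.6, 0.4, 1.0  # exchange rate, removal rate (1/min), baseline
    prod = 0.9 * intensity           # mmol/(L*min) production in muscle
    for _ in range(int(minutes / dt)):
        dM = prod - k_ex * (M - B)
        dB = k_ex * (M - B) - k_rem * (B - B0)
        M += dM * dt
        B += dB * dt
    return B

# At constant sub-maximal intensity, blood lactate settles toward the
# steady state B0 + prod/k_rem, mirroring the steady-state behavior tested.
print(round(simulate(1.0), 2))
```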

  5. Dimensionality-varied deep convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Qu, Haicheng; Liang, Xuejian; Liang, Shichao; Liu, Wanjun

    2018-01-01

    Many methods of hyperspectral image classification have been proposed recently, and the convolutional neural network (CNN) achieves outstanding performance. However, spectral-spatial classification with a CNN requires an excessively large model, tremendous computation, and a complex network, and a CNN is generally unable to use the noisy bands caused by water-vapor absorption. A dimensionality-varied CNN (DV-CNN) is proposed to address these issues. There are four stages in the DV-CNN, and the dimensionalities of the spectral-spatial feature maps vary with the stages. The DV-CNN can reduce computation and simplify the structure of the network. All feature maps are processed by more kernels in higher stages to extract more precise features. The DV-CNN also improves classification accuracy and enhances robustness to water-vapor absorption bands. Experiments are performed on the Indian Pines and Pavia University scene data sets. The classification performance of the DV-CNN is compared with state-of-the-art methods, including variations of CNNs, traditional methods, and other deep learning methods. A performance analysis of the DV-CNN itself is also carried out. The experimental results demonstrate that the DV-CNN outperforms state-of-the-art methods for spectral-spatial classification and is also robust to water-vapor absorption bands. Moreover, reasonable parameter selection is effective in improving classification accuracy.

  6. Internal performance characteristics of vectored axisymmetric ejector nozzles

    NASA Technical Reports Server (NTRS)

    Lamb, Milton

    1993-01-01

    A series of vectoring axisymmetric ejector nozzles were designed and experimentally tested for internal performance and pumping characteristics at NASA-Langley Research Center. These ejector nozzles used convergent-divergent nozzles as the primary nozzles. The model geometric variables investigated were primary nozzle throat area, primary nozzle expansion ratio, effective ejector expansion ratio (ratio of shroud exit area to primary nozzle throat area), ratio of minimum ejector area to primary nozzle throat area, ratio of ejector upper slot height to lower slot height (measured on the vertical centerline), and thrust vector angle. The primary nozzle pressure ratio was varied from 2.0 to 10.0 depending upon primary nozzle throat area. The corrected ejector-to-primary nozzle weight-flow ratio was varied from 0 (no secondary flow) to approximately 0.21 (21 percent of primary weight-flow rate) depending on ejector nozzle configuration. In addition to the internal performance and pumping characteristics, static pressures were obtained on the shroud walls.

  7. Tribological performance of the biological components of synovial fluid in artificial joint implants

    NASA Astrophysics Data System (ADS)

    Ghosh, Subir; Choudhury, Dipankar; Roy, Taposh; Moradi, Ali; Masjuki, H. H.; Pingguan-Murphy, Belinda

    2015-08-01

    The concentration of biological components of synovial fluid (such as albumin, globulin, hyaluronic acid, and lubricin) varies between healthy persons and osteoarthritis (OA) patients. The aim of the present study is to compare the effects of such variation on tribological performance in a simulated hip joint model. The study was carried out experimentally by utilizing a pin-on-disk simulator on ceramic-on-ceramic (CoC) and ceramic-on-polyethylene (CoP) hip joint implants. The experimental results show that both friction and wear of artificial joints fluctuate with the concentration level of biological components. Moreover, the performance also varies between material combinations. Wear debris sizes and shapes produced by ceramic and polyethylene were diverse. We conclude that the biological components of synovial fluid and their concentrations should be considered in order to select an artificial hip joint to best suit that patient.

  8. Sensitivity analysis of helicopter IMC decelerating steep approach and landing performance to navigation system parameters

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Results of a study to investigate, by means of a computer simulation, the performance sensitivity of helicopter IMC DSAL operations as a function of navigation system parameters are presented. A mathematical model generically representing a navigation system is formulated. The simulated scenario consists of a straight-in helicopter approach to landing along a 6 deg glideslope. The deceleration magnitude chosen is 0.3 g. The navigation model parameters are varied and the statistics of the total system errors (TSE) computed. These statistics are used to determine the critical navigation system parameters that affect the performance of the closed-loop navigation, guidance and control system of a UH-1H helicopter.

  9. Computer-Aided Design and Optimization of High-Performance Vacuum Electronic Devices

    DTIC Science & Technology

    2006-08-15

    approximations to the metric, and space mapping wherein low-accuracy (coarse mesh) solutions can potentially be used more effectively in an...interface and algorithm development. • Work on space-mapping or related methods for utilizing models of varying levels of approximation within an

  10. MPS solidification model. Analysis and calculation of macrosegregation in a casting ingot

    NASA Technical Reports Server (NTRS)

    Poirier, D. R.; Maples, A. L.

    1985-01-01

    Work performed on several existing solidification models for which computer codes and documentation were developed is presented. The models describe the solidification of alloys in which there is a time varying zone of coexisting solid and liquid phases; i.e., the S/L zone. The primary purpose of the models is to calculate macrosegregation in a casting or ingot which results from flow of interdendritic liquid in this S/L zone during solidification. The flow, driven by solidification contractions and by gravity acting on density gradients in the interdendritic liquid, is modeled as flow through a porous medium. In Model 1, the steady state model, the heat flow characteristics are those of steady state solidification; i.e., the S/L zone is of constant width and it moves at a constant velocity relative to the mold. In Model 2, the unsteady state model, the width and rate of movement of the S/L zone are allowed to vary with time as it moves through the ingot. Each of these models exists in two versions. Models 1 and 2 are applicable to binary alloys; models 1M and 2M are applicable to multicomponent alloys.

  11. Insights from transformations under way at four Brookings-Dartmouth accountable care organization pilot sites.

    PubMed

    Larson, Bridget K; Van Citters, Aricca D; Kreindler, Sara A; Carluzzo, Kathleen L; Gbemudu, Josette N; Wu, Frances M; Nelson, Eugene C; Shortell, Stephen M; Fisher, Elliott S

    2012-11-01

    This cross-site comparison of the early experience of four provider organizations participating in the Brookings-Dartmouth Accountable Care Organization Collaborative identifies factors that sites perceived as enablers of successful ACO formation and performance. The four pilots varied in size, with between 7,000 and 50,000 attributed patients and 90 to 2,700 participating physicians. The sites had varying degrees of experience with performance-based payments; however, all formed collaborative new relationships with payers and created shared savings agreements linked to performance on quality measures. Each organization devoted major efforts to physician engagement. Policy makers now need to consider how to support and provide incentives for the successful formation of multipayer ACOs, and how to align private-sector and CMS performance measures. Linking providers to learning networks where payers and providers can address common technical issues could help. These sites' transitions to the new payment model constitute an ongoing journey that will require continual adaptation in the structure of contracts and organizational attributes.

  12. Parameterization of water vapor using high-resolution GPS data and empirical models

    NASA Astrophysics Data System (ADS)

    Ningombam, Shantikumar S.; Jade, Sridevi; Shrungeshwara, T. S.

    2018-03-01

    The present work evaluates eleven existing empirical models to estimate Precipitable Water Vapor (PWV) over a high-altitude (4500 m amsl), cold-desert environment. These models are tested extensively and used globally to estimate PWV for low-altitude sites (below 1000 m amsl). The moist parameters used in the models are: water vapor scale height (Hc), dew point temperature (Td) and water vapor pressure (Es0). These moist parameters are derived from surface air temperature and relative humidity measured at high temporal resolution from an automated weather station. The performance of these models is examined statistically against observed high-resolution GPS (GPSPWV) data over the region (2005-2012). The correlation coefficient (R) between the observed GPSPWV and model PWV is 0.98 for daily data and varies diurnally from 0.93 to 0.97. Parameterization of the moisture parameters was studied in depth (i.e., 2 h to monthly time scales) using GPSPWV, Td, and Es0. The slope of the linear relationship between GPSPWV and Td varies from 0.073°C-1 to 0.106°C-1 (R: 0.83 to 0.97), while that between GPSPWV and Es0 varies from 1.688 to 2.209 (R: 0.95 to 0.99) at daily, monthly and diurnal time scales. In addition, the moist parameters for the cold-desert, high-altitude environment are examined in depth at various time scales during 2005-2012.
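    The PWV-Td parameterization above is a simple linear fit whose slope and correlation can be recovered with least squares. A minimal sketch on synthetic data (the 0.09 per-°C slope sits inside the paper's reported 0.073-0.106 range; the intercept, noise level, and dew-point range are invented, not GPS records):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dew points for a cold, dry site and a linear PWV response
# with small noise; all values are illustrative assumptions.
Td = rng.uniform(-25.0, 5.0, 500)                   # dew point, deg C
pwv = 0.09 * Td + 2.5 + rng.normal(0, 0.05, 500)    # synthetic PWV

# Least-squares slope/intercept and correlation coefficient, as in the
# GPSPWV-vs-Td parameterization described above.
slope, intercept = np.polyfit(Td, pwv, 1)
r = np.corrcoef(Td, pwv)[0, 1]
print(round(slope, 3), round(r, 2))  # recovers the assumed ~0.09 slope, r near 1
```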

  13. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    NASA Technical Reports Server (NTRS)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR(TradeMark) mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR(TradeMark). In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. The modal response (dynamic disturbance) was also measured and compared to the model. We will discuss the fabrication approach and optomechanical design of the ZERODUR(TradeMark) mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  14. Lightweight ZERODUR®: Validation of mirror performance and mirror modeling predictions

    NASA Astrophysics Data System (ADS)

    Hull, Anthony B.; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA’s XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. The modal response (dynamic disturbance) was also measured and compared to the model. We will discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA’s XRCF tests and model validations.

  15. Feasibility study of direct spectra measurements for Thomson scattered signals for KSTAR fusion-grade plasmas

    NASA Astrophysics Data System (ADS)

    Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.

    2017-11-01

    A feasibility study of direct spectral measurements of Thomson scattered photons for fusion-grade plasmas is performed based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated from the Selden function, including the relativistic polarization correction. Noise in the signal is modeled with photon noise and Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on the bias error, full width at half maximum, and entropy of the posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.

  16. Generalized PSF modeling for optimized quantitation in PET imaging.

    PubMed

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modelled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.

  17. Physical modelling in biomechanics.

    PubMed Central

    Koehl, M A R

    2003-01-01

    Physical models, like mathematical models, are useful tools in biomechanical research. Physical models enable investigators to explore parameter space in a way that is not possible using a comparative approach with living organisms: parameters can be varied one at a time to measure the performance consequences of each, while values and combinations not found in nature can be tested. Experiments using physical models in the laboratory or field can circumvent problems posed by uncooperative or endangered organisms. Physical models also permit some aspects of the biomechanical performance of extinct organisms to be measured. Use of properly scaled physical models allows detailed physical measurements to be made for organisms that are too small or fast to be easily studied directly. The process of physical modelling and the advantages and limitations of this approach are illustrated using examples from our research on hydrodynamic forces on sessile organisms, mechanics of hydraulic skeletons, food capture by zooplankton and odour interception by olfactory antennules. PMID:14561350

  18. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.

  19. Validation for Global Solar Wind Prediction Using Ulysses Comparison: Multiple Coronal and Heliospheric Models Installed at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Jian, L. K.; MacNeice, P. J.; Mays, M. L.; Taktakishvili, A.; Odstrcil, D.; Jackson, B.; Yu, H.-S.; Riley, P.; Sokolov, I. V.

    2016-01-01

    The prediction of the background global solar wind is a necessary part of space weather forecasting. Several coronal and heliospheric models have been installed and/or recently upgraded at the Community Coordinated Modeling Center (CCMC), including the Wang-Sheely-Arge (WSA)-Enlil model, MHD-Around-a-Sphere (MAS)-Enlil model, Space Weather Modeling Framework (SWMF), and heliospheric tomography using interplanetary scintillation data. Ulysses recorded the last fast latitudinal scan from southern to northern poles in 2007. By comparing the modeling results with Ulysses observations over seven Carrington rotations, we have extended our third-party validation from the previous near-Earth solar wind to middle to high latitudes, in the same late declining phase of solar cycle 23. Besides visual comparison, we have quantitatively assessed the models' capabilities in reproducing the time series, statistics, and latitudinal variations of solar wind parameters for a specific range of model parameter settings, inputs, and grid configurations available at CCMC. The WSA-Enlil model results vary with three different magnetogram inputs. The MAS-Enlil model captures the solar wind parameters well, despite its underestimation of the speed at middle to high latitudes. The new version of SWMF misses many solar wind variations, probably because it uses lower grid resolution than the other models. The interplanetary scintillation tomography cannot capture the latitudinal variations of the solar wind well yet. Because model performance varies with parameter settings, which are optimized for different epochs or flow states, the performance metric study provided here can serve as a template that researchers can use to validate the models for the time periods and conditions of interest to them.

  20. Validation for global solar wind prediction using Ulysses comparison: Multiple coronal and heliospheric models installed at the Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    Jian, L. K.; MacNeice, P. J.; Mays, M. L.; Taktakishvili, A.; Odstrcil, D.; Jackson, B.; Yu, H.-S.; Riley, P.; Sokolov, I. V.

    2016-08-01

    The prediction of the background global solar wind is a necessary part of space weather forecasting. Several coronal and heliospheric models have been installed and/or recently upgraded at the Community Coordinated Modeling Center (CCMC), including the Wang-Sheely-Arge (WSA)-Enlil model, MHD-Around-a-Sphere (MAS)-Enlil model, Space Weather Modeling Framework (SWMF), and heliospheric tomography using interplanetary scintillation data. Ulysses recorded the last fast latitudinal scan from southern to northern poles in 2007. By comparing the modeling results with Ulysses observations over seven Carrington rotations, we have extended our third-party validation from the previous near-Earth solar wind to middle to high latitudes, in the same late declining phase of solar cycle 23. Besides visual comparison, we have quantitatively assessed the models' capabilities in reproducing the time series, statistics, and latitudinal variations of solar wind parameters for a specific range of model parameter settings, inputs, and grid configurations available at CCMC. The WSA-Enlil model results vary with three different magnetogram inputs. The MAS-Enlil model captures the solar wind parameters well, despite its underestimation of the speed at middle to high latitudes. The new version of SWMF misses many solar wind variations probably because it uses lower grid resolution than other models. The interplanetary scintillation-tomography cannot capture the latitudinal variations of solar wind well yet. Because the model performance varies with parameter settings which are optimized for different epochs or flow states, the performance metric study provided here can serve as a template that researchers can use to validate the models for the time periods and conditions of interest to them.

  1. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future watershed behavior under varying climate conditions. This study investigated calibration performance as a function of the length of the calibration period, the objective function, the hydrologic model structure, and the optimization method. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across the different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function gave better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models differ in complexity and different years carry different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
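
    The objective functions named in this record have standard forms; as an illustration (textbook definitions, not formulas quoted from the study), the three better-performing criteria might be computed as:

```python
import math

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean
    the simulation is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

def index_of_agreement(obs, sim):
    """Willmott's index of agreement, bounded on [0, 1]."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    pot = sum((abs(s - mean_obs) + abs(o - mean_obs)) ** 2
              for o, s in zip(obs, sim))
    return 1.0 - err / pot

def nrmse(obs, sim):
    """Root mean square error normalized by the observed mean (lower is better)."""
    mean_obs = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
    return rmse / mean_obs
```

    A calibration loop would maximize the first two (or minimize the third) over the model's parameter space.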

  2. Modeling the Distribution of African Savanna Elephants in Kruger National Park: An Application of Multi-Scale GLOBELAND30 Data

    NASA Astrophysics Data System (ADS)

    Xu, W.; Hays, B.; Fayrer-Hosken, R.; Presotto, A.

    2016-06-01

    The ability of remote sensing to represent ecologically relevant features at multiple spatial scales makes it a powerful tool for studying wildlife distributions. Species of varying sizes perceive and interact with their environment at differing scales; therefore, it is important to consider the role of spatial resolution of remotely sensed data in the creation of distribution models. The release of the Globeland30 land cover classification in 2014, with its 30 m resolution, presents the opportunity to do precisely that. We created a series of Maximum Entropy distribution models for African savanna elephants (Loxodonta africana) using Globeland30 data analyzed at varying resolutions. We compared these with similarly re-sampled models created from the European Space Agency's Global Land Cover Map (Globcover). These data, in combination with GIS layers of topography and distance to roads, human activity, and water, as well as elephant GPS collar data, were used with MaxEnt software to produce the final distribution models. The AUC (Area Under the Curve) scores indicated that the models created from 600 m data performed better than other spatial resolutions and that the Globeland30 models generally performed better than the Globcover models. Additionally, elevation and distance to rivers seemed to be the most important variables in our models. Our results demonstrate that Globeland30 is a valid alternative to the well-established Globcover for creating wildlife distribution models. It may even be superior for applications which require higher spatial resolution and less nuanced classifications.

  3. A Review of Element-Based Galerkin Methods for Numerical Weather Prediction

    DTIC Science & Technology

    2015-04-01

    with body forces to model the effects of gravity and the Earth's rotation (i.e. Coriolis force). Although the gravitational force varies with both... more phenomena (e.g. resolving non-hydrostatic effects, incorporating more complex moisture parameterizations), their appetite for High Performance... operation effectively). For instance, the ST-based model NOGAPS, used by the U.S. Navy, could not scale beyond 150 processes at typical resolutions [119

  4. Global horizontal irradiance clear sky models: implementation and analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Joshua S.; Hansen, Clifford W.; Reno, Matthew J.

    2012-03-01

    Clear sky models estimate the terrestrial solar radiation under a cloudless sky as a function of the solar elevation angle, site altitude, aerosol concentration, water vapor, and various atmospheric conditions. This report provides an overview of a number of global horizontal irradiance (GHI) clear sky models, from very simple to complex. Validation of clear-sky models requires comparison of model results to measured irradiance during clear-sky periods. To facilitate validation, we present a new algorithm for automatically identifying clear-sky periods in a time series of GHI measurements. We evaluate the performance of selected clear-sky models using measured data from 30 different sites, totaling about 300 site-years of data. We analyze the variation of these errors across time and location. In terms of error averaged over all locations and times, we found that complex models that correctly account for all the atmospheric parameters are slightly more accurate than other models, but, primarily at low elevations, comparable accuracy can be obtained from some simpler models. However, simpler models often exhibit errors that vary with time of day and season, whereas the errors for complex models vary less over time.
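
    The report's clear-sky identification algorithm is not detailed in the abstract; a simplified sketch of the general idea, comparing measured GHI against a clear-sky model in sliding windows, might look like the following (the window length and the 75 W/m2 tolerances are hypothetical, not the report's values):

```python
def detect_clear(ghi_meas, ghi_clear, window=10, mean_tol=75.0, max_tol=75.0):
    """Flag a sample as clear when at least one sliding window covering it
    tracks the clear-sky model closely in both window mean and worst-case
    pointwise deviation."""
    n = len(ghi_meas)
    flags = [False] * n
    for i in range(n - window + 1):
        m = ghi_meas[i:i + window]
        c = ghi_clear[i:i + window]
        mean_ok = abs(sum(m) / window - sum(c) / window) < mean_tol
        max_ok = max(abs(a - b) for a, b in zip(m, c)) < max_tol
        if mean_ok and max_ok:
            for j in range(i, i + window):
                flags[j] = True   # every sample in a qualifying window is clear
    return flags
```

    A cloud passage produces a large pointwise deviation, so every window containing it fails and the affected samples stay flagged as non-clear.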

  5. LPV control for the full region operation of a wind turbine integrated with synchronous generator.

    PubMed

    Cao, Guoyan; Grigoriadis, Karolos M; Nyanteh, Yaw D

    2015-01-01

    Wind turbine conversion systems require feedback control to achieve reliable wind turbine operation and stable current supply. A robust linear parameter varying (LPV) controller is proposed to reduce the structural loads and improve the power extraction of a horizontal axis wind turbine operating in both the partial load and the full load regions. The LPV model is derived from the wind turbine state space models extracted by FAST (fatigue, aerodynamics, structural, and turbulence) code linearization at different operating points. In order to assure a smooth transition between the two regions, appropriate frequency-dependent varying scaling parametric weighting functions are designed in the LPV control structure. The solution of a set of linear matrix inequalities (LMIs) leads to the LPV controller. A synchronous generator model is connected with the closed LPV control loop for examining the electrical subsystem performance obtained by an inner speed control loop. Simulation results of a 1.5 MW horizontal axis wind turbine model on the FAST platform illustrate the benefit of the LPV control and demonstrate the advantages of this proposed LPV controller, when compared with a traditional gain scheduling PI control and prior LPV control configurations. Enhanced structural load mitigation, improved power extraction, and good current performance were obtained from the proposed LPV control.

  6. A Transport Equation Approach to Modeling the Influence of Surface Roughness on Boundary Layer Transition

    NASA Astrophysics Data System (ADS)

    Langel, Christopher Michael

    A computational investigation has been performed to better understand the impact of surface roughness on the flow over a contaminated surface. This thesis highlights the implementation and development of the roughness amplification model in the flow solver OVERFLOW-2. The model, originally proposed by Dassler, Kozulovic, and Fiala, introduces an additional scalar roughness amplification quantity. This value is explicitly set at rough wall boundaries using surface roughness parameters and local flow quantities, and the additional transport equation allows non-local effects of surface roughness to be accounted for downstream of rough sections. The roughness amplification variable is coupled with the Langtry-Menter model and used to modify the criteria for transition. Results from flat plate test cases show good agreement with experimental transition behavior for flow over varying sand grain roughness heights. Additional validation studies were performed on a NACA 0012 airfoil with leading edge roughness. The computationally predicted boundary layer development demonstrates good agreement with experimental results. New tests using varying roughness configurations are being carried out at the Texas A&M Oran W. Nicks Low Speed Wind Tunnel to provide further calibration of the roughness amplification method. An overview and preliminary results of this concurrent experimental investigation are provided.

  7. LPV Control for the Full Region Operation of a Wind Turbine Integrated with Synchronous Generator

    PubMed Central

    Grigoriadis, Karolos M.; Nyanteh, Yaw D.

    2015-01-01

    Wind turbine conversion systems require feedback control to achieve reliable wind turbine operation and stable current supply. A robust linear parameter varying (LPV) controller is proposed to reduce the structural loads and improve the power extraction of a horizontal axis wind turbine operating in both the partial load and the full load regions. The LPV model is derived from the wind turbine state space models extracted by FAST (fatigue, aerodynamics, structural, and turbulence) code linearization at different operating points. In order to assure a smooth transition between the two regions, appropriate frequency-dependent varying scaling parametric weighting functions are designed in the LPV control structure. The solution of a set of linear matrix inequalities (LMIs) leads to the LPV controller. A synchronous generator model is connected with the closed LPV control loop for examining the electrical subsystem performance obtained by an inner speed control loop. Simulation results of a 1.5 MW horizontal axis wind turbine model on the FAST platform illustrate the benefit of the LPV control and demonstrate the advantages of this proposed LPV controller, when compared with a traditional gain scheduling PI control and prior LPV control configurations. Enhanced structural load mitigation, improved power extraction, and good current performance were obtained from the proposed LPV control. PMID:25884036

  8. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods.

    PubMed

    Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J

    2018-05-17

    The purpose of this study is to introduce a chaos level test to evaluate the performance of linear and nonlinear voice type classification methods under varying signal chaos conditions, without relying on subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power level, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the four classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performance of the different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals at all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.
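
    Jitter and shimmer have widely used "local" definitions; as a sketch of those standard formulas (the study's exact variants are not specified in the abstract):

```python
def local_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal cycle periods, normalized by the mean period."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """Local shimmer (%): the same measure applied to cycle peak amplitudes."""
    diffs = [abs(b - a) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

    Both measures are zero for a perfectly periodic signal and grow as cycle-to-cycle variability (and hence signal chaos) increases.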

  9. Seasonal precipitation forecasting for the Melbourne region using a Self-Organizing Maps approach

    NASA Astrophysics Data System (ADS)

    Pidoto, Ross; Wallner, Markus; Haberlandt, Uwe

    2017-04-01

    The Melbourne region experiences highly variable inter-annual rainfall. For close to a decade during the 2000s, below-average rainfall seriously affected the environment, water supplies, and agriculture. A seasonal rainfall forecasting model for the Melbourne region, based on the novel approach of a Self-Organizing Map, has been developed and tested for its prediction performance. Predictor variables at varying lead times were first assessed for inclusion in the model by calculating their importance via Random Forests. Predictor variables tested include the climate indices SOI, DMI, and N3.4, in addition to gridded global sea surface temperature data. Five forecasting models were developed: an annual model and four seasonal models, each individually optimized for performance through Pearson's correlation r and the Nash-Sutcliffe efficiency (NSE). The annual model showed a prediction performance of r = 0.54 and NSE = 0.14. The best seasonal model was for spring, with r = 0.61 and NSE = 0.31; autumn was the worst-performing seasonal model. The sea surface temperature data contributed fewer predictor variables than the climate indices. Most predictor variables were selected at the minimum lead time; however, some predictors were found at lead times of up to a year.

  10. The use of interaural parameters during incoherence detection in reproducible noise

    NASA Astrophysics Data System (ADS)

    Goupell, Matthew Joseph

    Interaural incoherence is a measure of the dissimilarity of the signals in the left and right ears. It is important in a number of acoustical phenomena, such as a listener's sensation of envelopment and apparent source width in room acoustics, speech intelligibility, and binaural release from energetic masking. Humans are remarkably sensitive to the difference between perfectly coherent and slightly incoherent signals; however, the nature of this sensitivity is not well understood. The purpose of this dissertation is to understand what parameters are important to incoherence detection. Incoherence is perceived to have time-varying characteristics, and it is conjectured that incoherence detection is performed by a process that takes this time dependency into account. Left-ear-right-ear noise-pairs were generated, all with a fixed value of interaural coherence, 0.9922. The noises had a center frequency of 500 Hz, a bandwidth of 14 Hz, and a duration of 500 ms. Listeners were required to discriminate between these slightly incoherent noises and diotic noises, with a coherence of 1.0. It was found that the value of interaural incoherence itself was an inadequate predictor of discrimination. Instead, incoherence was much more readily detected for those noise-pairs with the largest fluctuations in interaural phase and level differences (as measured by the standard deviation). Noise-pairs with the same value of coherence and a geometric mean frequency of 500 Hz were also generated for bandwidths of 108 Hz and 2394 Hz. It was found that for increasing bandwidth, fluctuations in interaural differences varied less between different noise-pairs, and detection performance varied less as well. The results suggest that incoherence detection is based on the size and the speed of interaural fluctuations and that the value of coherence itself predicts performance only in the wide-band limit, where different particular noises with the same incoherence have similar fluctuations.
Noise-pairs with short durations of 100, 50, and 25 ms, and bandwidth of 14 Hz, and a coherence of 0.9922 were used to test if a short-term incoherence function is used in incoherence detection. It was found that listeners could significantly use fluctuations of phase and level to detect incoherence for all three of these short durations. Therefore, a short-term coherence function is not used to detect incoherence. For the smallest duration of 25 ms, listeners' detection cue sometimes changed from a "width" cue to a lateralization cue. Modeling of the data was performed. Ten different binaural models were tested against detection data for 14-Hz and 108-Hz bandwidths. These models included different types of binaural processing: independent interaural phase and level differences, lateral position, and short-term cross-correlation. Several preprocessing features were incorporated in the models: compression, temporal averaging, and envelope weighting. For the 14-Hz bandwidth data, the most successful model assumed independent centers for interaural phase and interaural level processing, and this model correlated with detectability at r = 0.87. That model also described the data best when it was assumed that interaural phase fluctuations and interaural level fluctuations contribute approximately equally to incoherence detection. For the 108-Hz bandwidth data, detection performance varied much less among different waveforms, and the data were less able to distinguish between models.

  11. Discovering the Sequential Structure of Thought

    ERIC Educational Resources Information Center

    Anderson, John R.; Fincham, Jon M.

    2014-01-01

    Multi-voxel pattern recognition techniques combined with Hidden Markov models can be used to discover the mental states that people go through in performing a task. The combined method identifies both the mental states and how their durations vary with experimental conditions. We apply this method to a task where participants solve novel…

  12. Employee Assistance Programs: Effective Tools for Counseling Employees.

    ERIC Educational Resources Information Center

    Kraft, Ed

    1991-01-01

    College employee assistance program designs demonstrate the varied needs of a workforce. Whatever the model, the helping approach remains to (1) identify problem employees through performance-related issues; (2) refer them to the assistance program for further intervention; and (3) follow up with employee and supervisor to ensure a successful…

  13. Autonomic Physiological Response Patterns Related to Intelligence

    ERIC Educational Resources Information Center

    Melis, Cor; van Boxtel, Anton

    2007-01-01

    We examined autonomic physiological responses induced by six different cognitive ability tasks, varying in complexity, that were selected on the basis of Guilford's Structure of Intellect model. In a group of 52 participants, task performance was measured together with nine different autonomic response measures and respiration rate. Weighted…

  14. Coping with Trial-to-Trial Variability of Event Related Signals: A Bayesian Inference Approach

    NASA Technical Reports Server (NTRS)

    Ding, Mingzhou; Chen, Youghong; Knuth, Kevin H.; Bressler, Steven L.; Schroeder, Charles E.

    2005-01-01

    In electro-neurophysiology, single-trial brain responses to a sensory stimulus or a motor act are commonly assumed to result from the linear superposition of a stereotypic event-related signal (e.g. the event-related potential or ERP) that is invariant across trials and some ongoing brain activity often referred to as noise. To extract the signal, one performs an ensemble average of the brain responses over many identical trials to attenuate the noise. To date, this simple signal-plus-noise (SPN) model has been the dominant approach in cognitive neuroscience. Mounting empirical evidence has shown that the assumptions underlying this model may be overly simplistic. More realistic models have been proposed that account for the trial-to-trial variability of the event-related signal as well as the possibility of multiple differentially varying components within a given ERP waveform. The variable-signal-plus-noise (VSPN) model, which has been demonstrated to provide the foundation for separation and characterization of multiple differentially varying components, has the potential to provide a rich source of information for questions related to neural functions that complement the SPN model. Thus, being able to estimate the amplitude and latency of each ERP component on a trial-by-trial basis provides a critical link between the perceived benefits of the VSPN model and its many concrete applications. In this paper we describe a Bayesian approach to deal with this issue; the resulting strategy is referred to as differentially Variable Component Analysis (dVCA). We compare the performance of dVCA on simulated data with Independent Component Analysis (ICA) and analyze neurobiological recordings from monkeys performing cognitive tasks.

  15. Parametric study of closed wet cooling tower thermal performance

    NASA Astrophysics Data System (ADS)

    Qasim, S. M.; Hayder, M. J.

    2017-08-01

    The present study involves experimental and theoretical analysis to evaluate the thermal performance of a modified Closed Wet Cooling Tower (CWCT). The experimental study includes the design, manufacture, and testing of a prototype of a modified counter-flow forced-draft CWCT; the modification adds packing to the conventional CWCT. A series of experiments was carried out at different operational parameters. In terms of energy analysis, the thermal performance parameters of the tower are: cooling range, tower approach, cooling capacity, thermal efficiency, and heat and mass transfer coefficients. The theoretical study develops Artificial Neural Network (ANN) models to predict the various thermal performance parameters of the tower. Using the experimental data for training and testing, the models were trained with a multi-layer back-propagation algorithm while varying all of the operational parameters examined in the experimental tests.
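
    The first four thermal performance parameters listed above have conventional definitions; a sketch using the standard textbook formulas (not taken from the paper itself, which may define them differently):

```python
def tower_performance(t_in, t_out, t_wb, m_dot, cp=4.186):
    """Standard cooling-tower metrics. Temperatures in deg C, water mass
    flow m_dot in kg/s, cp in kJ/(kg K); capacity is returned in kW."""
    rng = t_in - t_out               # cooling range
    approach = t_out - t_wb          # tower approach (outlet vs. inlet wet bulb)
    eff = rng / (rng + approach)     # thermal efficiency
    capacity = m_dot * cp * rng      # cooling capacity, kW
    return rng, approach, eff, capacity
```

    An ANN model as described in the record would be trained to map the operational parameters (flow rates, inlet temperatures, wet-bulb temperature) to these outputs.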

  16. Ensemble Kalman filter inference of spatially-varying Manning's n coefficients in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Siripatana, Adil; Mayo, Talea; Knio, Omar; Dawson, Clint; Maître, Olivier Le; Hoteit, Ibrahim

    2018-07-01

    Ensemble Kalman filtering (EnKF) is an established framework for large-scale state estimation problems. EnKFs can also be used for state-parameter estimation, using the so-called "Joint-EnKF" approach. The idea is simply to augment the state vector with the parameters to be estimated and assign invariant dynamics for the time evolution of the parameters. In this contribution, we investigate the efficiency of the Joint-EnKF for estimating spatially-varying Manning's n coefficients used to define the bottom roughness in the Shallow Water Equations (SWEs) of a coastal ocean model. Observation System Simulation Experiments (OSSEs) are conducted using the ADvanced CIRCulation (ADCIRC) model, which solves a modified form of the Shallow Water Equations. A deterministic EnKF, the Singular Evolutive Interpolated Kalman (SEIK) filter, is used to estimate a vector of Manning's n coefficients defined at the model nodal points by assimilating synthetic water elevation data. It is found that with a reasonable ensemble size (O(10)), the filter's estimate converges to the reference Manning's field. To enhance performance, we have further reduced the dimension of the parameter search space through a Karhunen-Loève (KL) expansion. We have also iterated on the filter update step to better account for the nonlinearity of the parameter estimation problem. We study the sensitivity of the system to the ensemble size, localization scale, dimension of retained KL modes, and number of iterations. The performance of the proposed framework in terms of estimation accuracy suggests that a well-tuned Joint-EnKF provides a promising robust approach to infer spatially varying seabed roughness parameters in the context of coastal ocean modeling.
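
    The Joint-EnKF idea, augmenting the state vector with the parameters and letting the analysis update both, can be illustrated on a toy scalar model. This is a hypothetical twin experiment with a stochastic (perturbed-observation) EnKF, not ADCIRC or the deterministic SEIK filter used in the paper:

```python
import numpy as np

def joint_enkf_step(ens, obs, obs_err, H, rng):
    """One stochastic EnKF analysis step on an augmented ensemble whose
    columns are [state; parameters]. Parameters have invariant dynamics,
    so they are corrected only through their sampled covariance with
    the observed state."""
    n_obs, n_ens = H.shape[0], ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)                       # sample covariance
    S = H @ P @ H.T + obs_err ** 2 * np.eye(n_obs)  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    y = obs[:, None] + obs_err * rng.standard_normal((n_obs, n_ens))
    return ens + K @ (y - H @ ens)                  # perturbed-obs update

# Twin experiment: scalar model x_{k+1} = a * x_k + 1 with unknown a = 0.8.
rng = np.random.default_rng(1)
true_a, x_true, n_ens = 0.8, 1.0, 50
ens = np.vstack([x_true + 0.5 * rng.standard_normal(n_ens),  # state x
                 0.5 + 0.2 * rng.standard_normal(n_ens)])    # parameter a
H = np.array([[1.0, 0.0]])                                   # observe x only
for _ in range(40):
    x_true = true_a * x_true + 1.0
    ens[0] = ens[1] * ens[0] + 1.0     # forecast; parameter persists
    ens = joint_enkf_step(ens, np.array([x_true]), 0.1, H, rng)
a_est = ens[1].mean()                  # drifts from the 0.5 prior toward 0.8
```

    The parameter is never observed directly; it is pulled toward its true value purely through the ensemble cross-covariance between parameter and state, which is the essence of the Joint-EnKF.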

  17. Time-varying wing-twist improves aerodynamic efficiency of forward flight in butterflies.

    PubMed

    Zheng, Lingxiao; Hedrick, Tyson L; Mittal, Rajat

    2013-01-01

    Insect wings can undergo significant chordwise (camber) as well as spanwise (twist) deformation during flapping flight but the effect of these deformations is not well understood. The shape and size of butterfly wings leads to particularly large wing deformations, making them an ideal test case for investigation of these effects. Here we use computational models derived from experiments on free-flying butterflies to understand the effect of time-varying twist and camber on the aerodynamic performance of these insects. High-speed videogrammetry is used to capture the wing kinematics, including deformation, of a Painted Lady butterfly (Vanessa cardui) in untethered, forward flight. These experimental results are then analyzed computationally using a high-fidelity, three-dimensional, unsteady Navier-Stokes flow solver. For comparison to this case, a set of non-deforming, flat-plate wing (FPW) models of wing motion are synthesized and subjected to the same analysis along with a wing model that matches the time-varying wing-twist observed for the butterfly, but has no deformation in camber. The simulations show that the observed butterfly wing (OBW) outperforms all the flat-plate wings in terms of usable force production as well as the ratio of lift to power by at least 29% and 46%, respectively. This increase in efficiency of lift production is at least three-fold greater than reported for other insects. Interestingly, we also find that the twist-only-wing (TOW) model recovers much of the performance of the OBW, demonstrating that wing-twist, and not camber is key to forward flight in these insects. The implications of this on the design of flapping wing micro-aerial vehicles are discussed.

  18. Time-Varying Wing-Twist Improves Aerodynamic Efficiency of Forward Flight in Butterflies

    PubMed Central

    Zheng, Lingxiao; Hedrick, Tyson L.; Mittal, Rajat

    2013-01-01

    Insect wings can undergo significant chordwise (camber) as well as spanwise (twist) deformation during flapping flight but the effect of these deformations is not well understood. The shape and size of butterfly wings leads to particularly large wing deformations, making them an ideal test case for investigation of these effects. Here we use computational models derived from experiments on free-flying butterflies to understand the effect of time-varying twist and camber on the aerodynamic performance of these insects. High-speed videogrammetry is used to capture the wing kinematics, including deformation, of a Painted Lady butterfly (Vanessa cardui) in untethered, forward flight. These experimental results are then analyzed computationally using a high-fidelity, three-dimensional, unsteady Navier-Stokes flow solver. For comparison to this case, a set of non-deforming, flat-plate wing (FPW) models of wing motion are synthesized and subjected to the same analysis along with a wing model that matches the time-varying wing-twist observed for the butterfly, but has no deformation in camber. The simulations show that the observed butterfly wing (OBW) outperforms all the flat-plate wings in terms of usable force production as well as the ratio of lift to power by at least 29% and 46%, respectively. This increase in efficiency of lift production is at least three-fold greater than reported for other insects. Interestingly, we also find that the twist-only-wing (TOW) model recovers much of the performance of the OBW, demonstrating that wing-twist, and not camber is key to forward flight in these insects. The implications of this on the design of flapping wing micro-aerial vehicles are discussed. PMID:23341923

  19. Adiabatic regularization of the power spectrum in nonminimally coupled general single-field inflation

    NASA Astrophysics Data System (ADS)

    Alinea, Allan L.; Kubota, Takahiro

    2018-03-01

    We perform adiabatic regularization of the power spectrum in nonminimally coupled general single-field inflation with varying speed of sound. The subtraction is performed within the framework of an earlier study by Urakawa and Starobinsky dealing with canonical inflation. Inspired by Fakir and Unruh's model of nonminimally coupled chaotic inflation, we find, upon imposing a near-scale-invariance condition, that the subtraction term decays exponentially with the number of e-folds. As in the result for canonical inflation, the regularized power spectrum tends to the "bare" power spectrum as the Universe expands during (and even after) inflation. This work justifies the use of the "bare" power spectrum in standard calculations in the most general context of slow-roll single-field inflation involving nonminimal coupling and varying speed of sound.

  20. Plasma properties in electron-bombardment ion thrusters

    NASA Technical Reports Server (NTRS)

    Matossian, J. N.; Beattie, J. R.

    1987-01-01

    The paper describes a technique for computing volume-averaged plasma properties within electron-bombardment ion thrusters, using spatially varying Langmuir-probe measurements. Average values of the electron densities are defined by integrating the spatially varying Maxwellian and primary electron densities over the ionization volume, and then dividing by the volume. Plasma properties obtained in the 30-cm-diameter J-series and ring-cusp thrusters are analyzed by the volume-averaging technique. The superior performance exhibited by the ring-cusp thruster is correlated with a higher average Maxwellian electron temperature. The ring-cusp thruster maintains the same fraction of primary electrons as does the J-series thruster, but at a much lower ion production cost. The volume-averaged predictions for both thrusters are compared with those of a detailed thruster performance model.
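
    The volume-averaging technique described, integrating a spatially varying density over the ionization volume and then dividing by that volume, can be sketched for an axisymmetric discharge chamber on a hypothetical cell-centred (z, r) grid (an illustration of the general method, not the paper's implementation):

```python
import math

def volume_average_cyl(n_rz, R, L):
    """Volume average of a quantity sampled at cell-centred (z, r) nodes
    inside a cylinder of radius R and length L:
    <n> = (1/V) * integral of n * 2*pi*r dr dz, with V = pi * R^2 * L."""
    nz, nr = len(n_rz), len(n_rz[0])
    dr, dz = R / nr, L / nz
    total = 0.0
    for row in n_rz:                      # loop over axial slices
        for j, n in enumerate(row):
            r = (j + 0.5) * dr            # cell-centred radius
            total += n * 2.0 * math.pi * r * dr * dz
    return total / (math.pi * R ** 2 * L)
```

    The 2*pi*r weight means measurements at large radii dominate the average, so a density profile peaked on axis yields a volume average well below its peak value.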

  1. Transonic high Reynolds number stability and control characteristics of a 0.015-scale remotely controlled elevon model (44-0) of the space shuttle orbiter tested in the Calspan 8-foot TWT (LA70)

    NASA Technical Reports Server (NTRS)

    Parrell, H.; Gamble, J. D.

    1977-01-01

    Transonic wind tunnel tests were run on a 0.015-scale model of the space shuttle orbiter vehicle in the 8-foot transonic wind tunnel. The purpose of the test program was to obtain basic shuttle aerodynamic data through a full range of elevon and aileron deflections, to verify data obtained at other facilities, and to assess the effects of Reynolds number. Tests were performed at Mach numbers from 0.35 to 1.20 and Reynolds numbers from 3,500,000 to 8,200,000 per foot. The high Reynolds number conditions (nominally 8,000,000/foot) were obtained using the ejector augmentation system. Angle of attack was varied from -2 to +20 degrees at sideslip angles of -2, 0, and +2 degrees. Sideslip was varied from -6 to +8 degrees at constant angles of attack from 0 to +20 degrees. Aileron settings were varied from -5 to +10 degrees at elevon deflections of -10, 0, and +10 degrees. Fixed aileron settings of 0 and 2 degrees in combination with various fixed elevon settings between -20 and +5 degrees were also run at varying angles of attack.

  2. Using Mathematical Modeling and Set-Based Design Principles to Recommend an Existing CVL Design

    DTIC Science & Technology

    2017-09-01

    designs, it would be worth researching the feasibility of varying the launch method on some of the larger light aircraft carriers, such as the Liaoning... thesis examines the trade space in major design areas such as tonnage, aircraft launch method, propulsion, and performance in order to illustrate... future conflict.

  3. Evaluating Internal Model Strength and Performance of Myoelectric Prosthesis Control Strategies.

    PubMed

    Shehata, Ahmed W; Scheme, Erik J; Sensinger, Jonathon W

    2018-05-01

    On-going developments in myoelectric prosthesis control have provided prosthesis users with an assortment of control strategies that vary in reliability and performance. Many studies have focused on improving performance by providing feedback to the user but have overlooked the effect of this feedback on internal model development, which is key to improving long-term performance. In this paper, the strength of the internal models developed for two commonly used myoelectric control strategies, raw control with raw feedback (using a regression-based approach) and filtered control with filtered feedback (using a classifier-based approach), was evaluated using two psychometric measures: trial-by-trial adaptation and just-noticeable difference. The performance of both strategies was also evaluated using a Schmidt-style target acquisition task. Results obtained from 24 able-bodied subjects showed that although filtered control with filtered feedback had better short-term performance in path efficiency, raw control with raw feedback resulted in stronger internal model development, which may lead to better long-term performance. Despite inherent noise in the control signals of the regression controller, these findings suggest that the rich feedback associated with regression control may be used to improve human understanding of the myoelectric control system.

  4. Damage evaluation by a guided wave-hidden Markov model based method

    NASA Astrophysics Data System (ADS)

    Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin

    2016-02-01

    Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges of practical engineering applications is the accurate interpretation of the guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading condition and a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.
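The trend-estimation idea in this record can be sketched with a toy two-state HMM: compute the posterior probability of the "damaged" state by forward-backward, then smooth it with an unweighted moving average. The transition matrix, emission matrix, and observation symbols below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

A = np.array([[0.95, 0.05],     # transition: healthy -> {healthy, damaged}
              [0.00, 1.00]])    # damage treated as absorbing
B = np.array([[0.8, 0.2],       # emission probs for symbols {low, high}
              [0.3, 0.7]])
pi = np.array([1.0, 0.0])       # start in the healthy state

def posterior_damaged(obs):
    """Forward-backward posterior P(damaged at t | all observations)."""
    T = len(obs)
    alpha = np.zeros((T, 2)); beta = np.zeros((T, 2))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma[:, 1]

def moving_average(x, w=3):
    # Unweighted moving average used as the damage-propagation trend.
    return np.convolve(x, np.ones(w) / w, mode="valid")

obs = [0, 0, 0, 1, 1, 1, 1, 1]  # guided-wave feature symbols over time
trend = moving_average(posterior_damaged(obs))
```

As the observations shift toward the "high" symbol, the smoothed posterior rises, which is the monotone trend the evaluation relies on.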

  5. Research on Multi Hydrological Models Applicability and Modelling Data Uncertainty Analysis for Flash Flood Simulation in Hilly Area

    NASA Astrophysics Data System (ADS)

    Ye, L.; Wu, J.; Wang, L.; Song, T.; Ji, R.

    2017-12-01

    Flooding in small-scale watersheds in hilly areas is characterized by short duration and rapid rise and recession, due to complex underlying surfaces, varied climate types and the strong effect of human activities. It is almost impossible for a single hydrological model to describe the variation of flooding in both time and space accurately for all catchments in hilly areas, because hydrological characteristics can vary significantly among catchments. In this study, we compare the performance of 5 hydrological models of varying complexity in simulating flash floods for 14 small-scale watersheds in China, in order to find the relationship between the applicability of the hydrological models and the catchment characteristics. Meanwhile, given that hydrological data are sparse in hilly areas, the effect of precipitation data, DEM resolution and their interaction on the uncertainty of flood simulation is also illustrated. In general, the results showed that the distributed hydrological model (HEC-HMS in this study) performed better than the lumped hydrological models. The Xinanjiang and API models gave good simulations for humid catchments when long-term, continuous rainfall data were provided. The Dahuofang model simulated the flood peak well, while its runoff generation module was relatively poor. In addition, the effects of the different modelling data on the simulations are not simply superposed; there is a complex interaction among the data sources. Overall, both the catchment hydrological characteristics and the modelling data situation should be taken into consideration when choosing a suitable hydrological model for flood simulation in small-scale catchments in hilly areas.

  6. Boys' and girls' weight status and math performance from kindergarten entry through fifth grade: a mediated analysis.

    PubMed

    Gable, Sara; Krull, Jennifer L; Chang, Yiting

    2012-01-01

    This study tests a mediated model of boys' and girls' weight status and math performance with 6,250 children from the Early Childhood Longitudinal Study. Five data points spanning kindergarten entry (mean age=68.46 months) through fifth grade (mean age=134.60 months) were analyzed. Three weight status groups were identified: persistent obesity, later onset obesity, and never obese. Multilevel models tested relations between weight status and math performance, weight status and interpersonal skills and internalizing behaviors, and interpersonal skills and internalizing behaviors and math performance. Interpersonal skills mediated the association between weight status and math performance for girls, and internalizing behaviors mediated the association between weight status and math performance for both sexes, with effects varying by group and time. © 2012 The Authors. Child Development © 2012 Society for Research in Child Development, Inc.

  7. Generating Performance Models for Irregular Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav

    2017-05-30

    Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; PageRank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  8. Comparison of infiltration models in NIT Kurukshetra campus

    NASA Astrophysics Data System (ADS)

    Singh, Balraj; Sihag, Parveen; Singh, Karan

    2018-05-01

    The aim of the present investigation is to evaluate the performance of infiltration models used to calculate the infiltration rate of soils. Ten different locations were chosen to measure the infiltration rate in NIT Kurukshetra. The instrument used for the experimentation was a double-ring infiltrometer. Popular infiltration models, namely Horton's, Philip's, Modified Philip's and Green-Ampt, were fitted to the infiltration test data, and the performance of the models was determined using the Nash-Sutcliffe efficiency (NSE), coefficient of correlation (C.C) and root mean square error (RMSE) criteria. The results suggest that the Modified Philip's model is the most accurate, with values of C.C, NSE and RMSE ranging over 0.9947-0.9999, 0.9877-0.9998 and 0.1402-0.6913 (mm/h), respectively. Thus, this model can be used to synthetically produce infiltration data in the absence of measured infiltration data under the same conditions.
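The fit-and-score step can be sketched for Philip's equation, f(t) = S/(2·sqrt(t)) + K, which is linear in the basis [1/(2·sqrt(t)), 1] and so fits by least squares. The infiltration data below are illustrative, not the NIT Kurukshetra measurements.

```python
import numpy as np

# Fit Philip's infiltration equation to observed rates, then score the fit
# with NSE, the correlation coefficient (C.C) and RMSE.

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0])            # elapsed time, hours
f_obs = np.array([42.0, 30.5, 22.4, 16.8, 13.1, 12.0])   # observed rate, mm/h

# Linear least squares in the basis [1/(2*sqrt(t)), 1] gives sorptivity S
# and the constant term K.
X = np.column_stack([1.0 / (2.0 * np.sqrt(t)), np.ones_like(t)])
S, K = np.linalg.lstsq(X, f_obs, rcond=None)[0]
f_sim = X @ np.array([S, K])

nse = 1.0 - np.sum((f_obs - f_sim) ** 2) / np.sum((f_obs - f_obs.mean()) ** 2)
cc = np.corrcoef(f_obs, f_sim)[0, 1]
rmse = np.sqrt(np.mean((f_obs - f_sim) ** 2))
```

An NSE of 1 means a perfect fit and 0 means the model is no better than the observed mean, which is why the paper's 0.98+ values indicate a strong fit.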

  9. Functional CAR models for large spatially correlated functional datasets.

    PubMed

    Zhang, Lin; Baladandayuthapani, Veerabhadran; Zhu, Hongxiao; Baggerly, Keith A; Majewski, Tadeusz; Czerniak, Bogdan A; Morris, Jeffrey S

    2016-01-01

    We develop a functional conditional autoregressive (CAR) model for spatially correlated data for which functions are collected on areal units of a lattice. Our model performs functional response regression while accounting for spatial correlations with potentially nonseparable and nonstationary covariance structure, in both the space and functional domains. We show theoretically that our construction leads to a CAR model at each functional location, with spatial covariance parameters varying and borrowing strength across the functional domain. Using basis transformation strategies, the nonseparable spatial-functional model is computationally scalable to enormous functional datasets, generalizable to different basis functions, and can be used on functions defined on higher dimensional domains such as images. Through simulation studies, we demonstrate that accounting for the spatial correlation in our modeling leads to improved functional regression performance. Applied to a high-throughput spatially correlated copy number dataset, the model identifies genetic markers not identified by comparable methods that ignore spatial correlations.

  10. Evaluating the precision of passive sampling methods using ...

    EPA Pesticide Factsheets

    To assess these models, four different thicknesses of low-density polyethylene (LDPE) passive samplers were co-deployed for 28 days in the water column at three sites in New Bedford Harbor, MA, USA. Each sampler was pre-loaded with six PCB performance reference compounds (PRCs) to assess equilibrium status, such that the percent of PRC lost would vary depending on PRC and LDPE thickness. These data allow subsequent Cfree comparisons to be made in two ways: (1) comparing Cfree derived from one thickness using different models and (2) comparing Cfree derived from the same model using different thicknesses of LDPE. Following the deployments, the percent of PRC lost ranged from 0-100%. As expected, fractional equilibrium decreased with increasing PRC molecular weight as well as sampler thickness. Overall, a total of 27 PCBs (log KOW ranging from 5.07-8.09) were measured at Cfree concentrations varying from 0.05 pg/L (PCB 206) to about 200 ng/L (PCB 28) on a single LDPE sampler. Relative standard deviations (RSDs) for total PCB measurements using the same thickness and varying model types ranged from 0.04-12% and increased with sampler thickness. Total PCB RSDs for measurements using the same model and varying thickness ranged from 6-30%. No RSD trends between models were observed, but RSD did increase as Cfree decreased. These findings indicate that existing models yield precise and reproducible results when using LDPE and PRCs to measure Cfree. This work in

  11. The Role of Surface Energy Exchange for Simulating Wind Inflow: An Evaluation of Multiple Land Surface Models in WRF for the Southern Great Plains Site Field Campaign Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wharton, Sonia; Simpson, Matthew; Osuna, Jessica

    The Weather Research and Forecasting (WRF) model is used to investigate the effect of the choice of land surface model (LSM) on the near-surface wind profile, including heights reached by multi-megawatt wind turbines. Simulations of wind profiles and surface energy fluxes were made using five LSMs of varying degrees of sophistication in dealing with soil-plant-atmosphere feedbacks for the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) Central Facility in Oklahoma. Surface-flux and wind-profile measurements were available for validation. The WRF model was run for three two-week periods during which varying canopy and meteorological conditions existed. The LSMs predicted a wide range of energy-flux and wind-shear magnitudes even during the cool autumn period when we expected less variability. Simulations of energy fluxes varied in accuracy by model sophistication, whereby LSMs with very simple or no soil-plant-atmosphere feedbacks were the least accurate; however, the most complex models did not consistently produce more accurate results. Errors in wind shear also were sensitive to LSM choice and were partially related to the accuracy of energy flux data. The variability of LSM performance was relatively high, suggesting that LSM representation of energy fluxes in the WRF model remains a significant source of uncertainty for simulating wind turbine inflow conditions.

  12. Weather model performance on extreme rainfall events simulation's over Western Iberian Peninsula

    NASA Astrophysics Data System (ADS)

    Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.

    2012-08-01

    This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the mainland Portugal rainy season. The heavy to extreme rainfall periods were due to several low surface-pressure systems associated with frontal surfaces. The total precipitation for December exceeded the climatological mean for the 1971-2000 period by 89 mm on average, varying from 190 mm (south of the country) to 1175 mm (north of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location (RunObsN) is included; (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but the performance of the other two runs was also good, so that the selected extreme rainfall episode was successfully reproduced.

  13. Biogeography-based combinatorial strategy for efficient autonomous underwater vehicle motion planning and task-time management

    NASA Astrophysics Data System (ADS)

    Zadeh, S. M.; Powers, D. M. W.; Sammut, K.; Yazdani, A. M.

    2016-12-01

    Autonomous Underwater Vehicles (AUVs) are capable of spending long periods of time carrying out various underwater missions and marine tasks. In this paper, a novel conflict-free motion planning framework is introduced to enhance the vehicle's mission performance by completing the maximum number of highest-priority tasks in a limited time across a large-scale, waypoint-cluttered operating field, while ensuring safe deployment during the mission. The proposed combinatorial route-path planner model takes advantage of the Biogeography-Based Optimization (BBO) algorithm to satisfy the objectives of both the higher- and lower-level motion planners and guarantees maximization of mission productivity for a single-vehicle operation. The performance of the model is investigated under different scenarios, including particular cost constraints in time-varying operating fields. To show the reliability of the proposed model, the performance of each motion planner is assessed separately, and statistical analysis is then undertaken to evaluate the total performance of the entire model. The simulation results indicate the stability of the proposed model and its feasibility for real experiments.

  14. Development and Integration of an Advanced Stirling Convertor Linear Alternator Model for a Tool Simulating Convertor Performance and Creating Phasor Diagrams

    NASA Technical Reports Server (NTRS)

    Metscher, Jonathan F.; Lewandowski, Edward J.

    2013-01-01

    A simple model of the Advanced Stirling Convertors (ASC) linear alternator and an AC bus controller has been developed and combined with a previously developed thermodynamic model of the convertor for a more complete simulation and analysis of the system performance. The model was developed using Sage, a 1-D thermodynamic modeling program that now includes electro-magnetic components. The convertor, consisting of a free-piston Stirling engine combined with a linear alternator, has sufficiently sinusoidal steady-state behavior to allow for phasor analysis of the forces and voltages acting in the system. A MATLAB graphical user interface (GUI) has been developed to interface with the Sage software for simplified use of the ASC model, calculation of forces, and automated creation of phasor diagrams. The GUI allows the user to vary convertor parameters while fixing different input or output parameters and observe the effect on the phasor diagrams or system performance. The new ASC model and GUI help create a better understanding of the relationship between the electrical component voltages and mechanical forces. This allows better insight into the overall convertor dynamics and performance.

  15. Assessment and Improvement of GOCE based Global Geopotential Models Using Wavelet Decomposition

    NASA Astrophysics Data System (ADS)

    Erol, Serdar; Erol, Bihter; Serkan Isik, Mustafa

    2016-07-01

    Recent Earth gravity field satellite missions, and specifically the GOCE mission, have led to significant improvements in the quality of gravity field models in both accuracy and resolution. However, the performance and quality of each released model vary not only with spatial location on the Earth but also across the bands of the spectral expansion. Therefore, assessing the global models against in-situ data in different territories is essential for clarifying their local performance. In addition, spectral evaluation and quality assessment of the signal in each part of the spherical harmonic spectrum is needed to reach a clear decision on the commission error content of a model and to determine the optimal degree that yields the best results. These analyses also provide a perspective on the global behavior of the models and an opportunity to track the sequential improvement of the models as the missions develop and new mission data contribute. This study reviews spectral assessment results in Turkey for the recently released GOCE-based global geopotential models DIR-R5 and TIM-R5, enhanced using EGM2008 as the reference model, against terrestrial data. Besides reporting the GOCE mission's contribution to the models in Turkish territory, the study aims to improve the spectral quality of the noise-contaminated parts of these models via wavelet decomposition. The motivation is to conserve the useful component of the GOCE signal as much as possible while fusing the filtered GOCE-based models with EGM2008 in the appropriate spectral bands. The investigation also assesses the coherence and correlation between Earth gravity field parameters (free-air gravity anomalies and geoid undulations) derived from the validated geopotential models and from terrestrial data (GPS/levelling, terrestrial gravity observations, DTM, etc.), as well as the WGM2012 products. In conclusion, the numerical results clarify the performance of the assessed models in Turkish territory and verify the potential of wavelet decomposition for improving geopotential models.

  16. Three-dimensional hysteresis compensation enhances accuracy of robotic artificial muscles

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Simeonov, Anthony; Yip, Michael C.

    2018-03-01

    Robotic artificial muscles are compliant and can generate straight contractions. They are increasingly popular as driving mechanisms for robotic systems. However, their strain and tension force often vary simultaneously under varying loads and inputs, resulting in three-dimensional hysteretic relationships. The three-dimensional hysteresis in robotic artificial muscles poses difficulties in estimating how they work and how to make them perform designed motions. This study proposes an approach to driving robotic artificial muscles to generate designed motions and forces by modeling and compensating for their three-dimensional hysteresis. The proposed scheme captures the nonlinearity by embedding two hysteresis models. The effectiveness of the model is confirmed by testing three popular robotic artificial muscles. Inverting the proposed model allows us to compensate for the hysteresis among temperature surrogate, contraction length, and tension force of a shape memory alloy (SMA) actuator. Feedforward control of an SMA-actuated robotic bicep is demonstrated. This study can be generalized to other robotic artificial muscles, thus enabling muscle-powered machines to generate desired motions.

  17. Preliminary 2-D shell analysis of the space shuttle solid rocket boosters

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Gillian, Ronnie E.; Nemeth, Michael P.

    1987-01-01

    A two-dimensional shell model of an entire solid rocket booster (SRB) has been developed using the STAGSC-1 computer code and executed on the Ames CRAY computer. The purpose of these analyses is to calculate the overall deflection and stress distributions for the SRB when subjected to mechanical loads corresponding to critical times during the launch sequence. The mechanical loading conditions for the full SRB arise from the external tank (ET) attachment points, the solid rocket motor (SRM) pressure load, and the SRB hold down posts. The ET strut loads vary with time after the Space Shuttle main engine (SSME) ignition. The SRM internal pressure varies axially by approximately 100 psi. Static analyses of the full SRB are performed using a snapshot picture of the loads. The field and factory joints are modeled by using equivalent stiffness joints instead of detailed models of the joint. As such, local joint behavior cannot be obtained from this global model.

  18. Do repeated assessments of performance status improve predictions for risk of death among patients with cancer? A population-based cohort study.

    PubMed

    Su, Jiandong; Barbera, Lisa; Sutradhar, Rinku

    2015-06-01

    Prior work has utilized longitudinal information on performance status to demonstrate its association with risk of death among cancer patients; however, no study has assessed whether such longitudinal information improves predictions for risk of death. To examine whether the use of repeated performance status assessments improves predictions for risk of death compared to using only the performance status assessment at the time of cancer diagnosis. This was a population-based longitudinal study of adult outpatients who had a cancer diagnosis and at least one assessment of performance status. To account for each patient's changing performance status over time, we implemented a Cox model with a time-varying covariate for performance status. This model was compared to a Cox model using only a time-fixed (baseline) covariate for performance status. The regression coefficients of each model were derived from a randomly selected 60% of patients, and the predictive ability of each model was then assessed via concordance probabilities when applied to the remaining 40% of patients. Our study consisted of 15,487 cancer patients with over 53,000 performance status assessments. The utilization of repeated performance status assessments improved predictions for risk of death compared to using only the performance status assessment taken at diagnosis. When studying the hazard of death among patients with cancer, researchers should incorporate changing information on performance status scores, if available, instead of simply baseline information on performance status. © The Author(s) 2015.
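The concordance probability used above to compare the two Cox models can be sketched directly: it is the fraction of usable patient pairs in which the patient with the higher risk score dies earlier. The risk scores and survival times below are simulated, not the study's data.

```python
import numpy as np

# Harrell-style concordance: among pairs where patient i is observed to die
# before patient j, count how often the model ranked i as higher risk.

rng = np.random.default_rng(0)
risk = rng.normal(size=200)                    # model risk score per patient
time = rng.exponential(1.0 / np.exp(risk))     # higher risk -> earlier death
event = np.ones_like(time, dtype=bool)         # no censoring in this sketch

def concordance(time, event, risk):
    usable, concordant = 0, 0.0
    for i in range(len(time)):
        for j in range(len(time)):
            if event[i] and time[i] < time[j]:     # i died first: usable pair
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5              # ties get half credit
    return concordant / usable

c = concordance(time, event, risk)
```

A value of 0.5 is chance-level ranking; the time-varying model in the study improves this statistic by letting the risk score track the latest assessment.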

  19. A new framework to enhance the interpretation of external validation studies of clinical prediction models.

    PubMed

    Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M

    2015-03-01

    It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from "different but related" samples as compared with that of the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Independent external validation of predictive models for urinary dysfunction following external beam radiotherapy of the prostate: Issues in model development and reporting.

    PubMed

    Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; Kennedy, Angel; Joseph, David J; Denham, James W

    2016-08-01

    Most predictive models are not sufficiently validated for prospective use. We performed independent external validation of published predictive models for urinary dysfunctions following radiotherapy of the prostate. Multivariable models developed to predict atomised and generalised urinary symptoms, both acute and late, were considered for validation using a dataset representing 754 participants from the TROG 03.04-RADAR trial. Endpoints and features were harmonised to match the predictive models. The overall performance, calibration and discrimination were assessed. Fourteen models from four publications were validated. The discrimination of the predictive models in an independent external validation cohort, measured using the area under the receiver operating characteristic (ROC) curve, ranged from 0.473 to 0.695, generally lower than in internal validation. Four models had an ROC >0.6. Shrinkage was required for all predictive models' coefficients, ranging from -0.309 (prediction probability was inverse to observed proportion) to 0.823. Predictive models which included baseline symptoms as a feature produced the highest discrimination. Two models produced a predicted probability of 0 and 1 for all patients. Predictive models vary in performance and transferability, illustrating the need for improvements in model development and reporting. Several models showed reasonable potential, but efforts should be increased to improve performance. Baseline symptoms should always be considered as potential features for predictive models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross-model validity. As part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55% of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6% of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies.
Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.

  2. Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models

    NASA Astrophysics Data System (ADS)

    Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.

    2018-02-01

    In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck type models. Using the daily closing prices from developed and emerging stock markets, we conclude that incorporating stochastic volatility into the time-varying parameter estimation significantly improves forecasting performance via Maximum Likelihood Estimation. Furthermore, our estimation algorithm is feasible with large data sets and has good convergence properties.
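One standard route to maximum-likelihood estimation of an Ornstein-Uhlenbeck process is its exact AR(1) discretization, X[t+dt] = X[t]·exp(-θ·dt) + μ·(1-exp(-θ·dt)) + ε: regressing X[t+dt] on X[t] recovers θ, μ and σ. The series below is simulated and stands in for a volatility proxy; all parameter values are illustrative, not the paper's estimates.

```python
import numpy as np

# Simulate an OU path with the exact discretization, then recover the
# parameters from the AR(1) regression (the Gaussian ML estimates).

rng = np.random.default_rng(2)
theta, mu, sigma, dt, n = 2.0, 0.15, 0.3, 1.0 / 252.0, 20000

a = np.exp(-theta * dt)                                 # AR(1) coefficient
sd = sigma * np.sqrt((1.0 - a * a) / (2.0 * theta))     # innovation std dev
x = np.empty(n); x[0] = mu
for t in range(n - 1):
    x[t + 1] = a * x[t] + mu * (1.0 - a) + sd * rng.normal()

# OLS of x[1:] on x[:-1]: slope = exp(-theta*dt), intercept = mu*(1-slope).
slope, intercept = np.polyfit(x[:-1], x[1:], 1)
theta_hat = -np.log(slope) / dt
mu_hat = intercept / (1.0 - slope)
resid = x[1:] - (slope * x[:-1] + intercept)
sigma_hat = resid.std() * np.sqrt(2.0 * theta_hat / (1.0 - slope ** 2))
```

The same regression applies when the observed series is a volatility proxy; the mean-reversion rate θ is the hardest parameter to pin down on short samples.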

  3. Application of the Analog Method to Modelling Heat Waves: A Case Study with Power Transformers

    DTIC Science & Technology

    2017-04-21

    UNCLASSIFIED Massachusetts Institute of Technology Lincoln Laboratory. APPLICATION OF THE ANALOG METHOD TO MODELLING HEAT WAVES: A CASE STUDY WITH... Calibration and validation statistics with the use of five atmospheric variables to construct analogue diagnostics for JJA of transformer T2... electrical grid as a series of nodes (transformers) and edges (transmission lines) so that basic mathematical analysis can be performed. The mathematics

  4. Performance dependence of hybrid x-ray computed tomography/fluorescence molecular tomography on the optical forward problem.

    PubMed

    Hyde, Damon; Schulz, Ralf; Brooks, Dana; Miller, Eric; Ntziachristos, Vasilis

    2009-04-01

    Hybrid imaging systems combining x-ray computed tomography (CT) and fluorescence tomography can improve fluorescence imaging performance by incorporating anatomical x-ray CT information into the optical inversion problem. While the use of image priors has been investigated in the past, little is known about the optimal use of forward photon propagation models in hybrid optical systems. In this paper, we explore the impact on reconstruction accuracy of the use of propagation models of varying complexity, specifically in the context of these hybrid imaging systems where significant structural information is known a priori. Our results demonstrate that the use of generically known parameters provides near optimal performance, even when parameter mismatch remains.

  5. Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Weng, Yingliang

    2016-01-01

    This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum but also by the relative values of factors like trading volume ranking and market capitalization ranking in each period. This article studies a new method, termed the quartile method, for constructing stocks' reference groups. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample forecasting performance of Value-at-Risk (VaR) estimated by different models. The empirical results show that the spatiotemporal model performs surprisingly well in terms of capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the other three models introduced in the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.

  6. Dynamic load-sharing characteristic analysis of face gear power-split gear system based on tooth contact characteristics

    NASA Astrophysics Data System (ADS)

    Dong, Hao; Hu, Yahui

    2018-04-01

    A bend-torsion coupled dynamic load-sharing model of the helicopter face gear split-torque transmission system is established using the lumped-mass method to analyze its dynamic load-sharing characteristics. The mathematical models include nonlinear support stiffness, time-varying meshing stiffness, damping, and gear backlash. The results show that the errors collectively influence the load-sharing characteristics; reducing any single error never fully achieves perfect load sharing. The system's load-sharing performance can be improved through floating shaft support. This method provides a theoretical basis and data support for the dynamic performance optimization design of such systems.

  7. Simulation of Cold Flow in a Truncated Ideal Nozzle with Film Cooling

    NASA Technical Reports Server (NTRS)

    Braman, Kalen; Ruf, Joseph

    2015-01-01

    Flow transients during rocket start-up and shut-down can lead to significant side loads on rocket nozzles. The capability to estimate these side loads computationally can streamline the nozzle design process. Towards this goal, the flow in a truncated ideal contour (TIC) nozzle has been simulated for a range of nozzle pressure ratios (NPRs) aimed to match a series of cold flow experiments performed at the NASA MSFC Nozzle Test Facility. These simulations were performed with varying turbulence model choices and with four different versions of the TIC nozzle model geometry, each of which was created with a different simplification to the test article geometry.

  8. Higher sensitivity and lower specificity in post-fire mortality model validation of 11 western US tree species

    USGS Publications Warehouse

    Kane, Jeffrey M.; van Mantgem, Phillip J.; Lalemand, Laura; Keifer, MaryBeth

    2017-01-01

    Managers require accurate models to predict post-fire tree mortality to plan prescribed fire treatments and examine their effectiveness. Here we assess the performance of a common post-fire tree mortality model with an independent dataset of 11 tree species from 13 National Park Service units in the western USA. Overall model discrimination was generally strong, but performance varied considerably among species and sites. The model tended to have higher sensitivity (proportion of correctly classified dead trees) and lower specificity (proportion of correctly classified live trees) for many species, indicating an overestimation of mortality. Variation in model accuracy (percentage of live and dead trees correctly classified) among species was not related to sample size or percentage observed mortality. However, we observed a positive relationship between specificity and a species-specific bark thickness multiplier, indicating that overestimation was more common in thin-barked species. Accuracy was also quite low for thinner bark classes (<1 cm) for many species, leading to poorer model performance. Our results indicate that a common post-fire mortality model generally performs well across a range of species and sites; however, some thin-barked species and size classes would benefit from further refinement to improve model specificity.
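
    The validation statistics named above can be computed directly from a confusion matrix; a minimal sketch with hypothetical observations (the arrays below are made up for illustration):

```python
import numpy as np

def validation_stats(observed_dead, predicted_dead):
    """Sensitivity, specificity, and accuracy for binary mortality predictions.
    observed_dead / predicted_dead: boolean-like arrays (True = tree died)."""
    obs = np.asarray(observed_dead, bool)
    pred = np.asarray(predicted_dead, bool)
    sensitivity = (pred & obs).sum() / obs.sum()        # dead correctly called dead
    specificity = (~pred & ~obs).sum() / (~obs).sum()   # live correctly called live
    accuracy = (pred == obs).mean()                     # overall fraction correct
    return float(sensitivity), float(specificity), float(accuracy)
```

    A model that overpredicts mortality, as reported here for thin-barked species, shows exactly this signature: sensitivity near 1 but depressed specificity, because live trees are misclassified as dead.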

  9. Increased Learning Time under Stimulus-Funded School Improvement Grants: High Hopes, Varied Implementation

    ERIC Educational Resources Information Center

    McMurrer, Jennifer

    2012-01-01

    Research has long suggested that significantly increasing quality time in school for teaching and learning can have a positive impact on student achievement. Recognizing this connection, federal guidance requires low-performing schools to increase student learning time if they are implementing two popular reform models using school improvement…

  10. Message Integrity Model for Wireless Sensor Networks

    ERIC Educational Resources Information Center

    Qleibo, Haider W.

    2009-01-01

    WSNs are susceptible to a variety of attacks. These attacks vary in the way they are performed and executed; they include, but are not limited to, node capture, physical tampering, denial of service, and message alteration. It is of paramount importance to protect data gathered by WSNs and defend the network against illegal access and malicious…

  11. Robust control of combustion instabilities

    NASA Astrophysics Data System (ADS)

    Hong, Boe-Shong

    Several interacting dynamical subsystems, each with its own time scale and physical significance, are decomposed to build a feedback-controlled, robust combustion-fluid dynamics. On the fast time scale, combustion instability corresponds to the internal feedback of two subsystems: acoustic dynamics and flame dynamics, which depend parametrically on the slow-time-scale mean-flow dynamics controlled for global performance by a mean-flow controller. This dissertation constructs such a control system, through modeling, analysis and synthesis, to deal with model uncertainties, environmental noise and time-varying mean-flow operation. The conservation laws are decomposed into fast-time acoustic dynamics and slow-time mean-flow dynamics, which serve to synthesize an LPV (linear parameter varying) L2-gain robust control law, in which a robust observer is embedded for estimating and controlling the internal state while achieving trade-offs among robustness, performance and operation. The robust controller is formulated as two LPV-type Linear Matrix Inequalities (LMIs), whose numerical solver is developed by the finite-element method. Some important issues related to physical understanding and engineering application are discussed in simulated results of the control system.

  12. A contrast-sensitive channelized-Hotelling observer to predict human performance in a detection task using lumpy backgrounds and Gaussian signals

    NASA Astrophysics Data System (ADS)

    Park, Subok; Badano, Aldo; Gallas, Brandon D.; Myers, Kyle J.

    2007-03-01

    Previously, a non-prewhitening matched filter (NPWMF) incorporating a model for the contrast sensitivity of the human visual system was introduced by Badano et al. for modeling human performance in detection tasks with different viewing angles and white-noise backgrounds. But NPWMF observers do not perform well in detection tasks involving complex backgrounds, since they do not account for background randomness. A channelized-Hotelling observer (CHO) using difference-of-Gaussians (DOG) channels has been shown to track human performance well in detection tasks using lumpy backgrounds. In this work, a CHO with DOG channels, incorporating the model of human contrast sensitivity, was developed similarly. We call this new observer a contrast-sensitive CHO (CS-CHO). The Barten model was the basis of our human contrast sensitivity model; it was multiplied by a scalar that was varied to control the thresholding effect of the contrast sensitivity on luminance-valued images, and hence the performance-prediction ability of the CS-CHO. The performance of the CS-CHO was compared to the average human performance from the psychophysical study by Park et al., where the task was to detect a known Gaussian signal in non-Gaussian distributed lumpy backgrounds. Six different signal-intensity values were used in this study. We chose the free parameter of our model to match the mean human performance in the detection experiment at the strongest signal intensity. We then compared the model to the human observers at the five remaining signal-intensity values to see whether the performance of the CS-CHO matched human performance. Our results indicate that the CS-CHO with the chosen scalar for the contrast sensitivity closely predicts human performance as a function of signal intensity.
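
    A generic DOG-channel Hotelling observer can be sketched as follows. This is not the paper's CS-CHO (it omits the contrast-sensitivity front end), it uses plain white-noise backgrounds rather than lumpy ones, and all channel parameters are illustrative assumptions.

```python
import numpy as np

def dog_channels(n, n_ch=3, sigma0=2.0, ratio=1.67):
    """Spatial difference-of-Gaussians channels on an n x n grid
    (illustrative parameter values, each channel normalized to unit energy)."""
    y, x = np.indices((n, n)) - (n - 1) / 2.0
    r2 = x**2 + y**2
    chans = []
    for j in range(n_ch):
        s = sigma0 * ratio**j
        c = np.exp(-r2 / (2 * s**2)) - np.exp(-r2 / (2 * (1.4 * s)**2))
        chans.append((c / np.linalg.norm(c)).ravel())
    return np.array(chans)            # shape (n_ch, n*n)

def cho_snr(signal, backgrounds, channels):
    """Hotelling detectability in channel space: SNR = sqrt(ds^T K^-1 ds),
    with ds the channelized signal and K the channel-output covariance."""
    v = backgrounds.reshape(len(backgrounds), -1) @ channels.T  # channel outputs
    ds = channels @ signal.ravel()
    k = np.cov(v, rowvar=False)
    return float(np.sqrt(ds @ np.linalg.solve(k, ds)))
```

    Because the template acts linearly on the channelized signal, the SNR scales linearly with signal amplitude, which is why such observers can be swept across the signal-intensity values used in the study.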

  13. Accounting for Time-Varying Confounding in the Relationship Between Obesity and Coronary Heart Disease: Analysis With G-Estimation: The ARIC Study.

    PubMed

    Shakiba, Maryam; Mansournia, Mohammad Ali; Salari, Arsalan; Soori, Hamid; Mansournia, Nasrin; Kaufman, Jay S

    2018-06-01

    In longitudinal studies, standard analysis may yield biased estimates of exposure effect in the presence of time-varying confounders that are also intermediate variables. We aimed to quantify the relationship between obesity and coronary heart disease (CHD) by appropriately adjusting for time-varying confounders. This study was performed in a subset of participants from the Atherosclerosis Risk in Communities (ARIC) Study (1987-2010), a US study designed to investigate risk factors for atherosclerosis. General obesity was defined as body mass index (weight (kg)/height (m)²) ≥30, and abdominal obesity (AOB) was defined according to either waist circumference (≥102 cm in men and ≥88 cm in women) or waist:hip ratio (≥0.9 in men and ≥0.85 in women). The association of obesity with CHD was estimated by G-estimation and compared with results from accelerated failure-time models using 3 specifications. The first model, which adjusted for baseline covariates, excluding metabolic mediators of obesity, showed increased risk of CHD for all obesity measures. Further adjustment for metabolic mediators in the second model and time-varying variables in the third model produced negligible changes in the hazard ratios. The hazard ratios estimated by G-estimation were 1.15 (95% confidence interval (CI): 0.83, 1.47) for general obesity, 1.65 (95% CI: 1.35, 1.92) for AOB based on waist circumference, and 1.38 (95% CI: 1.13, 1.99) for AOB based on waist:hip ratio, suggesting that AOB increased the risk of CHD. The G-estimated hazard ratios for both measures were further from the null than those derived from standard models.

  14. Mendelian randomization analysis of a time-varying exposure for binary disease outcomes using functional data analysis methods.

    PubMed

    Cao, Ying; Rajan, Suja S; Wei, Peng

    2016-12-01

    A Mendelian randomization (MR) analysis is performed to analyze the causal effect of an exposure variable on a disease outcome in observational studies, by using genetic variants that affect the disease outcome only through the exposure variable. This method has recently gained popularity among epidemiologists given the success of genetic association studies. Many exposure variables of interest in epidemiological studies are time varying, for example, body mass index (BMI). Although longitudinal data have been collected in many cohort studies, current MR studies only use one measurement of a time-varying exposure variable, which cannot adequately capture the long-term time-varying information. We propose using the functional principal component analysis method to recover the underlying individual trajectory of the time-varying exposure from the sparsely and irregularly observed longitudinal data, and then conduct MR analysis using the recovered curves. We further propose two MR analysis methods. The first assumes a cumulative effect of the time-varying exposure variable on the disease risk, while the second assumes a time-varying genetic effect and employs functional regression models. We focus on statistical testing for a causal effect. Our simulation studies mimicking the real data show that the proposed functional data analysis based methods incorporating longitudinal data have substantial power gains compared to standard MR analysis using only one measurement. We used the Framingham Heart Study data to demonstrate the promising performance of the new methods as well as inconsistent results produced by the standard MR analysis that relies on a single measurement of the exposure at some arbitrary time point.

  15. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
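
    For reference, the implicit GA model relates cumulative infiltration F to time t by K·t = F − ψΔθ·ln(1 + F/ψΔθ), which the explicit approximations above try to avoid solving iteratively. A minimal fixed-point solver for the implicit form (parameter values in the test are hypothetical) can be sketched as:

```python
import math

def green_ampt_F(t, K, psi_dtheta, tol=1e-10, max_iter=200):
    """Cumulative infiltration F(t) from the implicit Green-Ampt equation
        K*t = F - psi_dtheta*ln(1 + F/psi_dtheta)
    solved by the classic fixed-point iteration, a contraction for F > 0."""
    F = max(K * t, 1e-9)              # starting guess
    for _ in range(max_iter):
        F_new = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F_new

# the infiltration rate then follows as f = K * (1 + psi_dtheta / F)
```

    The iteration converges because the update's derivative, 1/(1 + F/ψΔθ), is strictly below one; explicit approximations trade a little accuracy for skipping this loop entirely.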

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheinker, Alexander

    A recently developed form of extremum seeking for time-varying systems is implemented in hardware for the resonance control of radio-frequency cavities without phase measurements. Normal-conducting RF cavity resonance control is performed via a slug tuner, while superconducting TESLA-type cavity resonance control is performed via piezo actuators. The controller maintains resonance by minimizing reflected power, utilizing model-independent adaptive feedback. Unlike standard phase-measurement-based resonance control, the presented approach is not sensitive to arbitrary phase shifts of the RF signals due to temperature-dependent cable length or phase-measurement hardware changes. The phase independence of this method removes the slowly varying drifts and periodic recalibration required by phase-based methods. A general overview of the adaptive controller is presented along with proof-of-principle experimental results at room temperature. Lastly, this method allows us both to maintain a cavity at a desired resonance frequency and to dynamically modify its resonance frequency to track the unknown time-varying frequency of an RF source, thereby maintaining maximal cavity field strength, based only on power-level measurements.

  17. Minimization of betatron oscillations of electron beam injected into a time-varying lattice via extremum seeking

    DOE PAGES

    Scheinker, Alexander; Huang, Xiaobiao; Wu, Juhao

    2017-02-20

    Here, we report on a beam-based experiment performed at the SPEAR3 storage ring of the Stanford Synchrotron Radiation Lightsource at the SLAC National Accelerator Laboratory, in which a model-independent extremum-seeking optimization algorithm was utilized to minimize betatron oscillations in the presence of a time-varying kicker magnetic field, by automatically tuning the pulse width, voltage, and delay of two other kicker magnets, and the current of two skew quadrupole magnets, simultaneously, in order to optimize injection kick matching. Adaptive tuning was performed on eight parameters simultaneously. The scheme was able to continuously maintain the match of a five-magnet lattice while the field strength of a kicker magnet was continuously varied at a rate much higher (±6% sinusoidal voltage change over 1.5 h) than typically experienced in operation. The ability to quickly tune or compensate for time variation of coupled components, as demonstrated here, is very important for the more general, more difficult problem of global accelerator tuning to quickly switch between various experimental setups.

  18. Numerical Analysis of the Heat Transfer Characteristics within an Evaporating Meniscus

    NASA Astrophysics Data System (ADS)

    Ball, Gregory

    A numerical analysis was performed to investigate the heat transfer characteristics of an evaporating thin-film meniscus. A mathematical model was used in the formulation of a third-order ordinary differential equation. This equation governs the evaporating thin film through use of the continuity, momentum, and energy equations and the Kelvin-Clapeyron model. The governing equation was treated as an initial value problem and solved numerically using a Runge-Kutta technique. The numerical model uses varying thermophysical properties and boundary conditions, such as channel width, applied superheat, accommodation coefficient and working fluid, which can be tailored by the user. This work focused mainly on the effects of altering the accommodation coefficient and applied superheat. A unified solution is also presented which models the meniscus to half the channel width. The model was validated through comparison to literature values. Varying the input values showed that increasing the superheat shortens the film thickness and greatly increases the interfacial curvature overshoot, while decreasing the accommodation coefficient lengthens the thin film and retards the evaporative effects.

  19. Decoding the non-stationary neuron spike trains by dual Monte Carlo point process estimation in motor Brain Machine Interfaces.

    PubMed

    Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang

    2014-01-01

    Decoding algorithms in motor Brain Machine Interfaces translate the neural signals to movement parameters. They usually assume the connection between the neural firings and movements to be stationary, which is not true according to recent studies that observe time-varying neuron tuning properties. This non-stationarity results from neural plasticity, motor learning, and related processes, and it degrades decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on a Monte Carlo point process filtering method that also enables estimation of the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than one with static tuning parameters, which suggests a promising way to design a long-term-performing model for Brain Machine Interface decoders.

  20. Aerodynamic Performance of a 0.27-Scale Model of an AH-64 Helicopter with Baseline and Alternate Rotor Blade Sets

    NASA Technical Reports Server (NTRS)

    Kelley, Henry L.

    1990-01-01

    Performance of a 27 percent scale model rotor designed for the AH-64 helicopter (alternate rotor) was measured in hover and forward flight and compared against an AH-64 baseline rotor model. Thrust, rotor tip Mach number, advance ratio, and ground proximity were varied. In hover, at a nominal thrust coefficient of 0.0064, the power savings was about 6.4 percent for the alternate rotor compared to the baseline. The corresponding thrust increase at this condition was approximately 4.5 percent, which represents an equivalent full-scale increase in lift capability of about 660 lb. Comparable results were noted in forward flight, except for the high-thrust, high-speed cases investigated, where the baseline rotor was slightly superior. The reduced performance at the higher thrusts and speeds was likely due to Reynolds number effects and blade elasticity differences.

  1. Performance simulation of the JPL solar-powered distiller. Part 1: Quasi-steady-state conditions. [for cooling microwave equipment

    NASA Technical Reports Server (NTRS)

    Yung, C. S.; Lansing, F. L.

    1983-01-01

    A 37.85 cu m (10,000 gallon) per year (nominal) passive solar-powered water distillation system was installed and is operational at the Venus Deep Space Station. The system replaced an old, electrically powered water distiller. The distilled water produced, with its high electrical resistivity, is used to cool the sensitive microwave equipment. A detailed thermal model was developed to simulate the performance of the distiller and study its sensitivity under varying environment and load conditions. The quasi-steady-state portion of the model is presented together with the formulas for the heat and mass transfer coefficients used. Initial results indicated that a daily water evaporation efficiency of 30% can be achieved. A comparison between a full-day performance simulation and actual field measurements gave good agreement between theory and experiment, which verified the model.

  2. Predicting ready biodegradability of premanufacture notice chemicals.

    PubMed

    Boethling, Robert S; Lynch, David G; Thom, Gary C

    2003-04-01

    Chemical substances other than pesticides, drugs, and food additives are regulated by the U.S. Environmental Protection Agency (U.S. EPA) under the Toxic Substances Control Act (TSCA), but the United States does not require that new substances be tested automatically for such critical properties as biodegradability. The resulting lack of submitted data has fostered the development of estimation methods, and the BioWIN models for predicting biodegradability from chemical structure have played a prominent role in premanufacture notice (PMN) review. Until now, validation efforts have used only the Japanese Ministry of International Trade and Industry (MITI) test data and have not included all models. To assess BioWIN performance with PMN substances, we assembled a database of PMNs for which ready biodegradation data had been submitted over the period 1995 through 2001. The 305 PMN structures are highly varied and pose major challenges to chemical property estimation. Despite the variability of ready biodegradation tests, the use of at least six different test methods, and widely varying quality of submitted data, accuracy of four of six BioWIN models (MITI linear, MITI nonlinear, survey ultimate, survey primary) was in the 80+% range for predicting ready biodegradability. Greater accuracy (>90%) can be achieved by using model estimates only when the four models agree (true for 3/4 of the PMNs). The BioWIN linear and nonlinear probability models did not perform as well even when classification criteria were optimized. The results suggest that the MITI and survey BioWIN models are suitable for use in screening-level applications.

  3. Utilization-Based Modeling and Optimization for Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Liu, Yanbing; Huang, Jun; Liu, Zhangxiong

    The cognitive radio technique promises to manage and allocate the scarce radio spectrum in highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues, one for the primary (licensed) users and one for the cognitive (unlicensed) users. From the underlying Markov process, the system state equations are derived and an optimization model for the system is proposed. Next, the system performance is evaluated by calculations that confirm the validity of the system model. Furthermore, the effects of different system parameters are discussed based on the experimental results.
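
    A toy version of such a two-queue Markov model can be sketched as follows. The preemptive-priority service rule, buffer caps, and rates are illustrative assumptions, not the paper's exact formulation; the sketch just shows how state equations of this kind are assembled and solved for a stationary distribution.

```python
import numpy as np

def stationary_dist(lam1, mu1, lam2, mu2, N1=3, N2=3):
    """Stationary distribution of a toy cognitive-radio CTMC: primary users
    (arrival lam1, service mu1) hold the channel; secondary users (lam2, mu2)
    are served only when no primary user is present. Buffers capped at N1, N2."""
    idx = {(i, j): k for k, (i, j) in
           enumerate((i, j) for i in range(N1 + 1) for j in range(N2 + 1))}
    n = len(idx)
    Q = np.zeros((n, n))
    for (i, j), k in idx.items():
        if i < N1: Q[k, idx[(i + 1, j)]] += lam1     # primary arrival
        if j < N2: Q[k, idx[(i, j + 1)]] += lam2     # secondary arrival
        if i > 0:  Q[k, idx[(i - 1, j)]] += mu1      # primary service
        elif j > 0: Q[k, idx[(i, j - 1)]] += mu2     # secondary served if idle
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # solve pi Q = 0 subject to sum(pi) = 1
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return idx, pi
```

    Performance measures such as mean queue lengths or blocking probabilities then follow by summing the stationary probabilities over the relevant states.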

  4. Connected word recognition using a cascaded neuro-computational model

    NASA Astrophysics Data System (ADS)

    Hoya, Tetsuya; van Leeuwen, Cees

    2016-10-01

    We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.

  5. Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.; hide

    2011-01-01

    Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models ranks near or at the top systematically for all metrics used. Consequently, one cannot pick an absolute winner: the choice of the best model depends on the characteristics of the signal one is interested in. Model performances also vary from event to event. This is particularly clear for root-mean-square difference and utility metric-based analyses. Further, the analyses indicate that for some of the models, increasing the global magnetohydrodynamic model spatial resolution and including ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
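
    The metric dependence of model rankings is easy to reproduce with hypothetical numbers: a model with the right temporal structure but a constant bias wins on correlation, while a noisier unbiased model wins on root-mean-square error. The arrays below are made up for illustration.

```python
import numpy as np

obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # observations (hypothetical)
model_a = obs + 2.0                              # perfect shape, constant bias
model_b = np.array([0.9, 0.2, 2.8, 2.5, 3.6])   # unbiased but noisy

def rmse(pred, o):
    """Root-mean-square difference between prediction and observation."""
    return float(np.sqrt(np.mean((pred - o) ** 2)))

def corr(pred, o):
    """Pearson correlation, insensitive to constant bias and scaling."""
    return float(np.corrcoef(pred, o)[0, 1])
```

    Here model_a ranks first by correlation and last by RMSE, while model_b does the opposite, mirroring the paper's finding that no single model ranks at the top for all metrics.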

  6. Description and availability of the SMARTS spectral model for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Myers, Daryl R.; Gueymard, Christian A.

    2004-11-01

    The limited spectral response range of photovoltaic (PV) devices requires that device performance be characterized with respect to widely varying terrestrial solar spectra. The FORTRAN code "Simple Model for Atmospheric Transmission of Sunshine" (SMARTS) was developed for various clear-sky solar renewable energy applications. The model is partly based on parameterizations of transmittance functions in the MODTRAN/LOWTRAN band model family of radiative transfer codes. SMARTS computes spectra with a resolution of 0.5 nanometers (nm) below 400 nm, 1.0 nm from 400 nm to 1700 nm, and 5 nm from 1700 nm to 4000 nm. Fewer than 20 input parameters are required to compute spectral irradiance distributions, including spectral direct beam, total, and diffuse hemispherical radiation, and up to 30 other spectral parameters. A spreadsheet-based graphical user interface can be used to simplify the construction of input files for the model. The model is the basis for new terrestrial reference spectra developed by the American Society for Testing and Materials (ASTM) for photovoltaic and materials degradation applications. We describe the model's accuracy, functionality, and the availability of source and executable code. Applications to PV rating and efficiency and the combined effects of spectral selectivity and varying atmospheric conditions are briefly discussed.

  7. Changes in radiation dose with variations in human anatomy: larger and smaller normal-stature adults.

    PubMed

    Marine, Patrick M; Stabin, Michael G; Fernald, Michael J; Brill, Aaron B

    2010-05-01

    A systematic evaluation has been performed to study how specific absorbed fractions (SAFs) vary with changes in adult body size, for persons of different size but normal body stature. A review of the literature was performed to evaluate how individual organ sizes vary with changes in total body weight of normal-stature individuals. On the basis of this literature review, changes were made to our easily deformable reference adult male and female total-body models. Monte Carlo simulations of radiation transport were performed; SAFs for photons were generated for 10th, 25th, 75th, and 90th percentile adults; and comparisons were made to the reference (50th) percentile SAF values. Differences in SAFs for organs irradiating themselves were between 0.5% and 1.0%/kg difference in body weight, from 15% to 30% overall, for organs within the trunk. Differences in SAFs for organs outside the trunk were not greater than the uncertainties in the data and will not be important enough to change calculated doses. For organs irradiating other organs within the trunk, differences were significant, between 0.3% and 1.1%/kg, or about 8%-33% overall. The differences are interesting and can be used to estimate how different patients' dosimetry might vary from values reported in standard dose tables.

  8. Turbulent Flow and Large Surface Wave Events in the Marine Boundary Layers

    DTIC Science & Technology

    2013-08-22

    Surface roughness, vegetative canopies, wind waves, and local orography all influence wind turbine performance to varying degrees. (Only fragmentary indexing excerpts of this record are available; cited works include Wyngaard, J. C., 2004: Toward numerical modeling in the Terra Incognita, J. Atmos. Sci., 61, and publications of the Netherlands Academy of Arts and Sciences.)

  9. Simulating the Gradually Deteriorating Performance of an RTG

    NASA Technical Reports Server (NTRS)

    Wood, Eric G.; Ewell, Richard C.; Patel, Jagdish; Hanks, David R.; Lozano, Juan A.; Snyder, G. Jeffrey; Noon, Larry

    2008-01-01

    Degra (now in version 3) is a computer program that simulates the performance of a radioisotope thermoelectric generator (RTG) over its lifetime. Degra is provided with a graphical user interface that is used to edit input parameters that describe the initial state of the RTG and the time-varying loads and environment to which it will be exposed. Performance is computed by modeling the flows of heat from the radioactive source and through the thermocouples, also allowing for losses, to determine the temperature drop across the thermocouples. This temperature drop is used to determine the open-circuit voltage, electrical resistance, and thermal conductance of the thermocouples. Output power can then be computed by relating the open-circuit voltage and the electrical resistance of the thermocouples to a specified time-varying load voltage. Degra accounts for the gradual deterioration of performance attributable primarily to decay of the radioactive source and secondarily to gradual deterioration of the thermoelectric material. To provide guidance to an RTG designer, given a minimum of input, Degra computes the dimensions, masses, and thermal conductances of important internal structures as well as the overall external dimensions and total mass.
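
    The computation chain described above (heat flow → temperature drop → open-circuit voltage → load power) can be sketched in a few lines. All parameter names and values below are illustrative assumptions, not taken from the actual Degra code:

```python
def rtg_power(q_source_w, k_thermal_w_per_k, seebeck_v_per_k,
              r_internal_ohm, v_load):
    """Sketch of a Degra-style steady-state power calculation.
    Heat flow through the thermocouples sets the temperature drop;
    the drop sets the open-circuit voltage via the Seebeck coefficient;
    the specified load voltage then fixes the current and output power."""
    dt = q_source_w / k_thermal_w_per_k      # temperature drop across couples (K)
    v_oc = seebeck_v_per_k * dt              # open-circuit voltage (V)
    i = (v_oc - v_load) / r_internal_ohm     # current delivered to the load (A)
    return v_load * i                        # electrical output power (W)

# Toy numbers: 2000 W heat through 20 W/K conductance -> 100 K drop
p_out = rtg_power(2000.0, 20.0, 0.05, 2.0, 3.0)
```

    In the real program these quantities vary with time as the source decays and the thermoelectric material degrades; the sketch shows a single instant.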

  10. Dynamic Black-Level Correction and Artifact Flagging in the Kepler Data Pipeline

    NASA Technical Reports Server (NTRS)

    Clarke, B. D.; Kolodziejczak, J. J.; Caldwell, D. A.

    2013-01-01

    Instrument-induced artifacts in the raw Kepler pixel data include time-varying crosstalk from the fine guidance sensor (FGS) clock signals and manifestations of drifting moiré pattern as locally correlated nonstationary noise and rolling bands in the images, which find their way into the calibrated pixel time series and ultimately into the calibrated target flux time series. Using a combination of raw science pixel data, full frame images, reverse-clocked pixel data, and ancillary temperature data, the Kepler pipeline models and removes the FGS crosstalk artifacts by dynamically adjusting the black-level correction. By examining the residuals of the model fits, the pipeline detects and flags spatial regions and time intervals of strong time-varying black level ("rolling bands") on a per-row, per-cadence basis. These flags are made available to downstream users of the data, since the uncorrected rolling-band artifacts could complicate processing or lead to misinterpretation of instrument behavior as stellar in origin. This model fitting and artifact flagging is performed within a new stand-alone pipeline module called Dynablack. We discuss the implementation of Dynablack in the Kepler data pipeline and present results regarding the improvement in calibrated pixels and the expected improvement in cotrending performance as a result of including FGS corrections in the calibration. We also discuss the effectiveness of the rolling-band flagging for downstream users and illustrate with some affected light curves.

  11. Geospace environment modeling 2008--2009 challenge: Dst index

    USGS Publications Warehouse

    Rastätter, L.; Kuznetsova, M.M.; Glocer, A.; Welling, D.; Meng, X.; Raeder, J.; Wiltberger, M.; Jordanova, V.K.; Yu, Y.; Zaharia, S.; Weigel, R.S.; Sazykin, S.; Boynton, R.; Wei, H.; Eccles, V.; Horton, W.; Mays, M.L.; Gannon, J.

    2013-01-01

    This paper reports the metrics-based results of the Dst index part of the 2008–2009 GEM Metrics Challenge. The 2008–2009 GEM Metrics Challenge asked modelers to submit results for four geomagnetic storm events and five different types of observations that can be modeled by statistical, climatological or physics-based models of the magnetosphere-ionosphere system. We present the results of 30 model settings that were run at the Community Coordinated Modeling Center and at the institutions of various modelers for these events. To measure the performance of each of the models against the observations, we use comparisons of 1 hour averaged model data with the Dst index issued by the World Data Center for Geomagnetism, Kyoto, Japan, and direct comparison of 1 minute model data with the 1 minute Dst index calculated by the United States Geological Survey. The latter index can be used to calculate spectral variability of model outputs in comparison to the index. We find that model rankings vary widely by skill score used. None of the models consistently perform best for all events. We find that empirical models perform well in general. Magnetohydrodynamics-based models of the global magnetosphere with inner magnetosphere physics (ring current model) included and stand-alone ring current models with properly defined boundary conditions perform well and are able to match or surpass results from empirical models. Unlike in similar studies, the statistical models used in this study found their challenge in the weakest events rather than the strongest events.
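
    One common skill score for this kind of model-observation comparison is the prediction efficiency, 1 − MSE/Var(obs). The paper evaluates several skill scores; this particular choice and the toy Dst values are illustrative, not taken from the study:

```python
def prediction_efficiency(obs, model):
    """Prediction efficiency skill score: 1 - MSE/Var(obs).
    1.0 means a perfect match; 0.0 means no better than
    predicting the observed mean; negative means worse."""
    n = len(obs)
    mean_obs = sum(obs) / n
    mse = sum((o - m) ** 2 for o, m in zip(obs, model)) / n
    var = sum((o - mean_obs) ** 2 for o in obs) / n
    return 1.0 - mse / var

# Toy hourly Dst values (nT) for a small storm
dst_obs = [-10.0, -50.0, -120.0, -60.0, -20.0]
```

    Because the score normalizes by the observed variance, rankings computed with it can differ from rankings under, e.g., correlation-based scores, which is consistent with the paper's finding that model rankings vary widely by skill score.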

  12. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increasing time required for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computing clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Compute Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  13. A unified framework for group independent component analysis for multi-subject fMRI data

    PubMed Central

    Guo, Ying; Pagnoni, Giuseppe

    2008-01-01

    Independent component analysis (ICA) is becoming increasingly popular for analyzing functional magnetic resonance imaging (fMRI) data. While ICA has been successfully applied to single-subject analysis, the extension of ICA to group inferences is not straightforward and remains an active topic of research. Current group ICA models, such as the GIFT (Calhoun et al., 2001) and tensor PICA (Beckmann and Smith, 2005), make different assumptions about the underlying structure of the group spatio-temporal processes and are thus estimated using algorithms tailored for the assumed structure, potentially leading to diverging results. To our knowledge, there are currently no methods for assessing the validity of different model structures in real fMRI data and selecting the most appropriate one among various choices. In this paper, we propose a unified framework for estimating and comparing group ICA models with varying spatio-temporal structures. We consider a class of group ICA models that can accommodate different group structures and include existing models, such as the GIFT and tensor PICA, as special cases. We propose a maximum likelihood (ML) approach with a modified Expectation-Maximization (EM) algorithm for the estimation of the proposed class of models. Likelihood ratio tests (LRT) are presented to compare between different group ICA models. The LRT can be used to perform model comparison and selection, to assess the goodness-of-fit of a model in a particular data set, and to test group differences in the fMRI signal time courses between subject subgroups. Simulation studies are conducted to evaluate the performance of the proposed method under varying structures of group spatio-temporal processes. We illustrate our group ICA method using data from an fMRI study that investigates changes in neural processing associated with the regular practice of Zen meditation. PMID:18650105

  14. Optimal control model predictions of system performance and attention allocation and their experimental validation in a display design study

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Govindaraj, T.

    1980-01-01

    The influence of different types of predictor displays in a longitudinal vertical takeoff and landing (VTOL) hover task is analyzed in a theoretical study. Several cases with differing amounts of predictive and rate information are compared. The optimal control model of the human operator is used to estimate human and system performance in terms of root-mean-square (rms) values and to compute optimized attention allocation. The only part of the model which is varied to predict these data is the observation matrix. Typical cases are selected for a subsequent experimental validation. The rms values as well as eye-movement data are recorded. The results agree favorably with those of the theoretical study in terms of relative differences. Better matching is achieved by revised model input data.

  15. Aerodynamic Characteristics of a 45 Degree Swept-wing Fighter-Airplane Model and Aerodynamic Loads on Adjacent Stores and Missiles at Mach Numbers of 1.57, 1.87, 2.16, and 2.53

    NASA Technical Reports Server (NTRS)

    Oehman, Waldo I; Turner, Kenneth L

    1958-01-01

    An investigation was performed in the Langley Unitary Plan wind tunnel to determine the aerodynamic characteristics of a model of a 45° swept-wing fighter airplane and to determine the loads on attached stores and detached missiles in the presence of the model. Also included was a determination of aileron-spoiler effectiveness, aileron hinge moments, and the effects of wing modifications on model aerodynamic characteristics. Tests were performed at Mach numbers of 1.57, 1.87, 2.16, and 2.53. The Reynolds numbers for the tests, based on the mean aerodynamic chord of the wing, varied from about 0.9 × 10^6 to 5 × 10^6. The results are presented with minimum analysis.

  16. Seismic performance evaluation of RC frame-shear wall structures using nonlinear analysis methods

    NASA Astrophysics Data System (ADS)

    Shi, Jialiang; Wang, Qiuwei

    To further understand the seismic performance of reinforced concrete (RC) frame-shear wall structures, a 1/8-scale model was derived from a main factory structure with seven stories and seven bays. The four-story, two-bay model was pseudo-dynamically tested under six earthquake actions whose peak ground accelerations (PGA) varied from 50 gal to 400 gal. The damage process and failure patterns were investigated. Furthermore, nonlinear dynamic analysis (NDA) and the capacity spectrum method (CSM) were adopted to evaluate the seismic behavior of the model structure. The top displacement curve, story drift curve, and distribution of hinges were obtained and discussed. It is shown that the model structure exhibited a beam-hinge failure mechanism. The two methods can be used to evaluate the seismic behavior of RC frame-shear wall structures well. Moreover, the CSM can to some extent replace the NDA for the seismic performance evaluation of RC structures.

  17. Experimental and numerical study of control of flow separation of a symmetric airfoil with trapped vortex cavity

    NASA Astrophysics Data System (ADS)

    Shahid, Abdullah Bin; Mashud, Mohammad

    2017-06-01

    This paper summarizes the experimental campaign and numerical analysis performed to assess the potential benefit of employing a trapped vortex cell system on a high-thickness symmetric airfoil without steady suction or injection of mass flow. In this work, the behavior of a two-dimensional model equipped with a spanwise circular cavity was investigated. The pressure distribution on the model surface and inside the cavity, as well as the complete flow field around the model, were measured. Experimental tests were performed varying the wind tunnel speed and the angle of attack. For the numerical analysis, the two-dimensional airfoil model and its mesh were created in ANSYS Meshing and solved iteratively in Fluent. The test campaign, airfoil design, experimental set-up, numerical analysis, and data post-processing are reported, and the results are compared and discussed.

  18. Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale

    DOE PAGES

    Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.; ...

    2017-01-26

    Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.

  20. Optimizing landslide susceptibility zonation: Effects of DEM spatial resolution and slope unit delineation on logistic regression models

    NASA Astrophysics Data System (ADS)

    Schlögel, R.; Marchesini, I.; Alvioli, M.; Reichenbach, P.; Rossi, M.; Malet, J.-P.

    2018-01-01

    We perform landslide susceptibility zonation with slope units using three digital elevation models (DEMs) of varying spatial resolution of the Ubaye Valley (South French Alps). In so doing, we applied a recently developed algorithm automating slope unit delineation, given a number of parameters, in order to optimize simultaneously the partitioning of the terrain and the performance of a logistic regression susceptibility model. The method allowed us to obtain optimal slope units for each available DEM spatial resolution. For each resolution, we studied the susceptibility model performance by analyzing in detail the relevance of the conditioning variables. The analysis is based on landslide morphology data, considering either the whole landslide or only the source area outline as inputs. The procedure allowed us to select the most useful information, in terms of DEM spatial resolution, thematic variables and landslide inventory, in order to obtain the most reliable slope unit-based landslide susceptibility assessment.

  1. A comparison of WEC control strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, David G.; Bacelli, Giorgio; Coe, Ryan Geoffrey

    2016-04-01

    The operation of Wave Energy Converter (WEC) devices can pose many challenging problems to the Water Power Community. A key research question is how to significantly improve the performance of these WEC devices through improving the control system design. This report summarizes an effort to analyze and improve the performance of WECs through the design and implementation of control systems. Controllers were selected to span the WEC control design space with the aim of building a more comprehensive understanding of different controller capabilities and requirements. To design and evaluate these control strategies, a model-scale test-bed WEC was designed for both numerical and experimental testing (see Section 1.1). Seven control strategies have been developed and applied on a numerical model of the selected WEC. This model is capable of performing at a range of levels, spanning from a fully-linear realization to varying levels of nonlinearity. The details of this model and its ongoing development are described in Section 1.2.

  2. Metrological analysis of a virtual flowmeter-based transducer for cryogenic helium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arpaia, P., E-mail: pasquale.arpaia@unina.it; Technology Department, European Organization for Nuclear Research; Girone, M., E-mail: mario.girone@cern.ch

    2015-12-15

    The metrological performance of a virtual flowmeter-based transducer for monitoring helium under cryogenic conditions is assessed. To this end, an uncertainty model of the transducer, based mainly on a valve model exploiting a finite-element approach and a virtual flowmeter model based on the Sereg-Schlumberger method, is presented. The models are validated experimentally on a case study for helium monitoring in cryogenic systems at the European Organization for Nuclear Research (CERN). The impact of uncertainty sources on the transducer's metrological performance is assessed by a sensitivity analysis based on statistical experiment design and analysis of variance. In this way, the uncertainty sources most influencing the metrological performance of the transducer are singled out over the input range as a whole, at varying operating and setting conditions. This analysis turns out to be important for CERN cryogenics operation because the metrological design of the transducer is validated, and its components and working conditions with critical specifications for future improvements are identified.

  3. Queuing Models of Tertiary Storage

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore

    1996-01-01

    Large-scale scientific projects generate and use large amounts of data. For example, the NASA Earth Observation System Data and Information System (EOSDIS) project is expected to archive one petabyte per year of raw satellite data. This data is made automatically available for processing into higher-level data products and for dissemination to the scientific community. Such large volumes of data can only be stored in robotic storage libraries (RSLs) for near-line access. A characteristic of RSLs is the use of a robot arm that transfers media between a storage rack and the read/write drives, thus multiplying the capacity of the system. The performance of the RSLs can be a critical limiting factor for the performance of the archive system. However, the many interacting components of an RSL make a performance analysis difficult. In addition, different RSL components can have widely varying performance characteristics. This paper describes our work to develop performance models of an RSL in isolation. Next we show how the RSL model can be incorporated into a queuing network model. We use the models to make some example performance studies of archive systems. The models described in this paper, developed for the NASA EOSDIS project, are implemented in C with a well-defined interface. The source code, accompanying documentation, and sample Java applets are available at: http://www.cis.ufl.edu/ted/

  4. Using color histogram normalization for recovering chromatic illumination-changed images.

    PubMed

    Pei, S C; Tseng, C L; Wu, C C

    2001-11-01

    We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.
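
    The translation-and-scaling part of the simplified affine model can be sketched per color channel (rotation is omitted for brevity, and the function name is hypothetical): each channel of the test image is shifted and rescaled so its statistics match the reference image's.

```python
def match_channel_stats(values, ref_mean, ref_std):
    """Map one color channel's mean/std onto a reference channel's:
    the translation + scaling terms of a simplified affine model
    (the skew-rotation term is left out of this sketch)."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    # Center, rescale to the reference spread, then shift to the reference mean
    return [(v - mean) / std * ref_std + ref_mean for v in values]
```

    Applying this to each of the R, G, and B channels aligns the two histograms up to the rotation term, which the paper argues can be dropped without hurting (and in noisy cases improving) recovery.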

  5. Soil water content evaluation considering time-invariant spatial pattern and space-variant temporal change

    NASA Astrophysics Data System (ADS)

    Hu, W.; Si, B. C.

    2013-10-01

    Soil water content (SWC) varies in space and time. The objective of this study was to evaluate soil water content distribution using a statistical model. The model divides spatial SWC series into time-invariant spatial patterns, space-invariant temporal changes, and space- and time-dependent redistribution terms. The redistribution term is responsible for the temporal changes in spatial patterns of SWC. An empirical orthogonal function was used to separate the total variations of redistribution terms into the sum of the product of spatial structures (EOFs) and temporally-varying coefficients (ECs). Model performance was evaluated using SWC data of near-surface (0-0.2 m) and root-zone (0-1.0 m) from a Canadian Prairie landscape. Three significant EOFs were identified for the redistribution term for both soil layers. EOF1 dominated the variations of redistribution terms and it resulted in more changes (recharge or discharge) in SWC at wetter locations. Depth to CaCO3 layer and organic carbon were the two most important controlling factors of EOF1, and together, they explained over 80% of the variations in EOF1. Weak correlation existed between either EOF2 or EOF3 and the observed factors. A reasonable prediction of SWC distribution was obtained with this model using cross validation. The model performed better in the root zone than in the near surface, and it outperformed the conventional EOF method when soil moisture deviated from average conditions.
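
    The additive split described above can be sketched with simple means as estimators (the paper's actual fitting procedure, including the EOF step applied to the residual, may differ): each observation is decomposed into a per-site time-invariant pattern, a per-time space-invariant change, and a redistribution residual.

```python
def decompose_swc(swc):
    """Split a sites x times matrix (list of rows) into:
    spatial[s]  - time-invariant spatial pattern (temporal mean per site),
    temporal[t] - space-invariant temporal change (spatial-mean anomaly),
    resid[s][t] - space- and time-dependent redistribution residual,
    so that swc[s][t] == spatial[s] + temporal[t] + resid[s][t]."""
    n_sites = len(swc)
    n_times = len(swc[0])
    grand = sum(sum(row) for row in swc) / (n_sites * n_times)
    spatial = [sum(row) / n_times for row in swc]
    temporal = [sum(swc[s][t] for s in range(n_sites)) / n_sites - grand
                for t in range(n_times)]
    resid = [[swc[s][t] - spatial[s] - temporal[t] for t in range(n_times)]
             for s in range(n_sites)]
    return spatial, temporal, resid
```

    The residual matrix is what the EOF analysis then factors into spatial structures (EOFs) and time-varying coefficients (ECs).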

  6. A Novel Wind Speed Forecasting Model for Wind Farms of Northwest China

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Zhou; Wang, Yun

    2017-01-01

    Wind resources are becoming increasingly significant due to their clean and renewable characteristics, and the integration of wind power into existing electricity systems is imminent. To maintain a stable power supply system that takes into account the stochastic nature of wind speed, accurate wind speed forecasting is pivotal. However, no single model can be applied to all cases. Recent studies show that wind speed forecasting errors are approximately 25% to 40% in Chinese wind farms. Presently, hybrid wind speed forecasting models are widely used and have been verified to perform better than conventional single forecasting models, not only in short-term wind speed forecasting but also in long-term forecasting. In this paper, a hybrid forecasting model is developed: the Similar Coefficient Sum (SCS) and Hermite interpolation are exploited to process the original wind speed data, and an SVM model whose parameters are tuned by an artificial intelligence model is built to make forecasts. The results of case studies show that the MAPE value of the hybrid model varies from 22.96% to 28.87%, and the MAE value varies from 0.47 m/s to 1.30 m/s. The Sign test, Wilcoxon's signed-rank test, and the Morgan-Granger-Newbold test indicate that the proposed model differs significantly from the compared models.
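
    The MAPE and MAE figures quoted above follow the standard definitions, sketched here for reference:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent.
    Assumes no actual value is zero."""
    n = len(actual)
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / n

def mae(actual, forecast):
    """Mean absolute error, in the units of the series (here m/s)."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
```

    MAPE is scale-free, which is why it is quoted as a percentage range, while MAE keeps the physical units of the wind speed series.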

  7. An in vitro experimental study of flow past aortic valve under varied pulsatile conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihang; Zhang, Yan

    2017-11-01

    Flow past aortic valve represents a complex fluid-structure interaction phenomenon that involves pulsatile, vortical, and turbulent conditions. The flow characteristics immediately downstream of the valve, such as the variation of pulsatile flow velocity, formation of vortices, distribution of shear stresses, are of particular interest to further elucidate the role of hemodynamics in various aortic diseases. However, the fluid dynamics of a realistic aortic valve is not fully understood. Particularly, it is unclear how the flow fields downstream of the aortic valve would change under varied pulsatile inlet boundary conditions. In this study, an in vitro experiment has been conducted to investigate the flow fields downstream of a silicone aortic valve model within a cardiovascular flow simulator. Phased-locked Particle Image Velocimetry measurements were performed to map the velocity fields and Reynolds normal and shear stresses at different phases in a cardiac cycle. Temporal variations of pressure across the valve model were measured using high frequency transducers. Results have been compared for different pulsatile inlet conditions, including varied frequencies (heart rates), magnitudes (stroke volumes), and cardiac contractile functions (shapes of waveforms).

  8. Shuttle operations simulation model programmers'/users' manual

    NASA Technical Reports Server (NTRS)

    Porter, D. G.

    1972-01-01

    The prospective user of the shuttle operations simulation (SOS) model is given sufficient information to enable him to perform simulation studies of the space shuttle launch-to-launch operations cycle. The procedures used for modifying the SOS model to meet user requirements are described. The various control card sequences required to execute the SOS model are given. The report is written for users with varying computer simulation experience. A description of the components of the SOS model is included that presents both an explanation of the logic involved in the simulation of the shuttle operations cycle and a description of the routines used to support the actual simulation.

  9. Final Project Report CFA-14-6357: A New Paradigm for Understanding Multiphase Ceramic Waste Form Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brinkman, Kyle; Bordia, Rajendra; Reifsnider, Kenneth

    This project fabricated model multiphase ceramic waste forms with processing-controlled microstructures followed by advanced characterization with synchrotron and electron microscopy-based 3D tomography to provide elemental and chemical state-specific information resulting in compositional phase maps of ceramic composites. Details of 3D microstructural features were incorporated into computer-based simulations using durability data for individual constituent phases as inputs in order to predict the performance of multiphase waste forms with varying microstructure and phase connectivity.

  10. Added value in health care with six sigma.

    PubMed

    Lenaz, Maria P

    2004-06-01

    Six sigma is the structured application of the tools and techniques of quality management applied on a project basis that can enable organizations to achieve superior performance and strategic business results. The Greek character sigma has been used as a statistical term that measures how much a process varies from perfection, based on the number of defects per million units. Health care organizations using this model proceed from the lower levels of quality performance to the highest level, in which the process is nearly error free.

  11. MRAC Control with Prior Model Knowledge for Asymmetric Damaged Aircraft

    PubMed Central

    Zhang, Jing

    2015-01-01

    This paper develops a novel state-tracking multivariable model reference adaptive control (MRAC) technique utilizing prior knowledge of plant models to recover control performance of an asymmetric structural damaged aircraft. A modification of linear model representation is given. With prior knowledge on structural damage, a polytope linear parameter varying (LPV) model is derived to cover all concerned damage conditions. An MRAC method is developed for the polytope model, of which the stability and asymptotic error convergence are theoretically proved. The proposed technique reduces the number of parameters to be adapted and thus decreases computational cost and requires less input information. The method is validated by simulations on NASA generic transport model (GTM) with damage. PMID:26180839

  12. A sediment graph model based on SCS-CN method

    NASA Astrophysics Data System (ADS)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz., the Nash model-based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
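
    The SCS-CN component of the coupled model computes direct runoff from rainfall via the standard curve-number relation (shown alone here; the IUSG and power-law pieces are omitted, and the default initial-abstraction ratio of 0.2 is the conventional choice, not necessarily the paper's calibrated value):

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Standard SCS-CN direct runoff Q = (P - Ia)^2 / (P - Ia + S),
    with S the maximum potential retention (mm) from the curve number
    and Ia = lam * S the initial abstraction."""
    s = 25400.0 / cn - 254.0   # retention in mm for CN on the 0-100 scale
    ia = lam * s
    if p_mm <= ia:
        return 0.0             # all rainfall absorbed before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

    A sediment graph model of the kind proposed would route this runoff depth through the IUSG and relate sediment yield to it via the power law with exponent β.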

  13. Modeling variability in porescale multiphase flow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, Bowen; Bao, Jie; Oostrom, Mart

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rate. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  14. Modeling road-cycling performance.

    PubMed

    Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S

    1995-04-01

    This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effect of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P < or = 0.0001), with a mean difference of 0.74 min (1.73% of mean performance time) and a mean absolute difference of 1.65 min (3.87%). Multiple simulations were performed where model inputs were randomly varied using a normal distribution about the measured values with a SD equivalent to the estimated day-to-day variability or technical error of measurement in each of the inputs. This analysis yielded 95% confidence limits for the predicted times. The model suggests that the main physiological factors contributing to road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
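The uncertainty analysis described above, re-running the model with inputs perturbed by their day-to-day variability to obtain 95% confidence limits, can be sketched as a simple Monte Carlo loop. The model function and the input means/SDs below are hypothetical stand-ins, not the paper's actual equations or measurements:

```python
import random

def predicted_time_min(power_w: float, cda_m2: float,
                       rho: float = 1.2, distance_m: float = 26_000) -> float:
    # Hypothetical stand-in for the full road-cycling model: flat-road
    # speed from the aerodynamic power balance P = 0.5 * rho * CdA * v**3.
    v = (power_w / (0.5 * rho * cda_m2)) ** (1.0 / 3.0)
    return distance_m / v / 60.0

def confidence_limits(n_sims: int = 5000, seed: int = 1):
    """Draw each input from a normal distribution centred on its
    measured value with SD equal to its day-to-day variability,
    then take the 2.5th and 97.5th percentiles of predicted times."""
    rng = random.Random(seed)
    times = sorted(
        predicted_time_min(
            rng.gauss(250.0, 10.0),   # power, W (illustrative)
            rng.gauss(0.35, 0.01),    # projected frontal drag area, m^2
        )
        for _ in range(n_sims)
    )
    return times[int(0.025 * n_sims)], times[int(0.975 * n_sims)]
```

The resulting interval brackets the nominal prediction and widens with the assumed measurement error of each input.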

  15. Validation of a national hydrological model

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Booker, D. J.; Cattoën, C.

    2016-10-01

    Nationwide predictions of flow time-series are valuable for development of policies relating to environmental flows, calculating reliability of supply to water users, or assessing risk of floods or droughts. This breadth of model utility is possible because various hydrological signatures can be derived from simulated flow time-series. However, producing national hydrological simulations can be challenging due to strong environmental diversity across catchments and a lack of data available to aid model parameterisation. A comprehensive and consistent suite of test procedures to quantify spatial and temporal patterns in performance across various parts of the hydrograph is described and applied to quantify the performance of an uncalibrated national rainfall-runoff model of New Zealand. Flow time-series observed at 485 gauging stations were used to calculate Nash-Sutcliffe efficiency and percent bias when simulating between-site differences in daily series, between-year differences in annual series, and between-site differences in hydrological signatures. The procedures were used to assess the benefit of applying a correction to the modelled flow duration curve based on an independent statistical analysis. They were used to aid understanding of climatological, hydrological and model-based causes of differences in predictive performance by assessing multiple hypotheses that describe where and when the model was expected to perform best. As the procedures produce quantitative measures of performance, they provide an objective basis for model assessment that could be applied when comparing observed daily flow series with competing simulated flow series from any region-wide or nationwide hydrological model. Model performance varied in space and time with better scores in larger and medium-wet catchments, and in catchments with smaller seasonal variations. Surprisingly, model performance was not sensitive to aquifer fraction or rain gauge density.
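The two performance scores used in this assessment have standard definitions. A minimal sketch of both (sign conventions for percent bias vary between authors; here positive means the simulation over-predicts total flow):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the
    simulation is no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def percent_bias(obs, sim):
    """Percent bias of simulated vs. observed totals; positive values
    here indicate over-prediction (conventions differ by author)."""
    return 100.0 * sum(s - o for o, s in zip(obs, sim)) / sum(obs)
```

Both scores can be computed per gauging station and per year to build the kind of spatial and temporal performance maps described above.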

  16. Retrospective Attention Interacts with Stimulus Strength to Shape Working Memory Performance.

    PubMed

    Wildegger, Theresa; Humphreys, Glyn; Nobre, Anna C

    2016-01-01

    Orienting attention retrospectively to selected contents in working memory (WM) influences performance. A separate line of research has shown that stimulus strength shapes perceptual representations. There is little research on how stimulus strength during encoding shapes WM performance, and on how the effects of retrospective orienting might vary with changes in stimulus strength. We explore these questions in three experiments using a continuous-recall WM task. In Experiment 1 we show that the benefit of cueing spatial attention retrospectively during WM maintenance (retrocueing) varies according to stimulus contrast during encoding. Retrocueing effects emerge for supraliminal but not sub-threshold stimuli. However, once stimuli are supraliminal, performance is no longer influenced by stimulus contrast. In Experiments 2 and 3 we used a mixture-model approach to examine how different sources of error in WM are affected by contrast and retrocueing. For high-contrast stimuli (Experiment 2), retrocues increased the precision of successfully remembered items. For low-contrast stimuli (Experiment 3), retrocues decreased the probability of mistaking a target for distracters. These results suggest that the processes by which retrospective attentional orienting shapes WM performance depend on the quality of WM representations, which in turn depends on stimulus strength during encoding.

  17. TRISO-fuel element thermo-mechanical performance modeling for the hybrid LIFE engine with Pu fuel blanket

    NASA Astrophysics Data System (ADS)

    DeMange, P.; Marian, J.; Caro, M.; Caro, A.

    2010-10-01

    A TRISO-coated fuel thermo-mechanical performance study is performed for the fusion-fission hybrid Laser Inertial Fusion Engine (LIFE) to test the viability of TRISO particles to achieve ultra-high burn-up of Pu or transuranic spent nuclear fuel blankets. Our methodology includes full elastic anisotropy, time- and temperature-varying material properties, and multilayer capabilities. In order to achieve fast fluences up to 30 × 10²⁵ n m⁻² (E > 0.18 MeV), judicious extrapolations across several orders of magnitude of existing material databases have been carried out. The results of our study indicate that failure of the pyrolytic carbon (PyC) layers occurs within the first 2 years of operation. The particles then behave as single-SiC-layer particles, and the SiC layer maintains reasonably low tensile stresses until the end of life. It is also found that the PyC creep constant, K, has a striking influence on the fuel performance of TRISO-coated particles, whose stresses scale almost inversely with K. Conversely, varying the geometry of the TRISO-coated fuel particles results in little difference in fuel performance.

  18. Forecasting Daily Volume and Acuity of Patients in the Emergency Department.

    PubMed

    Calegari, Rafael; Fogliatto, Flavio S; Lucini, Filipe R; Neyeloff, Jeruza; Kuchenbecker, Ricardo S; Schaan, Beatriz D

    2016-01-01

    This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification.
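The accuracy metric used in this comparison, MAPE, is straightforward to compute for any forecasting horizon. A minimal sketch (the visit counts below are illustrative, not HCPA data):

```python
def mape(actual, forecast):
    """Mean absolute percentage error between observed and
    forecast values; lower is better."""
    return 100.0 / len(actual) * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast)
    )

daily_visits = [220, 245, 230, 260]   # illustrative ED daily counts
predicted    = [210, 250, 240, 255]
print(round(mape(daily_visits, predicted), 2))
```

Computing MAPE separately per triage class and per horizon (1, 7, 14, 21, 30 days) reproduces the kind of stratified comparison the study reports.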

  19. Forecasting Daily Volume and Acuity of Patients in the Emergency Department

    PubMed Central

    Fogliatto, Flavio S.; Neyeloff, Jeruza; Kuchenbecker, Ricardo S.; Schaan, Beatriz D.

    2016-01-01

    This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification. PMID:27725842

  20. Prioritizing public-private partnership models for public hospitals of Iran based on performance indicators.

    PubMed

    Gholamzadeh Nikjoo, Raana; Jabbari Beyrami, Hossein; Jannati, Ali; Asghari Jaafarabadi, Mohammad

    2012-01-01

    The present study was conducted to scrutinize Public-Private Partnership (PPP) models in public hospitals of different countries based on performance indicators, in order to select appropriate models for Iranian hospitals. In this mixed (quantitative-qualitative) study, a systematic review and an expert panel were used to identify the varied models of PPP as well as performance indicators. In the second step, we prioritized the performance indicators and the PPP models based on selected performance indicators using the Analytical Hierarchy Process (AHP) technique. The data were analyzed with Excel 2007 and Expert Choice 11 software. In the quality-effectiveness area, indicators such as the rate of hospital infections (100%), hospital accidents prevalence rate (73%), pure rate of hospital mortality (63%), and patient satisfaction percentage (53%); in the accessibility-equity area, indicators such as average inpatient waiting time (100%) and average outpatient waiting time (74%); and in the financial-efficiency area, indicators including average length of stay (100%), bed occupation ratio (99%), and specific income to total cost ratio (97%) were chosen as the key performance indicators. In the prioritization of the PPP models, the clinical outsourcing, management, privatization, BOO (build, own, operate), and non-clinical outsourcing models achieved high priority across the performance indicator areas. This study presents the most common PPP options in the field of public hospitals and gathers suitable evidence from experts for choosing an appropriate PPP option for public hospitals. The effect of private sector presence on public hospital performance will differ depending on which PPP option is undertaken.
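The AHP weighting step described above can be approximated without specialist software. A common sketch uses column normalisation and row averaging to approximate the principal eigenvector of a pairwise-comparison matrix (the matrix below is hypothetical, not the study's expert judgments):

```python
def ahp_weights(matrix):
    """Priority weights from a pairwise-comparison matrix via the
    column-normalisation / row-averaging approximation to the
    principal eigenvector. Weights sum to 1."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return [
        sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
        for i in range(n)
    ]

# Hypothetical 3-indicator comparison (e.g. infection rate vs.
# mortality rate vs. patient satisfaction)
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(pairwise)
```

The first indicator dominates the others pairwise, so it receives the largest weight.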

  1. Prioritizing Public-Private Partnership Models for Public Hospitals of Iran Based on Performance Indicators

    PubMed Central

    Gholamzadeh Nikjoo, Raana; Jabbari Beyrami, Hossein; Jannati, Ali; Asghari Jaafarabadi, Mohammad

    2012-01-01

    Background: The present study was conducted to scrutinize Public-Private Partnership (PPP) models in public hospitals of different countries based on performance indicators, in order to select appropriate models for Iranian hospitals. Methods: In this mixed (quantitative-qualitative) study, a systematic review and an expert panel were used to identify the varied models of PPP as well as performance indicators. In the second step, we prioritized the performance indicators and the PPP models based on selected performance indicators using the Analytical Hierarchy Process (AHP) technique. The data were analyzed with Excel 2007 and Expert Choice 11 software. Results: In the quality-effectiveness area, indicators such as the rate of hospital infections (100%), hospital accidents prevalence rate (73%), pure rate of hospital mortality (63%), and patient satisfaction percentage (53%); in the accessibility-equity area, indicators such as average inpatient waiting time (100%) and average outpatient waiting time (74%); and in the financial-efficiency area, indicators including average length of stay (100%), bed occupation ratio (99%), and specific income to total cost ratio (97%) were chosen as the key performance indicators. In the prioritization of the PPP models, the clinical outsourcing, management, privatization, BOO (build, own, operate), and non-clinical outsourcing models achieved high priority across the performance indicator areas. Conclusion: This study presents the most common PPP options in the field of public hospitals and gathers suitable evidence from experts for choosing an appropriate PPP option for public hospitals. The effect of private sector presence on public hospital performance will differ depending on which PPP option is undertaken. PMID:24688942

  2. The Nature of Pre-Service Science Teachers' Argumentation in Inquiry-Oriented Laboratory Context

    ERIC Educational Resources Information Center

    Ozdem, Yasemin; Ertepinar, Hamide; Cakiroglu, Jale; Erduran, Sibel

    2013-01-01

    The aim of this study was to investigate the kinds of argumentation schemes generated by pre-service elementary science teachers (PSTs) as they perform inquiry-oriented laboratory tasks, and to explore how argumentation schemes vary by task as well as by experimentation and discussion sessions. The model of argumentative and scientific inquiry was…

  3. Simulating the role of visual selective attention during the development of perceptual completion

    PubMed Central

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.

    2014-01-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds’ performance on a second measure, the perceptual unity task. Two parameters in the model – corresponding to areas in the occipital and parietal cortices – were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. PMID:23106728

  4. Simulating the role of visual selective attention during the development of perceptual completion.

    PubMed

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P

    2012-11-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds' performance on a second measure, the perceptual unity task. Two parameters in the model - corresponding to areas in the occipital and parietal cortices - were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. © 2012 Blackwell Publishing Ltd.

  5. Practical application of economic well-performance criteria to the optimization of fracturing treatment design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, R.W.; Phillips, A.M.

    1988-02-01

    Low-permeability reservoirs are currently being propped with sand, resin-coated sand, intermediate-density proppants, and bauxite. This wide range of proppant cost and performance has resulted in a proliferation of proppant selection models. Initially, a rather vague relationship between well depth and proppant strength dictated the choice of proppant. More recently, computerized models of varying complexity have become available that use net-present-value (NPV) calculations. The input is based on the operator's performance goals for each well and on specific reservoir properties. Simpler, noncomputerized approaches also being used include cost/performance comparisons and nomographs. Each type of model, including several of the computerized models, will be examined. By use of these models and NPV calculations, optimum fracturing treatment designs have been developed for such low-permeability reservoirs as the Prue in Oklahoma. Typical well conditions are used in each of the selection models and the results are compared. The computerized models allow the operator to determine, before fracturing, how changes in proppant type, size, and quantity will affect postfracture production over time periods ranging from several months to many years. Thus, the operator can choose the fracturing treatment design that best satisfies the economic performance goals for a particular well, regardless of whether those goals are long or short term.

  6. Due process model of procedural justice in performance appraisal: promotion versus termination scenarios.

    PubMed

    Kataoka, Heloneida C; Cole, Nina D; Flint, Douglas A

    2006-12-01

    In a laboratory study, 318 student participants (148 male, 169 female, and one who did not report sex; M age = 25.0, SD = 6.0) in introductory organizational behavior classes responded to scenarios in which performance appraisal resulted in either employee promotion or termination. Each scenario had varying levels of three procedural justice criteria for performance appraisal. For both promotion and termination outcomes, analysis showed that, as the number of criteria increased, perceptions of procedural fairness increased. A comparison between the two outcomes showed that perceptions of fairness were significantly stronger for the promotion outcome than for termination.

  7. Sailplane Glide Performance and Control Using Fixed and Articulating Winglets. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Colling, James David

    1995-01-01

    An experimental study was conducted to investigate the effects of controllable articulating winglets on glide performance and yawing moments of high performance sailplanes. Testing was conducted in the Texas A&M University 7 x 10 foot Low Speed Wind Tunnel using a full-scale model of the outboard 5.6 feet of a 15 meter class high performance sailplane wing. Different wing tip configurations could be easily mounted to the wing model. A winglet was designed in which the cant and toe angles as well as a rudder on the winglet could be adjusted to a range of positions. Cant angles used in the investigation consisted of 5, 25, and 40 degrees measured from the vertical axis. Toe-out angles ranged from 0 to 22.5 degrees. A rudder on the winglet was used to study the effects of changing the camber of the winglet airfoil on wing performance and wing yawing moments. Rudder deflections consisted of -10, 0, and 10 degrees. Test results for a fixed geometry winglet and a standard wing tip are presented to show the general behavior of winglets on sailplane wings, and the effects of boundary-layer turbulators on the winglets are also presented. By tripping the laminar boundary-layer to turbulent before laminar separation occurs, the wing performance was increased at low Reynolds numbers. The effects on the lift and drag, yawing moment, pitching moment, and wing root bending moment of the model are presented. Oil flows were used on the wing model with the fixed geometry winglet and the standard wing tip to visualize flow directions and areas of boundary layer transition. A cant angle of 25 degrees and a toe-out angle of 2.5 degrees provided an optimal increase in wing performance for the cant and toe angles tested. Maximum performance was obtained when the winglet rudder remained in the neutral position of zero degrees. Varying the cant, toe, and rudder angles from their optimized positions decreased wing performance. Although the winglet rudder proved to be more effective in increasing the yawing moment than varying the cant and toe angles, the increase in yawing moment was insignificant compared with that produced by the vertical tail. A rudder on the winglet was determined to be ineffective for providing additional yaw control.

  8. Physiologically based pharmacokinetic modeling of PLGA nanoparticles with varied mPEG content

    PubMed Central

    Li, Mingguang; Panagi, Zoi; Avgoustakis, Konstantinos; Reineke, Joshua

    2012-01-01

    Biodistribution of nanoparticles is dependent on their physicochemical properties (such as size, surface charge, and surface hydrophilicity). Clear and systematic understanding of nanoparticle properties’ effects on their in vivo performance is of fundamental significance in nanoparticle design, development and optimization for medical applications, and toxicity evaluation. In the present study, a physiologically based pharmacokinetic model was utilized to interpret the effects of nanoparticle properties on previously published biodistribution data. Biodistribution data for five poly(lactic-co-glycolic) acid (PLGA) nanoparticle formulations prepared with varied content of monomethoxypoly (ethyleneglycol) (mPEG) (PLGA, PLGA-mPEG256, PLGA-mPEG153, PLGA-mPEG51, PLGA-mPEG34) were collected in mice after intravenous injection. A physiologically based pharmacokinetic model was developed and evaluated to simulate the mass-time profiles of nanoparticle distribution in tissues. In anticipation that the biodistribution of new nanoparticle formulations could be predicted from the physiologically based pharmacokinetic model, multivariate regression analysis was performed to build the relationship between nanoparticle properties (size, zeta potential, and number of PEG molecules per unit surface area) and biodistribution parameters. Based on these relationships, characterized physicochemical properties of PLGA-mPEG495 nanoparticles (a sixth formulation) were used to calculate (predict) biodistribution profiles. For all five initial formulations, the developed model adequately simulates the experimental data indicating that the model is suitable for description of PLGA-mPEG nanoparticle biodistribution. Further, the predicted biodistribution profiles of PLGA-mPEG495 were close to experimental data, reflecting properly developed property–biodistribution relationships. PMID:22419876
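The property-biodistribution relationships described above were built with multivariate regression; the idea can be sketched in its simplest univariate form with ordinary least squares (the data pairs below are hypothetical, not the published biodistribution values):

```python
def fit_line(x, y):
    """Ordinary least squares fit of y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical pairs: PEG molecules per unit surface area vs. an
# illustrative tissue-uptake parameter for each formulation
peg_density = [0.0, 0.5, 1.0, 1.5, 2.0]
uptake      = [40.0, 33.5, 28.0, 22.5, 16.0]
a, b = fit_line(peg_density, uptake)   # b < 0: more PEG, less uptake
```

With the fitted coefficients, a characterized new formulation's property value can be plugged in to predict its uptake, mirroring how the PLGA-mPEG495 profiles were predicted from the five training formulations.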

  9. A non-invasive method to produce pressure ulcers of varying severity in a spinal cord-injured rat model.

    PubMed

    Ahmed, A K; Goodwin, C R; Sarabia-Estrada, R; Lay, F; Ansari, A M; Steenbergen, C; Pang, C; Cohen, R; Born, L J; Matsangos, A E; Ng, C; Marti, G P; Abu-Bonsrah, N; Phillips, N A; Suk, I; Sciubba, D M; Harmon, J W

    2016-12-01

    Experimental study. The objective of this study was to establish a non-invasive model to produce pressure ulcers of varying severity in animals with spinal cord injury (SCI). The study was conducted at the Johns Hopkins Hospital in Baltimore, Maryland, USA. A mid-thoracic (T7-T9) left hemisection was performed on Sprague-Dawley rats. At 7 days post SCI, rats received varying degrees of pressure on the left posterior thigh region. Laser Doppler Flowmetry was used to record blood flow. Animals were killed 12 days after SCI. A cardiac puncture was performed for blood chemistry, and full-thickness tissue was harvested for histology. Doppler blood flow after SCI prior to pressure application was 237.808±16.175 PFUs at day 7. Following pressure application, there was a statistically significant decrease in blood flow in all pressure-applied groups in comparison with controls with a mean perfusion of 118.361±18.223 (P<0.001). White blood cell counts and creatine kinase for each group were statistically significant from the control group (P=0.0107 and P=0.0028, respectively). We have created a novel animal model of pressure ulcer formation in the setting of a SCI. Histological analysis revealed different stages of injury corresponding to the amount of pressure the animals were exposed to with decreased blood flow immediately after the insult along with a subsequent marked increase in blood flow the next day, conducive to an ischemia-reperfusion injury (IRI) and a possible inflammatory response following tissue injury. Following ischemia and hypoxia secondary to microcirculation impairment, free radicals generate lipid peroxidation, leading to ischemic tissue damage. Future studies should be aimed at measuring free radicals during this period of increased blood flow, following tissue ischemia.

  10. A Large Catalog of Multiwavelength GRB Afterglows. I. Color Evolution and Its Physical Implication

    NASA Astrophysics Data System (ADS)

    Li, Liang; Wang, Yu; Shao, Lang; Wu, Xue-Feng; Huang, Yong-Feng; Zhang, Bing; Ryde, Felix; Yu, Hoi-Fung

    2018-02-01

    The spectrum of gamma-ray burst (GRB) afterglows can be studied with color indices. Here, we present a large comprehensive catalog of 70 GRBs with multiwavelength optical transient data on which we perform a systematic study to find the temporal evolution of color indices. We categorize them into two samples based on how well the color indices are evaluated. The Golden sample includes 25 bursts mostly observed by GROND, and the Silver sample includes 45 bursts observed by other telescopes. For the Golden sample, we find that 96% of the color indices do not vary over time. However, the color indices do vary during short periods in most bursts. The observed variations are consistent with effects of (i) the cooling frequency crossing the studied energy bands in a wind medium (43%) and in a constant-density medium (30%), (ii) early dust extinction (12%), (iii) transition from reverse-shock to forward-shock emission (5%), or (iv) an emergent SN emission (10%). We also study the evolutionary properties of the mean color indices for different emission episodes. We find that 86% of the color indices in the 70 bursts show constancy between consecutive ones. The color index variations occur mainly during the late GRB–SN bump, the flare, and early reverse-shock emission components. We further perform a statistical analysis of various observational properties and model parameters (the spectral index β_o^CI, the electron spectral indices p^CI, etc.) using color indices. Overall, we conclude that ∼90% of colors are constant in time and can be accounted for by the simplest external forward-shock model, while the varying color indices call for more detailed modeling.

  11. WSN system design by using an innovative neural network model to perform thermals forecasting in a urban canyon scenario

    NASA Astrophysics Data System (ADS)

    Giuseppina, Nicolosi; Salvatore, Tirrito

    2015-12-01

    Wireless Sensor Networks (WSNs) have been studied by researchers as a means to manage Heating, Ventilating and Air-Conditioning (HVAC) indoor systems. WSNs can be especially useful for regulating indoor comfort in an urban canyon scenario, where thermal parameters vary rapidly under the influence of outdoor climate changes. This paper presents an innovative neural network approach that uses data collected by the WSN to forecast the indoor temperature as outdoor conditions vary, based on climate parameters and boundary conditions typical of an urban canyon. In this work, particular attention is given to the influence of traffic jams and the number of vehicles in queue.

  12. Infrared Radiography: Modeling X-ray Imaging Without Harmful Radiation

    NASA Astrophysics Data System (ADS)

    Zietz, Otto; Mylott, Elliot; Widenhorn, Ralf

    2015-01-01

    Planar x-ray imaging is a ubiquitous diagnostic tool and is routinely performed to diagnose conditions as varied as bone fractures and pneumonia. The underlying principle is that the varying attenuation coefficients of air, water, tissue, bone, or metal implants within the body result in non-uniform transmission of x-ray radiation. Through the detection of transmitted radiation, the spatial organization and composition of materials in the body can be ascertained. In this paper, we describe an original apparatus that teaches these concepts by utilizing near infrared radiation and an up-converting phosphorescent screen to safely probe the contents of an opaque enclosure.

  13. The centrifuge facility - A life sciences research laboratory for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Fuller, Charles A.; Johnson, Catherine C.; Hargens, Alan R.

    1991-01-01

    The paper describes the centrifugal facility that is presently being developed by NASA for studies aboard the Space Station Freedom on the role of gravity, or its absence, at varying intensities for varying periods of time and with multiple model systems. Special attention is given to the design of the centrifuge system, the habitats designed to hold plants and animals, the glovebox system designed for experimental manipulations of the specimens, and the service unit. Studies planned for the facility will include experiments in the following disciplines: cell and developmental biology, plant biology, regulatory physiology, musculoskeletal physiology, behavior and performance, neurosciences, cardiopulmonary physiology, and environmental health and radiation.
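The varying gravity levels mentioned above come from simple rotational mechanics, a = omega^2 * r. A quick sketch, assuming a hypothetical 1.25 m rotor radius rather than the facility's actual dimensions:

```python
import math

# Spin rate needed for a target g-level: a = omega^2 * r.
# The 1.25 m radius is a hypothetical rotor arm, not the facility's dimension.
g = 9.81                     # m/s^2
r = 1.25                     # m
omega = math.sqrt(g / r)     # rad/s for 1 g at radius r
rpm = omega * 60.0 / (2.0 * math.pi)

def g_level(radius):
    """Fractional g at another radius for the same spin rate."""
    return omega ** 2 * radius / g

print(round(rpm, 1))            # roughly 27 rpm
print(round(g_level(0.5), 2))   # lower g closer to the hub: varying intensity along the arm
```

The gradient along the arm is also why habitat placement matters: specimens at different radii experience different effective gravity at the same spin rate.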

  14. Time-varying q-deformed dark energy interacts with dark matter

    NASA Astrophysics Data System (ADS)

    Dil, Emre; Kolay, Erdinç

    We propose a new model for studying the dark constituents of the universe by regarding dark energy as a q-deformed scalar field interacting with dark matter, in the framework of standard general relativity. Here we assume that the number of particles in each mode of the q-deformed scalar field varies in time through particle creation and annihilation. We first describe the q-deformed scalar field dark energy quantum field theoretically, then construct the action and the dynamical structure of these interacting dark sectors in order to study the dynamics of the model. We perform a phase-space analysis of the model to confirm and interpret our proposal by searching for stable attractor solutions implying a late-time accelerating phase of the universe. We then obtain the result that when the interaction and the equation-of-state parameter of the dark matter evolve from their present-day values to a particular value, the dark energy turns out to be a q-deformed scalar field.

  15. Can biophysical properties of submersed macrophytes be determined by remote sensing?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malthus, T.J.; Ciraolo, G.; La Loggia, G.

    1997-06-01

    This paper details the development of a computationally efficient Monte Carlo simulation program to model photon transport through submersed plant canopies, with emphasis on seagrass communities. The model incorporates three components: the transmission of photons through a water column of varying depth and turbidity; the interaction of photons within a submersed plant canopy of varying biomass; and interactions with the bottom substrate. The three components of the model are discussed. Simulations were performed based on measured parameters for Posidonia oceanica and compared to measured subsurface reflectance spectra made over comparable seagrass communities in Sicilian coastal waters. It is shown that the output is realistic. Further simulations are undertaken to investigate the effect of depth and turbidity of the overlying water column. Both sets of results indicate the rapid loss of canopy signal as depth increases and water column phytoplankton concentrations increase. The implications for the development of algorithms for the estimation of submersed canopy biophysical parameters are briefly discussed.
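The first model component, transmission through the water column, can be sketched as a toy Monte Carlo: each photon draws an exponentially distributed free path, and the surviving fraction should match the analytic Beer law exp(-c*d). The attenuation coefficient below is illustrative:

```python
import math
import random

# Toy Monte Carlo for the water-column component only: a photon with an
# exponentially distributed free path reaches the canopy if the path exceeds
# the depth. The beam attenuation coefficient c is an illustrative value.
random.seed(1)

def transmitted_fraction(depth_m, c_per_m, n=100_000):
    hits = 0
    for _ in range(n):
        free_path = -math.log(1.0 - random.random()) / c_per_m
        if free_path >= depth_m:
            hits += 1
    return hits / n

for depth in (1.0, 2.0, 4.0):
    mc = transmitted_fraction(depth, 0.3)
    # agrees with the analytic Beer law exp(-c*d) to within Monte Carlo noise
    print(depth, round(mc, 3), round(math.exp(-0.3 * depth), 3))
```

The full program adds canopy and substrate interactions on top of this, but the rapid signal loss with depth reported above is already visible in the water-column term alone.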

  16. Computational Modeling and Experimental Validation of Shock Induced Damage in Woven E-Glass/Vinylester Laminates

    NASA Astrophysics Data System (ADS)

    Hufner, D. R.; Augustine, M. R.

    2018-05-01

    A novel experimental method was developed to simulate underwater explosion pressure pulses within a laboratory environment. An impact-based experimental apparatus was constructed, capable of generating pressure pulses with basic character similar to underwater explosions while also allowing the pulse to be tuned to different intensities. Having the capability to vary the shock impulse was considered essential to producing various levels of shock-induced damage without the need to modify the fixture. The experimental apparatus and test method are considered ideal for investigating the shock response of composite material systems and/or experimental validation of new material models. One such test program is presented herein, in which a series of E-glass/Vinylester laminates were subjected to a range of shock pulses that induced varying degrees of damage. Analysis-test correlations were performed using a rate-dependent constitutive model capable of representing anisotropic damage and ultimate yarn failure. Agreement between analytical predictions and experimental results was considered acceptable.

  17. Numerical simulation of two-dimensional flow over a heated carbon surface with coupled heterogeneous and homogeneous reactions

    NASA Astrophysics Data System (ADS)

    Johnson, Ryan Federick; Chelliah, Harsha Kumar

    2017-01-01

    For a range of flow and chemical timescales, numerical simulations of two-dimensional laminar flow over a reacting carbon surface were performed to further understand the complex coupling between heterogeneous and homogeneous reactions. An open-source computational package (OpenFOAM®) was used with previously developed lumped heterogeneous reaction models for carbon surfaces and a detailed homogeneous reaction model for CO oxidation. The influence of finite-rate chemical kinetics was explored by varying the surface temperature from 1800 to 2600 K, while flow residence time effects were explored by varying the free-stream velocity up to 50 m/s. The dependence of the reacting boundary layer structure on residence time was analysed by extracting the ratio of the chemical source and species diffusion terms. The important contributions of radical species reactions to the overall carbon removal rate, which are often neglected in multi-dimensional simulations, are highlighted. The results provide a framework for future development and validation of lumped heterogeneous reaction models based on multi-dimensional reacting flow configurations.
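The finite-rate kinetics explored over the 1800-2600 K range follow Arrhenius scaling, k = A * exp(-Ea/RT). A quick sketch with an illustrative activation energy and prefactor (not the paper's lumped-model values) shows how strongly the surface rate varies across that temperature span:

```python
import math

# Arrhenius sensitivity of a surface reaction rate over 1800-2600 K.
# Ea and A are illustrative assumptions, not the paper's fitted parameters.
R = 8.314          # J/(mol K)
Ea = 180e3         # J/mol
A = 1.0e6          # 1/s, arbitrary units
k = lambda T: A * math.exp(-Ea / (R * T))

ratio = k(2600) / k(1800)
print(round(ratio, 1))   # roughly a 40x speed-up across the studied range
```

A swing of this size is what moves the boundary layer between kinetics-limited and diffusion-limited regimes, which is why both surface temperature and residence time had to be varied.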

  18. Application of local indentations for film cooling of gas turbine blade leading edge

    NASA Astrophysics Data System (ADS)

    Petelchyts, V. Yu.; Khalatov, A. A.; Pysmennyi, D. N.; Dashevskyy, Yu. Ya.

    2016-09-01

    The paper presents results of computer simulation of the film cooling on the turbine blade leading edge model where the air coolant is supplied through radial holes and row of cylindrical inclined holes placed inside hemispherical dimples or trench. The blowing factor was varied from 0.5 to 2.0. The model size and key initial parameters for simulation were taken as for a real blade of a high-pressure high-performance gas turbine. Simulation was performed using commercial software code ANSYS CFX. The simulation results were compared with reference variant (no dimples or trench) both for the leading edge area and for the flat plate downstream of the leading edge.

  19. Controlling reactivity of nanoporous catalyst materials by tuning reaction product-pore interior interactions: Statistical mechanical modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Ackerman, David M.; Lin, Victor S.-Y.

    2013-04-02

    Statistical mechanical modeling is performed of a catalytic conversion reaction within a functionalized nanoporous material to assess the effect of varying the reaction product-pore interior interaction from attractive to repulsive. A strong enhancement in reactivity is observed not just due to the shift in reaction equilibrium towards completion but also due to enhanced transport within the pore resulting from reduced loading. The latter effect is strongest for highly restricted transport (single-file diffusion), and applies even for irreversible reactions. The analysis is performed utilizing a generalized hydrodynamic formulation of the reaction-diffusion equations which can reliably capture the complex interplay between reaction and restricted transport.
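The single-file regime mentioned above can be sketched with a minimal lattice Monte Carlo: particles hop to adjacent sites only if vacant, so they can never exchange positions. Lattice size and particle count below are arbitrary:

```python
import random

# Single-file exclusion sketch: particles on a 1D lattice hop to empty
# neighbor sites only, so they can never pass one another - the restricted
# transport limit discussed above. Parameters are arbitrary toy values.
random.seed(2)
L, N, steps = 50, 10, 20000
pos = sorted(random.sample(range(L), N))

for _ in range(steps):
    i = random.randrange(N)
    new = pos[i] + random.choice((-1, 1))
    if 0 <= new < L and new not in set(pos):
        pos[i] = new            # hop succeeds only onto a vacant site

print(pos)                      # the initial left-to-right ordering is preserved
```

Because ordering is preserved, a tracer's displacement grows much more slowly than under normal diffusion, which is why reduced pore loading gives such a large transport (and hence reactivity) gain in this regime.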

  20. Electrochromic Radiator Coupon Level Testing and Full Scale Thermal Math Modeling for Use on Altair Lunar Lander

    NASA Technical Reports Server (NTRS)

    Sheth, Rubik; Bannon, Erika; Bower, Chad

    2009-01-01

    In order to control system and component temperatures, many spacecraft thermal control systems use a radiator coupled with a pumped fluid loop to reject waste heat from the vehicle. Since heat loads and radiation environments can vary considerably according to mission phase, the thermal control system must be able to vary the heat rejection. The ability to "turn down" the heat rejected from the thermal control system is critically important when designing the system. Electrochromic technology as a radiator coating is being investigated to vary the amount of heat being rejected by a radiator. Coupon level tests were performed to test the feasibility of the technology. Furthermore, thermal math models were developed to better understand the turndown ratios required by full scale radiator architectures to handle the various operation scenarios during a mission profile for Altair Lunar Lander. This paper summarizes results from coupon level tests as well as thermal math models developed to investigate how electrochromics can be used to provide the largest turn down ratio for a radiator. Data from the various design concepts of radiators and their architectures are outlined. Recommendations are made on which electrochromic radiator concept should be carried further for future thermal vacuum testing.
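The turndown idea can be sketched with a gray-body radiator balance, Q = eps * sigma * A * (Ts^4 - Tenv^4): because Q scales linearly with emissivity, the achievable turndown ratio equals the electrochromic emissivity swing. The 0.2-0.8 range and panel geometry below are assumptions, not measured device data:

```python
# Gray-body radiator sketch. The emissivity swing (0.2 -> 0.8), panel area,
# and temperatures are illustrative assumptions, not Altair design values.
sigma = 5.670e-8            # W/(m^2 K^4), Stefan-Boltzmann constant
A = 2.0                     # m^2, illustrative panel area
T_s, T_env = 300.0, 150.0   # K, surface and effective sink temperatures

def q_rejected(eps):
    return eps * sigma * A * (T_s ** 4 - T_env ** 4)

turndown = q_rejected(0.8) / q_rejected(0.2)
print(round(q_rejected(0.8), 1))   # W rejected at full emissivity
print(turndown)                    # turndown ratio set purely by the emissivity swing
```

In practice surface temperature also shifts as the coating state changes, so full-scale thermal math models like those described above are needed to predict the achievable system-level turndown.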

  1. Electrochromic Radiator Coupon Level Testing and Full Scale Thermal Math Modeling for Use on Altair Lunar Lander

    NASA Technical Reports Server (NTRS)

    Bannon, Erika T.; Bower, Chad E.; Sheth, Rubik; Stephan, Ryan

    2010-01-01

    In order to control system and component temperatures, many spacecraft thermal control systems use a radiator coupled with a pumped fluid loop to reject waste heat from the vehicle. Since heat loads and radiation environments can vary considerably according to mission phase, the thermal control system must be able to vary the heat rejection. The ability to "turn down" the heat rejected from the thermal control system is critically important when designing the system. Electrochromic technology as a radiator coating is being investigated to vary the amount of heat rejected by a radiator. Coupon level tests were performed to test the feasibility of this technology. Furthermore, thermal math models were developed to better understand the turndown ratios required by full scale radiator architectures to handle the various operation scenarios encountered during a mission profile for the Altair Lunar Lander. This paper summarizes results from coupon level tests as well as the thermal math models developed to investigate how electrochromics can be used to increase turn down ratios for a radiator. Data from the various design concepts of radiators and their architectures are outlined. Recommendations are made on which electrochromic radiator concept should be carried further for future thermal vacuum testing.

  2. Brain regions engaged by part- and whole-task performance in a video game: a model-based test of the decomposition hypothesis.

    PubMed

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Anderson, Abraham R; Poole, Ben; Qin, Yulin

    2011-12-01

    Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for two-part games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and activation patterns in various brain regions. The activation predictions concerned both tonic activation that was constant in these regions during performance of the game and phasic activation that occurred when there was resource competition. The model's predictions were confirmed about how tonic and phasic activation in different regions would vary with condition. These results support the Decomposition Hypothesis that the execution of a complex task can be decomposed into a set of information-processing components and that these components combine unchanged in different task conditions. In addition, individual differences in learning gains were predicted by individual differences in phasic activation in those regions that displayed highest tonic activity. This individual difference pattern suggests that the rate of learning of a complex skill is determined by capacity limits.

  3. Data driven propulsion system weight prediction model

    NASA Astrophysics Data System (ADS)

    Gerth, Richard J.

    1994-10-01

    The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance-driven project, the performance models for alternative designs were well understood. The next tradeoff is weight. Since engine weight is known to vary with thrust level, a model is required that allows discrimination between engines that produce the same thrust. Above all, the model had to be rooted in data, with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of engine weight as a function of various component performance parameters. This was considered a reasonable level at which to begin the project because the data would be readily available and it would be at the level of most paper engines, prior to detailed component design.
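A minimal version of such a data-driven model is a power-law fit, log W = a + b * log T, by least squares. The thrust/weight pairs below are made-up stand-ins for a real engine database:

```python
import math

# Power-law weight model fit in log-log space. The (thrust kN, weight kg)
# pairs are fabricated placeholders, not data for any real engines.
engines = [(70, 250), (980, 1500), (1900, 3200), (2100, 3500), (7900, 8400)]

x = [math.log(t) for t, _ in engines]
y = [math.log(w) for _, w in engines]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

def predict(thrust_kn):
    """Estimated engine weight in kg for a given vacuum thrust."""
    return math.exp(a) * thrust_kn ** b

print(round(b, 2))   # exponent below 1: weight grows sub-linearly with thrust
```

The paper's model adds component performance parameters as further regressors, which is what lets it discriminate between engines producing the same thrust.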

  4. Structural studies of the crystallisation of microporous materials

    NASA Astrophysics Data System (ADS)

    Davies, Andrew Treharne

    A range of powerful synchrotron radiation characterisation techniques have been used to study fundamental aspects of the formation of microporous solids, specifically aluminosilicates, heteroatom substituted aluminophosphates and titanosilicates. This work has been performed with the aim of investigating in situ the structural changes occurring during crystallisation and post-synthetic treatment. In situ EDXRD was used to follow the crystallisation of these materials under a wide range of synthesis conditions using a hydrothermal cell and a solid-state detector array. A quantitative analysis of the crystallisation kinetics was performed for the large-pore aluminosilicate, zeolite A, using a simple mathematical model to calculate the activation energy of formation. The results obtained were found to agree closely with both the experimental results and theoretical models of others. A qualitative study of the effect of altering the synthesis conditions was also carried out for this material. Similar kinetic studies were then performed for a range of microporous aluminophosphates and their cobalt-substituted derivatives in order to follow the effects of varying synthesis conditions such as the synthesis temperature, organic template type, and cobalt concentration. Distinct trends were noted in the formation times, stability and nature of the resulting crystalline phases as conditions were varied. The relationship between the cobalt and organic template molecules during crystallisation was considered in some detail with reference to other experimental data and theoretical models. The aluminophosphate studies were subsequently extended to a range of other heteroatom substituted aluminophosphates, using in situ EDXRD, complemented by EXAFS, which allowed investigation of the local environments around the heteroatoms within the microporous structure.
EDXRD and EXAFS studies have been performed on the microporous titanosilicate, ETS-10, while the thermal stability of this material has also been investigated in situ using synchrotron X-ray diffraction in conjunction with a high-temperature environmental cell.
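The kinetic analysis described above ends in an Arrhenius fit: rate constants k(T) from the in situ runs give the activation energy from the slope of ln k versus 1/T. In the sketch below, synthetic k values are generated from an assumed Ea (60 kJ/mol, purely illustrative) and then refit to recover it:

```python
import math

# Arrhenius extraction of an activation energy from crystallisation kinetics:
# ln k = ln A - Ea/(R*T), so the slope of ln k vs 1/T gives -Ea/R.
# Ea_true, A, and the temperatures are illustrative, not the thesis values.
R = 8.314
Ea_true = 60e3        # J/mol
A = 1.0e5
temps = [403.0, 423.0, 443.0, 463.0]                  # K
k_obs = [A * math.exp(-Ea_true / (R * T)) for T in temps]

# least-squares line of ln k versus 1/T
x = [1.0 / T for T in temps]
y = [math.log(k) for k in k_obs]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
Ea_fit = -slope * R
print(round(Ea_fit / 1e3, 1))   # recovers the 60.0 kJ/mol put in
```

With real EDXRD data the k values come from fitting the growth of a diffraction peak at each temperature, and the scatter about the Arrhenius line indicates how well a single activation energy describes formation.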

  5. Parametric Evaluation of SiC/SiC Composite Cladding with UO2 Fuel for LWR Applications: Fuel Rod Interactions and Impact of Nonuniform Power Profile in Fuel Rod

    NASA Astrophysics Data System (ADS)

    Singh, G.; Sweet, R.; Brown, N. R.; Wirth, B. D.; Katoh, Y.; Terrani, K.

    2018-02-01

    SiC/SiC composites are candidates for accident tolerant fuel cladding in light water reactors. In the extreme nuclear reactor environment, SiC-based fuel cladding will be exposed to neutron damage, significant heat flux, and a corrosive environment. To ensure reliable and safe operation of accident tolerant fuel cladding concepts such as SiC-based materials, it is important to assess thermo-mechanical performance under in-reactor conditions including irradiation and realistic temperature distributions. The effect of non-uniform dimensional changes caused by neutron irradiation with spatially varying temperatures, along with the closing of the fuel-cladding gap, on the stress development in the cladding over the course of irradiation was evaluated. The effect of a non-uniform circumferential power profile in the fuel rod on the mechanical performance of the cladding is also evaluated. These analyses have been performed using the BISON fuel performance modeling code and the commercial finite element analysis code Abaqus. A constitutive model is constructed and solved numerically to predict the stress distribution in the cladding under normal operating conditions. The dependence of dimensions and thermophysical properties on irradiation dose and temperature has been incorporated into the models. Initial scoping results from parametric analyses provide time varying stress distributions in the cladding as well as the interaction of the fuel rod with the cladding under different conditions of initial fuel rod-cladding gap and linear heat rate. It is found that a non-uniform circumferential power profile in the fuel rod may cause significant lateral bowing in the cladding, motivating further analysis and evaluation.

  6. Spatially organizing biochemistry: choosing a strategy to translate synthetic biology to the factory.

    PubMed

    Jakobson, Christopher M; Tullman-Ercek, Danielle; Mangan, Niall M

    2018-05-29

    Natural biochemical systems are ubiquitously organized both in space and time. Engineering the spatial organization of biochemistry has emerged as a key theme of synthetic biology, with numerous technologies promising improved biosynthetic pathway performance. One strategy, however, may produce disparate results for different biosynthetic pathways. We use a spatially resolved kinetic model to explore this fundamental design choice in systems and synthetic biology. We predict that two example biosynthetic pathways have distinct optimal organization strategies that vary based on pathway-dependent and cell-extrinsic factors. Moreover, we demonstrate that the optimal design varies as a function of kinetic and biophysical properties, as well as culture conditions. Our results suggest that organizing biosynthesis has the potential to substantially improve performance, but that choosing the appropriate strategy is key. The flexible design-space analysis we propose can be adapted to diverse biosynthetic pathways, and lays a foundation to rationally choose organization strategies for biosynthesis.

  7. CDTI: Crew Function Assessment

    NASA Technical Reports Server (NTRS)

    Tole, J. R.; Young, L. R.

    1982-01-01

    Man-machine interaction often requires the operator to perform a stereotyped scan of instruments to monitor and/or control a system. In situations in which this type of behavior exists, such as instrument flight, the scan pattern has been shown to be altered by the imposition of simultaneous verbal tasks. The relationship between pilots' visual scan of instruments and mental workload is described. A verbal loading task of varying difficulty caused pilots to stare at the primary instrument as the difficulty increased and to shed looks at instruments of lesser importance. The verbal loading task also affected the rank ordering of scanning sequences. The behavior of pilots with widely varying skill levels suggested that these effects occur most strongly at lower skill levels and are less apparent at high skill levels. A graphical interpretation of the hypothetical relationship between skill, workload, and performance is introduced, and modeling results are presented to support this interpretation.

  8. Parachuting with bristled wings

    NASA Astrophysics Data System (ADS)

    Kasoju, Vishwa; Santhanakrishnan, Arvind; Senter, Michael; Armel, Kristen; Miller, Laura

    2017-11-01

    Free takeoff flight recordings of thrips (body length <1 mm) show that they can intermittently cease flapping and instead float passively downwards by spreading their bristled wings. Such drag-based parachuting can lower the speed of falling and aid in long-distance dispersal by minimizing the energetic demands of active flapping flight. However, the role of bristled wings in parachuting remains unclear. In this study, we examine whether using bristled wings lowers drag forces in parachuting as compared to solid (non-bristled) wings. Wing angles and settling velocities were obtained from free takeoff flight videos. A solid wing model and a bristled wing model with a bristle spacing to diameter ratio of 5, each performing translational motion, were comparatively examined using a dynamically scaled robotic model. We measured the forces generated at wing angles varying from 45 to 75 degrees across a Reynolds number (Re) range of 1 to 15. Drag experienced by the wings decreased in both wing models as Re varied from 1 to 15. Leakiness of flow through the bristles, visualized using spanwise PIV, and implications for force generation will be presented. Numerical simulations will be used to investigate the stability of free fall using bristled wings.

  9. A high-frequency warm shallow water acoustic communications channel model and measurements.

    PubMed

    Chitre, Mandar

    2007-11-01

    Underwater acoustic communication is a core enabling technology with applications in ocean monitoring using remote sensors and autonomous underwater vehicles. One of the more challenging underwater acoustic communication channels is the medium-range very shallow warm-water channel, common in tropical coastal regions. This channel exhibits two key features, extensive time-varying multipath and high levels of non-Gaussian ambient noise due to snapping shrimp, both of which limit the performance of traditional communication techniques. A good understanding of the communications channel is key to the design of communication systems. It aids in the development of signal processing techniques as well as in the testing of those techniques via simulation. In this article, a physics-based channel model is developed for the very shallow warm-water acoustic channel at the high frequencies of interest to medium-range communication system developers. The model is based on ray acoustics and includes time-varying statistical effects as well as the non-Gaussian ambient noise statistics observed during channel studies. The model is calibrated and its accuracy validated using measurements made at sea.

  10. PAB3D Simulations for the CAWAPI F-16XL

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa; Abdol-Hamid, K. S.; Massey, Steven J.

    2007-01-01

    Numerical simulations of the flow around the F-16XL are performed as a contribution to the Cranked Arrow Wing Aerodynamic Project International (CAWAPI) using the PAB3D CFD code. Two turbulence models are used in the calculations: a standard k-ε model and the Shih-Zhu-Lumley (SZL) algebraic stress model (ASM). Seven flight conditions are simulated in which the free-stream Mach number varies from 0.242 to 0.97 and the angle of attack varies from 0 to 20 deg. Computational results, surface static pressure, boundary layer velocity profiles, and skin friction are presented and compared with flight data. Numerical results are generally in good agreement with flight data, considering that only one grid resolution is utilized for the different flight conditions simulated in this study. The ASM results are closer to the flight data than the k-ε model results. The ASM predicted a stronger primary vortex; however, the origin of the vortex and its footprint are approximately the same as in the k-ε predictions.

  11. LPV gain-scheduled control of SCR aftertreatment systems

    NASA Astrophysics Data System (ADS)

    Meisami-Azad, Mona; Mohammadpour, Javad; Grigoriadis, Karolos M.; Harold, Michael P.; Franchek, Matthew A.

    2012-01-01

    Hydrocarbons, carbon monoxide, and some other polluting emissions produced by diesel engines are usually lower than those produced by gasoline engines. While great strides have been made in the exhaust aftertreatment of vehicular pollutants, the elimination of nitrogen oxides (NOx) from diesel vehicles is still a challenge. The primary reason is that diesel combustion is a fuel-lean process, and hence there is significant unreacted oxygen in the exhaust. Selective catalytic reduction (SCR) is a well-developed technology for power plants and has recently been employed for reducing NOx emissions from automotive sources, in particular heavy-duty diesel engines. In this article, we develop a linear parameter-varying (LPV) feedforward/feedback control design method for the SCR aftertreatment system to decrease NOx emissions while keeping ammonia slippage to a desired low level downstream of the catalyst. The performance of the closed-loop system obtained from the interconnection of the SCR system and the output feedback LPV control strategy is then compared with other control design methods, including sliding mode and observer-based static state-feedback parameter-varying control. To reduce the computational complexity involved in the control design process, the number of LPV parameters in the developed quasi-LPV (qLPV) model is reduced by applying the principal component analysis technique. An LPV feedback/feedforward controller is then designed for the qLPV model with a reduced number of scheduling parameters. The designed full-order controller is further simplified to a first-order transfer function with a parameter-varying gain and pole. Finally, simulation results using both a low-order model and a high-fidelity, high-order model of SCR reactions in GT-POWER interfaced with MATLAB/SIMULINK illustrate the high NOx conversion efficiency of the closed-loop SCR system using the proposed parameter-varying control law.
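The parameter-reduction step can be sketched with a toy principal component analysis: two strongly correlated scheduling signals collapse onto one dominant eigenvector of their covariance matrix. The signals below are synthetic, not the SCR model's actual qLPV parameters:

```python
import math
import random

# PCA sketch of scheduling-parameter reduction: two correlated signals are
# summarized by the leading eigenvalue of their 2x2 covariance matrix.
# The signals are synthetic stand-ins for qLPV scheduling parameters.
random.seed(3)
n = 2000
latent = [random.gauss(0.0, 1.0) for _ in range(n)]
p1 = [v + 0.05 * random.gauss(0.0, 1.0) for v in latent]        # e.g. temperature-like
p2 = [2.0 * v + 0.05 * random.gauss(0.0, 1.0) for v in latent]  # e.g. flow-like

m1, m2 = sum(p1) / n, sum(p2) / n
a = sum((u - m1) ** 2 for u in p1) / n                  # var(p1)
c = sum((v - m2) ** 2 for v in p2) / n                  # var(p2)
b = sum((u - m1) * (v - m2) for u, v in zip(p1, p2)) / n  # cov(p1, p2)

tr, det = a + c, a * c - b * b
lam1 = (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0   # leading eigenvalue
explained = lam1 / tr                                # fraction of total variance
print(round(explained, 3))   # close to 1: one scheduling parameter suffices
```

In the paper the same idea is applied to the full qLPV parameter set, trading a small modeling error for a much simpler gain-scheduled controller.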

  12. Feedforward object-vision models only tolerate small image variations compared to human

    PubMed Central

    Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2014-01-01

    Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only under low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986

  13. Robust Damage-Mitigating Control of Aircraft for High Performance and Structural Durability

    NASA Technical Reports Server (NTRS)

    Caplin, Jeffrey; Ray, Asok; Joshi, Suresh M.

    1999-01-01

    This paper presents the concept and a design methodology for robust damage-mitigating control (DMC) of aircraft. The goal of DMC is to simultaneously achieve high performance and structural durability. The controller design procedure involves consideration of damage at critical points of the structure, as well as the performance requirements of the aircraft. An aeroelastic model of the wings has been formulated and is incorporated into a nonlinear rigid-body model of aircraft flight-dynamics. Robust damage-mitigating controllers are then designed using the H(infinity)-based structured singular value (mu) synthesis method based on a linearized model of the aircraft. In addition to penalizing the error between the ideal performance and the actual performance of the aircraft, frequency-dependent weights are placed on the strain amplitude at the root of each wing. Using each controller in turn, the control system is put through an identical sequence of maneuvers, and the resulting (varying amplitude cyclic) stress profiles are analyzed using a fatigue crack growth model that incorporates the effects of stress overload. Comparisons are made to determine the impact of different weights on the resulting fatigue crack damage in the wings. The results of simulation experiments show significant savings in fatigue life of the wings while retaining the dynamic performance of the aircraft.
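The fatigue bookkeeping behind damage mitigation can be sketched with the Paris law, da/dN = C * (dK)^m with dK = Y * dsigma * sqrt(pi * a); because cycles-to-failure scale as dsigma^(-m), even modest reductions in stress amplitude buy large life extensions. Constants below are illustrative, not the paper's wing-root model:

```python
import math

# Paris-law crack-growth sketch. C, m, and Y are illustrative constants for a
# generic metallic structure, not the paper's fatigue model parameters.
C, m_exp, Y = 1.0e-11, 3.0, 1.1

def cycles_to_grow(a0_m, af_m, dsigma_mpa, steps=10000):
    """Integrate dN = da / (C * dK^m) from a0 to af with a midpoint rule."""
    da = (af_m - a0_m) / steps
    n_cycles = 0.0
    for i in range(steps):
        a = a0_m + (i + 0.5) * da
        dk = Y * dsigma_mpa * math.sqrt(math.pi * a)   # MPa*sqrt(m)
        n_cycles += da / (C * dk ** m_exp)
    return n_cycles

low = cycles_to_grow(1e-3, 10e-3, 80.0)    # gentler stress cycles
high = cycles_to_grow(1e-3, 10e-3, 120.0)  # harsher stress cycles
print(low / high)   # ~ (120/80)^3 = 3.375x more cycles at the lower amplitude
```

This cubic-law leverage is exactly why penalizing wing-root strain amplitude in the controller can extend fatigue life substantially while barely degrading maneuver performance.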

  14. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    PubMed

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs in combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex model. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by Observer Assessment of Alertness/Scores. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with objective function value, corrected Akaike Information Criterion (AICc), and Spearman ranked correlation. A model was arbitrarily defined as accurate if the predicted probability is <0.5 from the observed response. The effect-site concentrations tested ranged from 1 to 76 ng/mL and from 5 to 80 ng/mL for midazolam and alfentanil, respectively. Midazolam and alfentanil had synergistic effects in colonoscopy and EGD, but additivity was observed in the intersession group. Adequate prediction rates were 84% to 85% in the intersession group, 84% to 88% during colonoscopy, and 82% to 87% during EGD. The reduced Greco and Fixed alfentanil concentration required for 50% of the patients to achieve targeted response Hierarchy models performed better with comparable predictive strength. The reduced Greco model had the lowest AICc with strong correlation in all 3 phases of endoscopy. Dynamic, rather than fixed, γ and γalf in the Hierarchy model improved model fit. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable with acceptable predictive ability based on adequate clinical correlation. 
We suggest that this model has practical clinical value for patients undergoing procedures with varying degrees of stimulation.
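The corrected Akaike Information Criterion used to rank the response surface models in this record is a standard formula; a minimal sketch (with made-up log-likelihood, parameter count, and sample size):

```python
import math

def aicc(log_likelihood: float, k: int, n: int) -> float:
    """Corrected AIC: ordinary AIC plus a small-sample penalty.

    k: number of fitted parameters, n: number of observations.
    Lower AICc indicates a better trade-off between fit and complexity.
    """
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Illustrative numbers only: a 3-parameter model fit to 30 observations.
score = aicc(-100.0, 3, 30)
```

As n grows, the correction term vanishes and AICc converges to plain AIC, which is why the correction matters most for small clinical samples such as the 33 patients here.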

  15. Estimating workload using EEG spectral power and ERPs in the n-back task

    NASA Astrophysics Data System (ADS)

    Brouwer, Anne-Marie; Hogervorst, Maarten A.; van Erp, Jan B. F.; Heffelaar, Tobias; Zimmerman, Patrick H.; Oostenveld, Robert

    2012-08-01

    Previous studies indicate that both electroencephalogram (EEG) spectral power (in particular the alpha and theta band) and event-related potentials (ERPs) (in particular the P300) can be used as a measure of mental work or memory load. We compare their ability to estimate workload level in a well-controlled task. In addition, we combine both types of measures in a single classification model to examine whether this results in higher classification accuracy than either one alone. Participants watched a sequence of visually presented letters and indicated whether or not the current letter was the same as the one (n instances) before. Workload was varied by varying n. We developed different classification models using ERP features, frequency power features or a combination (fusion). Training and testing of the models simulated an online workload estimation situation. All our ERP, power and fusion models provide classification accuracies between 80% and 90% when distinguishing between the highest and the lowest workload condition after 2 min. For 32 out of 35 participants, classification was significantly higher than chance level after 2.5 s (or one letter) as estimated by the fusion model. Differences between the models are rather small, though the fusion model performs better than the other models when only short data segments are available for estimating workload.
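The fusion idea in this record (concatenating ERP and spectral power features before classification) can be sketched with synthetic data. Everything below is an assumption for illustration: the features, class separations, and the least-squares linear classifier are generic stand-ins, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: single-trial ERP features (e.g. P300 amplitude) and
# spectral power features (alpha/theta bands), separated between the
# low-workload (0) and high-workload (1) classes.
n = 200
labels = rng.integers(0, 2, n)
erp = rng.normal(labels[:, None] * 1.0, 1.0, (n, 4))
power = rng.normal(labels[:, None] * 0.8, 1.0, (n, 6))

def fit_linear(X, y):
    """Least-squares linear classifier (a generic stand-in for the study's models)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, 2.0 * y - 1.0, rcond=None)
    return w

def accuracy(X, y, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean((Xb @ w > 0) == (y == 1)))

# "Fusion" simply concatenates both feature blocks before training.
acc_erp = accuracy(erp, labels, fit_linear(erp, labels))
acc_power = accuracy(power, labels, fit_linear(power, labels))
fused = np.hstack([erp, power])
acc_fusion = accuracy(fused, labels, fit_linear(fused, labels))
```

Feature-level concatenation is only one fusion design; combining the decision outputs of separate ERP and power classifiers is a common alternative.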

  16. Modeling Environmental Controls on Tree Water Use at Different Temporal scales

    NASA Astrophysics Data System (ADS)

    Guan, H.; Wang, H.; Simmons, C. T.

    2014-12-01

    Vegetation covers 70% of land surface, significantly influencing water and carbon exchange between land surface and the atmosphere. Vegetation transpiration (Et) contributes 80% of the global terrestrial evapotranspiration, making an adequate illustration of how important vegetation is to any hydrological or climatological applications. Transpiration can be estimated through upscaling from sap flow measurements on selected trees. Alternatively, transpiration (or tree water use for forests) can be correlated with environmental variables or estimated in land surface simulations in which a canopy conductance (gc) model is often used. Transpiration and canopy conductance are constrained by supply and demand control factors. Some previous studies estimated Et and gc considering the stresses from both the supply (soil water condition) and demand (e.g. temperature, vapor pressure deficit, solar radiation) factors, while some only considered the demand controls. In this study, we examined the performance of two types of models at daily and half-hourly scales for transpiration and canopy conductance modelling based on a native species in South Australia. The results show that the significance of soil water condition for Et and gc modelling varies with time scales. The model parameter values also vary across time scales. This result calls for attention in choosing models and parameter values for soil-plant-atmosphere continuum and land surface modeling.

  17. Key performance indicators' assessment to develop best practices in an Emergency Medical Communication Centre.

    PubMed

    Penverne, Yann; Leclere, Brice; Labady, Julien; Berthier, Frederic; Jenvrin, Joel; Javaudin, Francois; Batard, Eric; Montassier, Emmanuel

    2017-05-17

Emergency Medical Communication Centre (EMCC) represents a pivotal link in the chain of survival for those requiring rapid response for out-of-hospital medical emergencies. Assessing and grading the performance of EMCCs is warranted as it can affect the health and safety of the served population. The aim of our work was to describe the activity of an EMCC and to explore the associations between different key performance indicators. We carried out our prospective observational study in the EMCC of Nantes, France, from 6 June 2011 to 6 June 2015. The EMCC performance was assessed with the following key performance indicators: answered calls, Quality of Service 20 s (QS20), occupation rate and average call duration. A total of 35 073 h of dispatch activity were analysed. 1 488 998 emergency calls were answered. The emergency call incidence varied slightly from 274 to 284 calls/1000 citizens/year between 2011 and 2015. The median occupation rate was 35% (25-44). QS20 was correlated negatively with the occupation rate (Spearman's ρ: -0.78). The structural equation model confirmed that the occupation rate was highly correlated with the QS20 (standardized coefficient: -0.89). For an occupation rate of 26%, the target value estimated by our polynomial model, the probability of achieving a QS20 superior or equal to 95% varied between 56 and 84%. The occupation rate appeared to be the most important factor contributing towards the QS20. Our data will be useful to develop best practices and guidelines in the field of emergency medicine communication centres.
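The Spearman rank correlation reported in this record (ρ: -0.78 between occupation rate and QS20) is the Pearson correlation of the two variables' ranks; a minimal sketch with made-up hourly figures:

```python
def rank(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical data: busier dispatchers answer fewer calls within 20 s.
occupation = [20, 25, 30, 35, 40, 45]   # occupation rate, %
qs20 = [98, 97, 95, 90, 84, 70]         # calls answered within 20 s, %
rho = spearman(occupation, qs20)        # → -1.0 (perfectly monotone here)
```

Because only ranks enter the computation, the statistic captures any monotone association, which suits a saturating relationship like occupation rate versus answer delay.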

  18. Beyond a Climate-Centric View of Plant Distribution: Edaphic Variables Add Value to Distribution Models

    PubMed Central

    Beauregard, Frieda; de Blois, Sylvie

    2014-01-01

Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models better able to distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions.
Our study identifies the potential for non-climate aspects of the environment to pose a constraint to range expansion under climate change. PMID:24658097

  19. Beyond a climate-centric view of plant distribution: edaphic variables add value to distribution models.

    PubMed

    Beauregard, Frieda; de Blois, Sylvie

    2014-01-01

Both climatic and edaphic conditions determine plant distribution; however, many species distribution models do not include edaphic variables, especially over large geographical extents. Using an exceptional database of vegetation plots (n = 4839) covering an extent of ∼55,000 km2, we tested whether the inclusion of fine-scale edaphic variables would improve model predictions of plant distribution compared to models using only climate predictors. We also tested how well these edaphic variables could predict distribution on their own, to evaluate the assumption that at large extents, distribution is governed largely by climate. We also hypothesized that the relative contribution of edaphic and climatic data would vary among species depending on their growth forms and biogeographical attributes within the study area. We modelled 128 native plant species from diverse taxa using four statistical model types and three sets of abiotic predictors: climate, edaphic, and edaphic-climate. Model predictive accuracy and variable importance were compared among these models and for species' characteristics describing growth form, range boundaries within the study area, and prevalence. For many species both the climate-only and edaphic-only models performed well; however, the edaphic-climate models generally performed best. The three sets of predictors differed in the spatial information provided about habitat suitability, with climate models able to distinguish range edges, but edaphic models better able to distinguish within-range variation. Model predictive accuracy was generally lower for species without a range boundary within the study area and for common species, but these effects were buffered by including both edaphic and climatic predictors. The relative importance of edaphic and climatic variables varied with growth forms, with trees being more related to climate whereas lower growth forms were more related to edaphic conditions.
Our study identifies the potential for non-climate aspects of the environment to pose a constraint to range expansion under climate change.

  20. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2004-12-01

Facial expression recognition in humans is performed by the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. A feedback mechanism may also play a role in this process. Implementing a model with a similar parallel structure and feedback mechanisms could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have a parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.

  1. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2005-01-01

Facial expression recognition in humans is performed by the amygdala, which uses parallel processing streams to identify expressions quickly and accurately. A feedback mechanism may also play a role in this process. Implementing a model with a similar parallel structure and feedback mechanisms could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network that did not have a parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.

  2. Mathematical model of an air-filled alpha stirling refrigerator

    NASA Astrophysics Data System (ADS)

    McFarlane, Patrick; Semperlotti, Fabio; Sen, Mihir

    2013-10-01

This work develops a mathematical model for an alpha Stirling refrigerator with air as the working fluid, which will be useful in optimizing the mechanical design of these machines. Two pistons cyclically compress and expand air while moving sinusoidally in separate chambers connected by a regenerator, thus creating a temperature difference across the system. A complete non-linear mathematical model of the machine, including air thermodynamics and heat transfer from the walls, as well as heat transfer and fluid resistance in the regenerator, is developed. Non-dimensional groups are derived, and the mathematical model is numerically solved. The heat transfer and work are found for both chambers, and the coefficient of performance of each chamber is calculated. Important design parameters are varied and their effect on refrigerator performance determined. This sensitivity analysis, which shows what the significant parameters are, is a useful tool for the design of practical Stirling refrigeration systems.
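The coefficient of performance computed in this record can be put in context with the ideal limit: a Stirling refrigeration cycle with a perfect regenerator reaches the Carnot COP between its two reservoir temperatures. A minimal sketch with illustrative temperatures (not values from the paper):

```python
def carnot_cop_refrigeration(t_cold_k: float, t_hot_k: float) -> float:
    """Upper bound on refrigerator COP between two reservoirs.

    An ideal Stirling cycle with a perfect (lossless) regenerator attains
    this limit; real machines fall below it due to regenerator and flow losses.
    """
    return t_cold_k / (t_hot_k - t_cold_k)

# Illustrative reservoir temperatures: 260 K cold space, 300 K ambient.
cop_limit = carnot_cop_refrigeration(260.0, 300.0)  # → 6.5
```

Comparing a simulated chamber COP against this bound is one quick sanity check on the kind of sensitivity analysis the abstract describes.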

  3. Integrated healthcare networks' performance: a growth curve modeling approach.

    PubMed

    Wan, Thomas T H; Wang, Bill B L

    2003-05-01

    This study examines the effects of integration on the performance ratings of the top 100 integrated healthcare networks (IHNs) in the United States. A strategic-contingency theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. To create a database for the panel study, the top 100 IHNs selected by the SMG Marketing Group in 1998 were followed up in 1999 and 2000. The data were merged with the Dorenfest data on information system integration. A growth curve model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' performance in 1998 and their subsequent rankings in the consecutive years were analyzed. IHNs' initial performance scores were positively influenced by network size, number of affiliated physicians and profit margin, and were negatively associated with average length of stay and technical efficiency. The continuing high performance, judged by maintaining higher performance scores, tended to be enhanced by the use of more managerial or executive decision-support systems. Future studies should include time-varying operational indicators to serve as predictors of network performance.

  4. Inter-comparison of time series models of lake levels predicted by several modeling strategies

    NASA Astrophysics Data System (ADS)

    Khatibi, R.; Ghorbani, M. A.; Naghipour, L.; Jothiprakash, V.; Fathima, T. A.; Fazelifard, M. H.

    2014-04-01

Five modeling strategies are employed to analyze water level time series of six lakes with different physical characteristics such as shape, size, altitude and range of variations. The models comprise chaos theory, Auto-Regressive Integrated Moving Average (ARIMA) - treated for seasonality and hence SARIMA, Artificial Neural Networks (ANN), Gene Expression Programming (GEP) and Multiple Linear Regression (MLR). Each is formulated on a different premise with different underlying assumptions. Chaos theory is elaborated in greater detail, as it is customary to identify the existence of chaotic signals by a number of techniques (e.g. average mutual information and false nearest neighbors), and future values are predicted using the Nonlinear Local Prediction (NLP) technique. This paper takes a critical view of past inter-comparison studies seeking a superior performance, against which it is reported that (i) the performances of all five modeling strategies vary from good to poor, hampering the recommendation of a clear-cut predictive model; (ii) the performances of the datasets of two cases are consistently better with all five modeling strategies; (iii) in other cases, their performances are poor but the results can still be fit-for-purpose; (iv) the simultaneous good performances of NLP and SARIMA pull their underlying assumptions to different ends, which cannot be reconciled. A number of arguments are presented including the culture of pluralism, according to which the various modeling strategies facilitate an insight into the data from different vantages.
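The Nonlinear Local Prediction technique named in this record is, at its core, analog forecasting in a delay-embedded state space. A minimal sketch (the embedding dimension, neighbour count, and synthetic "lake level" series are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def nlp_forecast(series, dim=3, k=3):
    """Nonlinear Local Prediction sketch.

    Embed the series in delay vectors of length `dim`, find the `k` nearest
    historical neighbours of the latest state, and predict the next value as
    the mean of those neighbours' successors.
    """
    x = np.asarray(series, dtype=float)
    # Delay vectors whose successor is known (the final state is the query).
    states = np.array([x[i:i + dim] for i in range(len(x) - dim)])
    successors = x[dim:]
    current = x[-dim:]
    dists = np.linalg.norm(states - current, axis=1)
    nearest = np.argsort(dists)[:k]
    return successors[nearest].mean()

# A noiseless periodic "lake level" should be predicted almost exactly,
# since exact analogs of the current state recur every period.
t = np.arange(200)
level = np.sin(2 * np.pi * t / 25)
pred = nlp_forecast(level, dim=3, k=3)
true_next = np.sin(2 * np.pi * 200 / 25)
```

Real lake levels are noisy and possibly chaotic, so the embedding dimension and neighbourhood size would normally be chosen with the diagnostics the abstract mentions (average mutual information, false nearest neighbors).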

  5. Modelling invasion for a habitat generalist and a specialist plant species

    USGS Publications Warehouse

    Evangelista, P.H.; Kumar, S.; Stohlgren, T.J.; Jarnevich, C.S.; Crall, A.W.; Norman, J. B.; Barnett, D.T.

    2008-01-01

Predicting suitable habitat and the potential distribution of invasive species is a high priority for resource managers and systems ecologists. Most models are designed to identify habitat characteristics that define the ecological niche of a species with little consideration to individual species' traits. We tested five commonly used modelling methods on two invasive plant species, the habitat generalist Bromus tectorum and habitat specialist Tamarix chinensis, to compare model performances, evaluate predictability, and relate results to distribution traits associated with each species. Most of the tested models performed similarly for each species; however, the generalist species proved to be more difficult to predict than the specialist species. The highest area under the receiver-operating characteristic curve values with independent validation data sets of B. tectorum and T. chinensis were 0.503 and 0.885, respectively. Similarly, a confusion matrix for B. tectorum had the highest overall accuracy of 55%, while the overall accuracy for T. chinensis was 85%. Models for the generalist species had varying performances, poor evaluations, and inconsistent results. This may be a result of a generalist's capability to persist in a wide range of environmental conditions that are not easily defined by the data, independent variables or model design. Models for the specialist species had consistently strong performances, high evaluations, and similar results among different model applications. This is likely a consequence of the specialist's requirement for explicit environmental resources and ecological barriers that are easily defined by predictive models. Although defining new invaders as generalist or specialist species can be challenging, model performances and evaluations may provide valuable information on a species' potential invasiveness.
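The AUC values in this record (0.503 for the generalist, 0.885 for the specialist) can be computed from suitability scores via the rank-sum identity; a minimal sketch with made-up scores illustrating the two regimes:

```python
def auc_from_scores(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney identity:

    the probability that a randomly chosen presence site scores higher
    than a randomly chosen absence site (ties count half).
    """
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical habitat-suitability scores: a well-separated "specialist-like"
# case versus an overlapping "generalist-like" case.
specialist = auc_from_scores([0.9, 0.8, 0.85], [0.2, 0.3, 0.1])   # → 1.0
generalist = auc_from_scores([0.6, 0.4, 0.5], [0.5, 0.45, 0.55])
```

An AUC near 0.5, as for B. tectorum here, means presences and absences are essentially indistinguishable by the model's scores, which matches the abstract's interpretation of generalist species.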

  6. Validation of a Fast-Response Urban Micrometeorological Model to Assess the Performance of Urban Heat Island Mitigation Strategies

    NASA Astrophysics Data System (ADS)

    Nadeau, D.; Girard, P.; Overby, M.; Pardyjak, E.; Stoll, R., II; Willemsen, P.; Bailey, B.; Parlange, M. B.

    2015-12-01

Urban heat islands (UHI) are a real threat in many cities worldwide and mitigation measures have become a central component of urban planning strategies. Even within a city, the causes of UHI vary from one neighborhood to another, mostly due to the spatial variability in surface thermal properties, building geometry, anthropogenic heat flux releases and vegetation cover. As a result, the performance of UHI mitigation measures also varies in space. Hence, there is a need to develop a tool to quantify the efficiency of UHI mitigation measures at the neighborhood scale. The objective of this ongoing study is to validate the fast-response micrometeorological model QUIC EnvSim (QES). This model can provide all information required for UHI studies with a fine spatial resolution (up to 0.5 m) and short computation time. QES combines QUIC, a CFD-based wind solver and dispersion model, and EnvSim, composed of a radiation model, a land-surface model and a turbulent transport model. Here, high-resolution (1 m) simulations are run over a subset of the École Polytechnique Fédérale de Lausanne (EPFL) campus including complex buildings, various surface properties and vegetation. For nearly five months in 2006-07, a dense network of meteorological observations (92 weather stations over 0.1 km2) was deployed over the campus and these unique data are used here as a validation dataset. We present validation results for different test cases (e.g., sunny vs cloudy days, different incoming wind speeds and directions) and explore the effect of a few UHI mitigation strategies on the spatial distribution of near-surface air temperatures. Preliminary results suggest that QES may be a valuable tool in decision-making regarding adaptation of urban planning to UHI.

  7. A global search inversion for earthquake kinematic rupture history: Application to the 2000 western Tottori, Japan earthquake

    USGS Publications Warehouse

    Piatanesi, A.; Cirella, A.; Spudich, P.; Cocco, M.

    2007-01-01

We present a two-stage nonlinear technique to invert strong motion records and geodetic data to retrieve the rupture history of an earthquake on a finite fault. To account for the actual rupture complexity, the fault parameters (peak slip velocity, slip direction, rupture time and risetime) are spatially variable. The unknown parameters are given at the nodes of the subfaults, whereas the parameters within a subfault are allowed to vary through a bilinear interpolation of the nodal values. The forward modeling is performed with a discrete wave number technique, whose Green's functions include the complete response of the vertically varying Earth structure. During the first stage, an algorithm based on heat-bath simulated annealing generates an ensemble of models that efficiently samples the good data-fitting regions of parameter space. In the second stage (appraisal), the algorithm performs a statistical analysis of the model ensemble and computes a weighted mean model and its standard deviation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. We present some synthetic tests to show the effectiveness of the method and its robustness to uncertainty in the adopted crustal model. Finally, we apply this inverse technique to the well-recorded 2000 western Tottori, Japan, earthquake (Mw 6.6); we confirm that the rupture process is characterized by large slip (3-4 m) at very shallow depths but, unlike previous studies, we imaged a new slip patch (2-2.5 m) located deeper, between 14 and 18 km depth. Copyright 2007 by the American Geophysical Union.

  8. Probing the free energy landscape of the FBP28WW domain using multiple techniques.

    PubMed

    Periole, Xavier; Allen, Lucy R; Tamiola, Kamil; Mark, Alan E; Paci, Emanuele

    2009-05-01

The free-energy landscape of a small protein, the FBP28 WW domain, has been explored using molecular dynamics (MD) simulations with alternative descriptions of the molecule. The molecular models used range from coarse-grained to all-atom with either an implicit or explicit treatment of the solvent. Sampling of conformation space was performed using both conventional and temperature-replica exchange MD simulations. Experimental chemical shifts and NOEs were used to validate the simulations, and experimental phi values were used both for validation and as restraints. This combination of different approaches has provided insight into the free energy landscape and barriers encountered by the protein during folding and enabled the characterization of native, denatured and transition states which are compatible with the available experimental data. All the molecular models used stabilize well-defined native and denatured basins; however, the degree of agreement with the available experimental data varies. While the most detailed, explicit solvent model predicts the data reasonably accurately, it does not fold despite a simulation time 10 times that of the experimental folding time. The less detailed models performed poorly relative to the explicit solvent model: an implicit solvent model stabilizes a ground state which differs from the experimental native state, and a structure-based model underestimates the size of the barrier between the two states. The use of experimental phi values, both as restraints and to extract structures from unfolding simulations, results in conformations which, although not necessarily true transition states, appear to share the geometrical characteristics of transition state structures. In addition to characterizing the native, transition and denatured states of this particular system, the advantages and limitations of using varying levels of representation are discussed. 2008 Wiley Periodicals, Inc.

  9. Size invariance does not hold for connectionist models: dangers of using a toy model.

    PubMed

    Yamaguchi, Makoto

    2004-03-01

Connectionist models with the backpropagation learning rule are known to have a serious problem called catastrophic interference or forgetting, although several reports have shown that the interference can be relatively mild with orthogonal inputs. The present study investigated the extent of interference using orthogonal inputs with varying network sizes. One would naturally assume that results obtained from small networks could be extrapolated to larger networks. Unexpectedly, the use of small networks was shown to worsen performance. This result has important implications for interpreting some data in the literature and cautions against the use of a toy model. Copyright 2004 Lippincott Williams & Wilkins.
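Why orthogonal inputs can make interference mild is easiest to see in a one-layer linear network trained with the delta rule, where updates for one pattern provably cannot disturb an orthogonal pattern. This is a sketch of that textbook special case only; the study above concerns multilayer backpropagation networks, whose shared hidden units break this guarantee (all patterns and sizes below are hypothetical):

```python
import numpy as np

def train_delta(W, x, t, lr=0.5, epochs=50):
    """Delta-rule updates on a single input pattern (one-layer linear net)."""
    for _ in range(epochs):
        W = W + lr * np.outer(t - W @ x, x)
    return W

W = np.zeros((2, 4))

# Task A and task B use ORTHOGONAL input patterns (toy one-hot vectors).
xa, xb = np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0, 0.0])
ta, tb = np.array([1.0, 0.0]), np.array([0.0, 1.0])

W = train_delta(W, xa, ta)     # learn task A
out_a_before = W @ xa
W = train_delta(W, xb, tb)     # then learn task B
out_a_after = W @ xa           # task A's weights were never touched

# Each update is an outer product with the input, so task-B updates only
# modify weight columns where xb is non-zero; xa reads other columns.
interference = float(np.linalg.norm(out_a_after - out_a_before))  # → 0.0
```

In a multilayer network the hidden representations of the two tasks overlap even when the raw inputs are orthogonal, which is precisely why the network-size effects reported above are worth measuring rather than assuming.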

  10. Evaluation of internal noise methods for Hotelling observers

    NASA Astrophysics Data System (ADS)

    Zhang, Yani; Pham, Binh T.; Eckstein, Miguel P.

    2005-04-01

Including internal noise in computer model observers to degrade model observer performance to human levels is a common method to allow for quantitative comparisons of human and model performance. In this paper, we studied two different types of methods for injecting internal noise into Hotelling model observers. The first method adds internal noise to the output of the individual channels: a) independent non-uniform channel noise, b) independent uniform channel noise. The second method adds internal noise to the decision variable arising from the combination of channel responses: a) internal noise standard deviation proportional to the decision variable's standard deviation due to the external noise, b) internal noise standard deviation proportional to the decision variable's variance caused by the external noise. We tested the square window Hotelling observer (HO), channelized Hotelling observer (CHO), and Laguerre-Gauss Hotelling observer (LGHO). The studied task was detection of a filling defect of varying size/shape in one of four simulated arterial segment locations with real x-ray angiography backgrounds. Results show that the internal noise method that leads to the best prediction of human performance differs across the studied model observers. The CHO model best predicts human observer performance with the channel internal noise. The HO and LGHO best predict human observer performance with the decision variable internal noise. These results might help explain why previous studies have found different results on the ability of each Hotelling model to predict human performance. Finally, the present results might guide researchers in the choice of method to include internal noise in their Hotelling models.
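The decision-variable injection method in this record can be sketched with a Monte Carlo toy: add zero-mean Gaussian internal noise to the observer's decision variable and watch two-alternative forced-choice performance drop toward human levels. The means, noise levels, and trial counts below are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy external-noise-limited decision variables of a model observer (d' = 1).
n_trials = 20000
signal_resp = rng.normal(1.0, 1.0, n_trials)   # decision variable, signal interval
noise_resp = rng.normal(0.0, 1.0, n_trials)    # decision variable, noise interval

def pc_2afc(sig, noi, internal_sd):
    """Proportion correct in 2AFC after adding zero-mean internal noise
    to each interval's decision variable (internal SD set proportional to
    the external-noise SD, here 1.0)."""
    sig = sig + rng.normal(0.0, internal_sd, len(sig))
    noi = noi + rng.normal(0.0, internal_sd, len(noi))
    return float(np.mean(sig > noi))

pc_ideal = pc_2afc(signal_resp, noise_resp, 0.0)     # no internal noise
pc_degraded = pc_2afc(signal_resp, noise_resp, 1.0)  # internal SD = external SD
```

Sweeping `internal_sd` until the model's proportion correct matches a measured human value is the calibration step such studies perform before comparing observers.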

  11. A rigorous test of the accuracy of USGS digital elevation models in forested areas of Oregon and Washington.

    Treesearch

    Ward W. Carson; Stephen E. Reutebuch

    1997-01-01

    A procedure for performing a rigorous test of elevational accuracy of DEMs using independent ground coordinate data digitized photogrammetrically from aerial photography is presented. The accuracy of a sample set of 23 DEMs covering National Forests in Oregon and Washington was evaluated. Accuracy varied considerably between eastern and western parts of Oregon and...

  12. Applying Item Response Theory to the Development of a Screening Adaptation of the Goldman-Fristoe Test of Articulation-Second Edition

    ERIC Educational Resources Information Center

    Brackenbury, Tim; Zickar, Michael J.; Munson, Benjamin; Storkel, Holly L.

    2017-01-01

    Purpose: Item response theory (IRT) is a psychometric approach to measurement that uses latent trait abilities (e.g., speech sound production skills) to model performance on individual items that vary by difficulty and discrimination. An IRT analysis was applied to preschoolers' productions of the words on the Goldman-Fristoe Test of…
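The IRT framing in this record, items that vary by difficulty and discrimination, corresponds to the standard two-parameter logistic model; a minimal sketch with illustrative parameter values:

```python
import math

def p_correct_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic (2PL) IRT model.

    Probability that an examinee with latent ability `theta` responds
    correctly to an item with discrimination `a` and difficulty `b`.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5 regardless of discrimination;
# higher `a` makes the curve steeper around the item's difficulty.
p_at_difficulty = p_correct_2pl(0.0, 1.5, 0.0)   # → 0.5
p_easy_item = p_correct_2pl(0.0, 1.5, -2.0)      # item well below ability
```

For a screener, items with high discrimination near the ability cutoff carry the most information, which is the property an IRT analysis of an articulation test would exploit.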

13. Sources, Properties, Aging, and Anthropogenic Influences on OA and SOA over the Southeast US and the Amazon during SOAS, DC3, SEAC4RS, and GoAmazon

    EPA Science Inventory

    The SE US and the Amazon have large sources of biogenic VOCs, varying anthropogenic pollution impacts, and often poor organic aerosol (OA) model performance. Recent results on the sources, properties, aging, and impact of anthropogenic pollution on OA and secondary OA (SOA) over ...

  14. Flow Physics of Synthetic Jet Interactions on a Sweptback Model with a Control Surface

    NASA Astrophysics Data System (ADS)

    Monastero, Marianne; Amitay, Michael

    2016-11-01

Active flow control using synthetic jets can be used on aerodynamic surfaces to improve performance and increase fuel efficiency. The flowfield resulting from the interaction of the jets with a separated crossflow with a spanwise component must be understood to determine actuator spacing for aircraft integration. The current and previous work showed that adjacent synthetic jets located upstream of a control surface hingeline on a sweptback model interact with each other under certain conditions. Whether these interactions are constructive or destructive is dependent on the spanwise spacing of the jets, the severity of separation over the control surface, and the magnitude of the spanwise flow. Measuring and understanding the detailed flow physics of the flow structures emanating from the synthetic jet orifices, and their interactions with adjacent jets at varying spacings, is the focus of this work. Wind tunnel experiments were conducted at the Rensselaer Polytechnic Institute Subsonic Wind Tunnel using stereo particle image velocimetry (SPIV) and pressure measurements to study the effect that varying the spanwise spacing has on the overall performance. Initial SPIV data gave insight into defining and understanding the mechanisms behind beneficial or detrimental jet interactions.

  15. A Framework to Analyze the Performance of Load Balancing Schemes for Ensembles of Stochastic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Tae-Hyuk; Sandu, Adrian; Watson, Layne T.

    2015-08-01

Ensembles of simulations are employed to estimate the statistics of possible future states of a system, and are widely used in important applications such as climate change and biological modeling. Ensembles of runs can naturally be executed in parallel. However, when the CPU times of individual simulations vary considerably, a simple strategy of assigning an equal number of tasks per processor can lead to serious work imbalances and low parallel efficiency. This paper presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms for ensembles of simulations where many tasks are mapped onto each processor, and where the individual compute times vary considerably among tasks. Four load balancing strategies are discussed: most-dividing, all-redistribution, random-polling, and neighbor-redistribution. Simulation results with a stochastic budding yeast cell cycle model are consistent with the theoretical analysis. It is especially significant that there is a provable global decrease in load imbalance for the local rebalancing algorithms, which are preferred due to scalability concerns with the global rebalancing algorithms. The overall simulation time is reduced by up to 25%, and the total processor idle time by 85%.
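The work-imbalance problem described in this record can be illustrated with a toy scheduler comparison: static equal-count assignment versus an idealized dynamic scheme that always feeds the idlest processor (a simplified stand-in for the record's random-polling and redistribution strategies; the task-time distribution and counts are made up):

```python
import heapq
import random

def makespan_static(tasks, p):
    """Assign an equal count of tasks per processor (round-robin)."""
    loads = [0.0] * p
    for i, t in enumerate(tasks):
        loads[i % p] += t
    return max(loads)

def makespan_dynamic(tasks, p):
    """Greedy dynamic balancing: each task goes to the currently least
    loaded processor (an idealized proxy for work-stealing schemes)."""
    heap = [0.0] * p
    heapq.heapify(heap)
    for t in tasks:
        heapq.heappush(heap, heapq.heappop(heap) + t)
    return max(heap)

random.seed(0)
# Highly variable per-task compute times, as in ensembles of stochastic runs.
tasks = [random.expovariate(1.0) for _ in range(400)]
static = makespan_static(tasks, 8)
dynamic = makespan_dynamic(tasks, 8)
```

With heavy-tailed task times the dynamic makespan sits close to the perfect-balance bound `sum(tasks)/p`, while equal-count assignment leaves processors idle, the same qualitative gap the paper quantifies probabilistically.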

  16. Modeling of diesel/CNG mixing in a pre-injection chamber

    NASA Astrophysics Data System (ADS)

    Abdul-Wahhab, H. A.; Aziz, A. R. A.; Al-Kayiem, H. H.; Nasif, M. S.

    2015-12-01

    Diesel engine performance can be improved by adding combustible gases to the liquid diesel. In this paper, the propagation of a two-phase liquid-gas fuel mixture into a pre-mixer is investigated numerically by computational fluid dynamics simulation. CNG was injected into the diesel within a cylindrical conduit operating as a pre-mixer. Four diesel-CNG injection models were simulated using the commercial ANSYS FLUENT software. Two CNG jet diameters, 1 and 2 mm, were used, and the diesel pipe diameter was 9 mm. Two configurations were considered for the gas injection: in the first, the gas was injected from one side, while in the second, two side entries were used. The CNG-to-diesel pressure ratio was varied between 1.5 and 3, and the CNG-to-diesel mass flow ratio between 0.7 and 0.9. The results demonstrate that using double-sided injection increased the homogeneity of the mixture due to the swirl and acceleration of the mixture. Mass fraction, in both cases, was found to increase as the mixture flows towards the exit. As a result, this enhanced mixing is likely to lead to improvement in the combustion performance.

  17. Application of a three-dimensional hydrodynamic model to the Himmerfjärden, Baltic Sea

    NASA Astrophysics Data System (ADS)

    Sokolov, Alexander

    2014-05-01

    Himmerfjärden is a coastal fjord-like bay situated in the north-western part of the Baltic Sea. The fjord has a mean depth of 17 m and a maximum depth of 52 m. The water is brackish (6 psu) with small salinity fluctuation (±2 psu). A sewage treatment plant, which serves about 300 000 people, discharges into the inner part of Himmerfjärden. This area is the subject of a long-term monitoring program. We are planning to develop a publicly available modelling system for this area, which will perform short-term forecast predictions of pertinent parameters (e.g., water levels, currents, salinity, temperature) and disseminate them to users. A key component of the system is a three-dimensional hydrodynamic model. The open source Delft3D Flow system (http://www.deltaressystems.com/hydro) has been applied to model the Himmerfjärden area. Two different curvilinear grids were used to approximate the modelling domain (25 km × 50 km × 60 m). One grid has low horizontal resolution (cell size varies from 250 to 450 m) to perform long-term numerical experiments (modelling period of several months), while another grid has higher resolution (cell size varies from 120 to 250 m) to model short-term situations. In the vertical direction, both z-level (50 layers) and sigma-coordinate (20 layers) discretisations were used. Modelling results obtained with different horizontal resolutions and vertical discretisations will be presented. This model will be a part of the operational system which provides automated integration of data streams from several information sources: meteorological forecast based on the HIRLAM model from the Finnish Meteorological Institute (https://en.ilmatieteenlaitos.fi/open-data), oceanographic forecast based on the HIROMB-BOOS Model developed within the Baltic community and provided by the MyOcean Project (http://www.myocean.eu), riverine discharge from the HYPE model provided by the Swedish Meteorological and Hydrological Institute (http://vattenwebb.smhi.se/modelarea/).

  18. Integrated model reference adaptive control and time-varying angular rate estimation for micro-machined gyroscopes

    NASA Astrophysics Data System (ADS)

    Tsai, Nan-Chyuan; Sue, Chung-Yang

    2010-02-01

    Owing to imposed but undesired accelerations such as quadrature error and cross-axis perturbation, a micro-machined gyroscope will not unconditionally remain at its resonant mode. Once the preset resonance is not sustained, the performance of the micro-gyroscope is accordingly degraded. In this article, a direct model reference adaptive control loop integrated with a modified disturbance estimating observer (MDEO) is proposed to guarantee resonant oscillation at the drive mode and counterbalance the undesired disturbance mainly caused by quadrature error and cross-axis perturbation. The controller parameters are updated on-line according to the dynamic error between the MDEO output and the expected response. In addition, Lyapunov stability theory is employed to examine the stability of the closed-loop control system. Finally, the ability of the gyroscope to detect and measure an exerted time-varying angular rate is verified by intensive simulations.
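    The flavour of model reference adaptation can be conveyed with the classic MIT-rule gain example below. This is a generic textbook sketch with arbitrary gains, not the paper's MRAC/MDEO design:

```python
# minimal MIT-rule adaptation of a feedforward gain (illustrative values)
k_true = 2.0      # unknown plant gain
k_model = 1.0     # reference-model gain
theta, gamma, dt = 0.0, 0.5, 0.01
for _ in range(20000):
    r = 1.0                       # constant reference input
    ym = k_model * r              # reference-model output
    y = k_true * theta * r        # plant output under the current gain
    e = y - ym                    # tracking error
    theta -= gamma * ym * e * dt  # MIT rule: gradient descent on e**2
print(round(theta, 3))
```

    The adapted gain converges to k_model/k_true, at which point the plant output tracks the reference model exactly.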

  19. EXPERIMENTAL AND ANALYTICAL STUDIES OF REFLECTOR CONTROL FOR THE ADVANCED ENGINEERING TEST REACTOR. PART A. EXPERIMENTAL STUDIES WITH THE REFLECTOR CONTROL SYSTEM MODEL. PART B. ANALYTICAL STUDIES OF REFLECTOR CONTROL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertelson, P.C.; Francis, T.L.

    1959-10-21

    Studies of reflector control for the Advanced Engineering Test Reactor were made. The performance of various parts of the reflector control system model, such as the safety reflector and the water jet eductor, boric acid injection, and demineralizer systems, is discussed. The experimental methods and results obtained are discussed. Four reflector control schemes were studied: a single-region reflector, a three-region reflector, two separate reflectors, and two connected reflectors. Calculations were made of shim and safety reflector worth for a variety of parameters. Safety reflector thickness was varied from 7.75 to 0 inches, with and without boron. Boric acid concentration was varied from 100 to 2% of saturation in the shim reflectors. Neutron flux plots are presented. (C.J.G.)

  20. Transonic wind tunnel tests of A.015 scale space shuttle orbiter model, volume 1

    NASA Technical Reports Server (NTRS)

    Struzynski, N. A.

    1975-01-01

    Transonic wind tunnel tests were run on a 0.015 scale model of the Space Shuttle Orbiter Vehicle in an eight-foot tunnel during August 1975. The purpose of the program was to obtain basic shuttle aerodynamic data through a full range of elevon and aileron deflections, verification of data obtained at other facilities, and effects of Reynolds number. The first part of a discussion of test procedures and results, in both tabular and graphical form, is presented. Tests were performed at Mach numbers from 0.35 to 1.20 and at Reynolds numbers from 3.5 million to 8.2 million per foot. The angle of attack was varied from -1 to +20 degrees at sideslip angles of -2, 0, and +2 degrees. Sideslip was varied from -6 to +8 degrees at constant angles of attack from 0 to +20 degrees. Various aileron and elevon settings were tested at various angles of attack.

  1. Transonic wind tunnel tests of a .015 scale space shuttle orbiter model, volume 2

    NASA Technical Reports Server (NTRS)

    Struzynski, N. A.

    1975-01-01

    Transonic wind tunnel tests were run on a 0.015 scale model of the Space Shuttle Orbiter Vehicle in an eight-foot tunnel during August 1975. The purpose of the program was to obtain basic shuttle aerodynamic data through a full range of elevon and aileron deflections, verification of data obtained at other facilities, and effects of Reynolds number. The second part of a discussion of test procedures and results, in both tabular and graphical form, is presented. Tests were performed at Mach numbers from 0.35 to 1.20 and at Reynolds numbers from 3.5 million to 8.2 million per foot. The angle of attack was varied from -2 to +20 degrees at sideslip angles of -2, 0, and +2 degrees. Sideslip was varied from -6 to +8 degrees at constant angles of attack from 0 to +20 degrees. Various aileron and elevon settings were tested at various angles of attack.

  2. Modeling Interdependent and Periodic Real-World Action Sequences

    PubMed Central

    Kurashima, Takeshi; Althoff, Tim; Leskovec, Jure

    2018-01-01

    Mobile health applications, including those that track activities such as exercise, sleep, and diet, are becoming widely used. Accurately predicting human actions in the real world is essential for targeted recommendations that could improve our health and for personalization of these applications. However, making such predictions is extremely difficult due to the complexities of human behavior, which consists of a large number of potential actions that vary over time, depend on each other, and are periodic. Previous work has not jointly modeled these dynamics and has largely focused on item consumption patterns instead of broader types of behaviors such as eating, commuting or exercising. In this work, we develop a novel statistical model, called TIPAS, for Time-varying, Interdependent, and Periodic Action Sequences. Our approach is based on personalized, multivariate temporal point processes that model time-varying action propensities through a mixture of Gaussian intensities. Our model captures short-term and long-term periodic interdependencies between actions through Hawkes process-based self-excitations. We evaluate our approach on two activity logging datasets comprising 12 million real-world actions (e.g., eating, sleep, and exercise) taken by 20 thousand users over 17 months. We demonstrate that our approach allows us to make successful predictions of future user actions and their timing. Specifically, TIPAS improves predictions of actions, and their timing, over existing methods across multiple datasets by up to 156%, and up to 37%, respectively. Performance improvements are particularly large for relatively rare and periodic actions such as walking and biking, improving over baselines by up to 256%. This demonstrates that explicit modeling of dependencies and periodicities in real-world behavior enables successful predictions of future actions, with implications for modeling human behavior, app personalization, and targeting of health interventions. 
PMID:29780977
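    The self-excitation idea behind such point-process models can be sketched with a scalar Hawkes intensity; the parameters below are arbitrary illustrations, not TIPAS's fitted values:

```python
import math

def hawkes_intensity(t, events, mu=0.1, alpha=0.5, beta=1.0):
    # self-exciting intensity: a baseline rate mu plus exponentially
    # decaying excitation contributed by each past event
    return mu + sum(alpha * math.exp(-beta * (t - s)) for s in events if s < t)

events = [1.0, 1.2, 5.0]            # past action times
print(round(hawkes_intensity(1.5, events), 3))
```

    Right after a burst of actions the intensity is elevated, making another action more likely; as time passes the excitation decays back toward the baseline.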

  3. Dose-dependent model of caffeine effects on human vigilance during total sleep deprivation.

    PubMed

    Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Wesensten, Nancy J; Kamimori, Gary H; Balkin, Thomas J; Reifman, Jaques

    2014-10-07

    Caffeine is the most widely consumed stimulant to counter sleep-loss effects. While the pharmacokinetics of caffeine in the body is well-understood, its alertness-restoring effects are still not well characterized. In fact, mathematical models capable of predicting the effects of varying doses of caffeine on objective measures of vigilance are not available. In this paper, we describe a phenomenological model of the dose-dependent effects of caffeine on psychomotor vigilance task (PVT) performance of sleep-deprived subjects. We used the two-process model of sleep regulation to quantify performance during sleep loss in the absence of caffeine and a dose-dependent multiplier factor derived from the Hill equation to model the effects of single and repeated caffeine doses. We developed and validated the model fits and predictions on PVT lapse (number of reaction times exceeding 500 ms) data from two separate laboratory studies. At the population-average level, the model captured the effects of a range of caffeine doses (50-300 mg), yielding up to a 90% improvement over the two-process model. Individual-specific caffeine models, on average, predicted the effects up to 23% better than population-average caffeine models. The proposed model serves as a useful tool for predicting the dose-dependent effects of caffeine on the PVT performance of sleep-deprived subjects and, therefore, can be used for determining caffeine doses that optimize the timing and duration of peak performance. Published by Elsevier Ltd.
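    A hedged sketch of the modelling idea: a one-compartment pharmacokinetic profile feeding a Hill-equation multiplier that scales a baseline lapse prediction. All parameter values below are illustrative assumptions, not the paper's fitted estimates:

```python
import math

def caffeine_conc(dose_mg, t_h, ka=2.0, ke=0.2):
    # one-compartment pharmacokinetics with absorption rate ka and
    # elimination rate ke (illustrative values, units 1/h)
    return dose_mg * ka / (ka - ke) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

def hill_multiplier(c, c50=100.0, n=2.0):
    # Hill-equation factor in (0, 1]: the higher the concentration,
    # the more the predicted lapse count is scaled down
    return 1.0 / (1.0 + (c / c50) ** n)

# predicted lapses = baseline prediction (e.g. two-process model) x multiplier
baseline_lapses = 12.0
for t in (1, 4, 8):
    m = hill_multiplier(caffeine_conc(200, t))
    print(t, round(baseline_lapses * m, 2))
```

    The multiplier is strongest shortly after the dose and fades as caffeine is eliminated, which is why repeated dosing appears in the paper's model.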

  4. Profile local linear estimation of generalized semiparametric regression model for longitudinal data.

    PubMed

    Sun, Yanqing; Sun, Liuquan; Zhou, Jie

    2013-07-01

    This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without specifically modelling such dependence. A [Formula: see text]-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.

  5. Numerical Solutions for the CAWAPI Configuration on Structured Grids at NASA LaRC, United States. Chapter 7

    NASA Technical Reports Server (NTRS)

    Elmiligui, Alaa A.; Abdol-Hamid, Khaled S.; Massey, Steven J.

    2009-01-01

    In this chapter, numerical simulations of the flow around the F-16XL are performed as a contribution to the Cranked Arrow Wing Aerodynamic Project International (CAWAPI) using the PAB3D CFD code. Two turbulence models are used in the calculations: a standard k-epsilon model, and the Shih-Zhu-Lumley (SZL) algebraic stress model. Seven flight conditions are simulated for the flow around the F-16XL, where the free stream Mach number varies from 0.242 to 0.97. The angle of attack varies from 0 deg to 20 deg. Computational results, surface static pressure, boundary layer velocity profiles, and skin friction are presented and compared with flight data. Numerical results are generally in good agreement with flight data, considering that only one grid resolution is utilized for the different flight conditions simulated in this study. The Algebraic Stress Model (ASM) results are closer to the flight data than the k-epsilon model results. The ASM predicted a stronger primary vortex; however, the origin of the vortex and its footprint are approximately the same as in the k-epsilon predictions.

  6. Computer modeling of high-voltage solar array experiment using the NASCAP/LEO (NASA Charging Analyzer Program/Low Earth Orbit) computer code

    NASA Astrophysics Data System (ADS)

    Reichl, Karl O., Jr.

    1987-06-01

    The relationship between the Interactions Measurement Payload for Shuttle (IMPS) flight experiment and the low Earth orbit plasma environment is discussed. Two interactions (parasitic current loss and electrostatic discharge on the array) may be detrimental to mission effectiveness. They result from the spacecraft's electrical potentials floating relative to plasma ground to achieve a charge flow equilibrium into the spacecraft. The floating potentials were driven by external biases applied to a solar array module of the Photovoltaic Array Space Power (PASP) experiment aboard the IMPS test pallet. The modeling was performed using the NASA Charging Analyzer Program/Low Earth Orbit (NASCAP/LEO) computer code which calculates the potentials and current collection of high-voltage objects in low Earth orbit. Models are developed by specifying the spacecraft, environment, and orbital parameters. Eight IMPS models were developed by varying the array's bias voltage and altering its orientation relative to its motion. The code modeled a typical low Earth equatorial orbit. NASCAP/LEO calculated a wide variety of possible floating potential and current collection scenarios. These varied directly with both the array bias voltage and with the vehicle's orbital orientation.

  7. Model benchmarking and reference signals for angled-beam shear wave ultrasonic nondestructive evaluation (NDE) inspections

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Hopkins, Deborah; Datuin, Marvin; Warchol, Mark; Warchol, Lyudmila; Forsyth, David S.; Buynak, Charlie; Lindgren, Eric A.

    2017-02-01

    For model benchmark studies, the accuracy of the model is typically evaluated based on the change in response relative to a selected reference signal. The use of a side drilled hole (SDH) in a plate was investigated as a reference signal for angled beam shear wave inspection for aircraft structure inspections of fastener sites. Systematic studies were performed with varying SDH depth and size, and varying the ultrasonic probe frequency, focal depth, and probe height. Increased error was observed with the simulation of angled shear wave beams in the near-field. Even more significant, asymmetry in real probes and the inherent sensitivity of signals in the near-field to subtle test conditions were found to provide a greater challenge with achieving model agreement. To achieve quality model benchmark results for this problem, it is critical to carefully align the probe with the part geometry, to verify symmetry in probe response, and ideally avoid using reference signals from the near-field response. Suggested reference signals for angled beam shear wave inspections include using the 'through hole' corner specular reflection signal and the 'full skip' signal off of the far wall from the side drilled hole.

  8. Efficient multidimensional regularization for Volterra series estimation

    NASA Astrophysics Data System (ADS)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid excessive memory needs in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
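    The linear (first-kernel) special case of this approach, regularized impulse-response estimation, can be sketched as a penalized least-squares fit. The plain ridge penalty below is a simplified stand-in for the structured smoothness/decay priors used in the regularization literature:

```python
import numpy as np

def regularized_fir(u, y, m, lam=1e-2):
    # build the convolution regressor matrix for an m-tap FIR model
    N = len(u)
    Phi = np.zeros((N, m))
    for k in range(m):
        Phi[k:, k] = u[:N - k]
    # ridge (Tikhonov) penalty: a simple stand-in for the smoothness
    # priors used in regularized impulse-response / Volterra estimation
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)

rng = np.random.default_rng(0)
g_true = np.array([0.5, 0.3, 0.1])            # assumed test kernel
u = rng.standard_normal(200)
y = np.convolve(u, g_true)[:200] + 0.01 * rng.standard_normal(200)
g_hat = regularized_fir(u, y, 3)
print(np.round(g_hat, 2))
```

    The same construction extends to higher-order Volterra kernels by adding product-of-input regressors, at the cost of many more parameters, which is exactly where regularization pays off.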

  9. Prediction of blood pressure and blood flow in stenosed renal arteries using CFD

    NASA Astrophysics Data System (ADS)

    Jhunjhunwala, Pooja; Padole, P. M.; Thombre, S. B.; Sane, Atul

    2018-04-01

    In the present work an attempt is made to develop a diagnostic tool for renal artery stenosis (RAS) which is inexpensive and in-vitro. To analyse the effects of increasing stenosis severity on hypertension and blood flow, haemodynamic parameters are studied through numerical simulation. A total of 16 stenosed models with stenosis severity varying from 0 to 97.11% are assessed numerically. Blood is modelled as a shear-thinning, non-Newtonian fluid using the Carreau model. Computational Fluid Dynamics (CFD) analysis is carried out to compute the values of flow parameters such as the maximum velocity and maximum pressure attained by blood due to stenosis under pulsatile flow. These values are further used to compute the increase in blood pressure and the decrease in available blood flow to the kidney. The computed available blood flow and secondary hypertension for varying extents of stenosis are mapped by a curve fitting technique using MATLAB and a mathematical model is developed. Based on these mathematical models, a quantification tool is developed for tentative prediction of the probable availability of blood flow to the kidney and the severity of stenosis if secondary hypertension is known.
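    The Carreau constitutive relation used for the blood rheology can be written down directly. The parameter values below are ones commonly quoted for blood in the CFD literature, assumed here rather than taken from this paper:

```python
def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568):
    # Carreau model: viscosity interpolates between the zero-shear value mu0
    # and the infinite-shear value mu_inf (Pa.s); lam (s) and n are the
    # relaxation time and power-law index (values assumed, typical for blood)
    return mu_inf + (mu0 - mu_inf) * (1 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

# shear-thinning behaviour: viscosity falls as the shear rate rises
for g in (0.1, 10.0, 1000.0):
    print(g, round(carreau_viscosity(g), 5))
```

    Capturing this shear-thinning matters in stenosed geometries, where the shear rate spans several orders of magnitude between the recirculation zones and the jet through the constriction.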

  10. Surface functional groups in capacitive deionization with porous carbon electrodes

    NASA Astrophysics Data System (ADS)

    Hemmatifar, Ali; Oyarzun, Diego I.; Palko, James W.; Hawks, Steven A.; Stadermann, Michael; Santiago, Juan G.; Stanford Microfluidics Lab Team; Lawrence Livermore National Lab Team

    2017-11-01

    Capacitive deionization (CDI) is a promising technology for removal of toxic ions and salt from water. In CDI, an applied potential of about 1 V to pairs of porous electrodes (e.g. activated carbon) induces ion electromigration and electrostatic adsorption at electrode surfaces. Immobile surface functional groups play a critical role in the type and capacity of ion adsorption, and this can dramatically change desalination performance. Here we use models and experiments to study weak electrolyte surface groups which protonate and/or deprotonate based on their acid/base dissociation constants and local pore pH. Net chemical surface charge and differential capacitance can thus vary during CDI operation. In this work, we present a CDI model based on weak electrolyte acid/base equilibria theory. Our model incorporates preferential cation (anion) adsorption for activated carbon with acidic (basic) surface groups. We validated our model with experiments on custom-built CDI cells with a variety of functionalizations. To this end, we varied electrolyte pH and measured adsorption of individual anionic and cationic ions using inductively coupled plasma mass spectrometry (ICP-MS) and ion chromatography (IC) techniques. Our model shows good agreement with experiments and provides a framework useful in the design of CDI control schemes.
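    The acid/base equilibrium at the heart of such models is the familiar dissociation relation; a minimal sketch (the pKa value is arbitrary, not a measured surface-group constant):

```python
def frac_deprotonated(pH, pKa):
    # fraction of acidic surface groups that are deprotonated (hence charged)
    # at a given local pore pH, from the acid dissociation equilibrium
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# net negative charge on an acidic carbon surface grows as pore pH rises
for pH in (3.0, 5.0, 7.0):
    print(pH, round(frac_deprotonated(pH, pKa=5.0), 3))
```

    Because charging and discharging shift the local pore pH, this fraction, and with it the chemical surface charge, varies during CDI operation, as the abstract notes.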

  11. Composite load spectra for select space propulsion structural components

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Kurth, R. E.; Ho, H.

    1986-01-01

    A multiyear program is being performed with the objective of developing generic load models, with multiple levels of progressive sophistication, to simulate the composite (combined) load spectra that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. Progress of the first year's effort includes completion of a sufficient portion of each task -- probabilistic models, code development, validation, and an initial operational code. From its inception, this code has had an expert-system philosophy that can be extended throughout the program and in the future. The initial operational code is only applicable to turbine blade type loadings. The probabilistic model included in the operational code has fitting routines for loads that utilize a modified Discrete Probabilistic Distribution termed RASCAL, a barrier crossing method, and a Monte Carlo method. An initial load model was developed by Battelle that is currently used for the slowly varying duty cycle type loading. The intent is to use the model and related codes essentially in the current form for all loads that are based on measured or calculated data that have followed a slowly varying profile.

  12. Study on individual stochastic model of GNSS observations for precise kinematic applications

    NASA Astrophysics Data System (ADS)

    Próchniewicz, Dominik; Szpunar, Ryszard

    2015-04-01

    The proper definition of the mathematical positioning model, which comprises functional and stochastic models, is a prerequisite to obtaining the optimal estimation of unknown parameters. Especially important in this definition is realistic modelling of the stochastic properties of observations, which are more receiver-dependent and time-varying than the deterministic relationships. This is particularly true with respect to precise kinematic applications, which are characterized by weakened model strength. In this case, an incorrect or simplified definition of the stochastic model means that ambiguity resolution performance and position estimation accuracy can be limited. In this study we investigate methods of describing the measurement noise of GNSS observations and its impact on deriving a precise kinematic positioning model. In particular, stochastic modelling of individual components of the variance-covariance matrix of observation noise, performed using observations from a very short baseline and a laboratory GNSS signal generator, is analyzed. Experimental test results indicate that utilizing an individual stochastic model of observations, including elevation dependency and cross-correlation, instead of assuming that raw measurements are independent with equal variance, improves the performance of ambiguity resolution as well as rover positioning accuracy. This shows that the proposed stochastic assessment method could be an important part of a complex calibration procedure for GNSS equipment.
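    A common concrete form of elevation dependency is a 1/sin(el) noise model, with observation weights taken as the inverse variance. The sketch below uses illustrative coefficients, not values estimated in the study:

```python
import math

def obs_sigma(elev_deg, a=0.003, b=0.003):
    # elevation-dependent standard deviation sigma = a + b/sin(el), in metres;
    # the observation weight in the stochastic model is 1/sigma**2
    # (a and b are illustrative, not calibrated, values)
    return a + b / math.sin(math.radians(elev_deg))

# low-elevation observations are noisier and therefore down-weighted
for el in (10, 30, 90):
    print(el, round(obs_sigma(el), 4))
```

    A full individual stochastic model of the kind studied here would additionally carry per-satellite variances and cross-correlation terms in the variance-covariance matrix rather than this single diagonal rule.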

  13. Controlling for seasonal patterns and time varying confounders in time-series epidemiological models: a simulation study.

    PubMed

    Perrakis, Konstantinos; Gryparis, Alexandros; Schwartz, Joel; Le Tertre, Alain; Katsouyanni, Klea; Forastiere, Francesco; Stafoggia, Massimo; Samoli, Evangelia

    2014-12-10

    An important topic when estimating the effect of air pollutants on human health is choosing the best method to control for seasonal patterns and time varying confounders, such as temperature and humidity. Semi-parametric Poisson time-series models include smooth functions of calendar time and weather effects to control for potential confounders. Case-crossover (CC) approaches are considered efficient alternatives that control seasonal confounding by design and allow inclusion of smooth functions of weather confounders through their equivalent Poisson representations. We evaluate both methodological designs with respect to seasonal control and compare spline-based approaches, using natural splines and penalized splines, and two time-stratified CC approaches. For the spline-based methods, we consider fixed degrees of freedom, minimization of the partial autocorrelation function, and general cross-validation as smoothing criteria. Issues of model misspecification with respect to weather confounding are investigated under simulation scenarios, which allow quantifying omitted, misspecified, and irrelevant-variable bias. The simulations are based on fully parametric mechanisms designed to replicate two datasets with different mortality and atmospheric patterns. Overall, minimum partial autocorrelation function approaches provide more stable results for high mortality counts and strong seasonal trends, whereas natural splines with fixed degrees of freedom perform better for low mortality counts and weak seasonal trends followed by the time-season-stratified CC model, which performs equally well in terms of bias but yields higher standard errors. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Model parameter estimation approach based on incremental analysis for lithium-ion batteries without using open circuit voltage

    NASA Astrophysics Data System (ADS)

    Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui

    2015-08-01

    To improve the suitability of lithium-ion battery model under varying scenarios, such as fluctuating temperature and SoC variation, dynamic model with parameters updated realtime should be developed. In this paper, an incremental analysis-based auto regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, the pseudo random binary sequence (PRBS) and urban dynamic driving sequences (UDDSs) profiles are performed to verify the realtime performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model can present high accuracy and suitability for parameter identification without using open circuit voltage.
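    The recursive estimation step can be sketched as plain forgetting-factor RLS on a toy first-order ARX system. The bias-correction term of the paper's CRLS algorithm is omitted, and the system gains are arbitrary:

```python
import numpy as np

def rls_identify(phi, y, lam=0.99):
    # plain recursive least squares with forgetting factor lam
    n = phi.shape[1]
    theta = np.zeros(n)
    P = 1e4 * np.eye(n)                      # large initial covariance
    for p_k, y_k in zip(phi, y):
        K = P @ p_k / (lam + p_k @ P @ p_k)  # gain vector
        theta = theta + K * (y_k - p_k @ theta)
        P = (P - np.outer(K, p_k) @ P) / lam
    return theta

# toy first-order ARX system y[k] = 0.9*y[k-1] + 0.5*u[k-1] (assumed gains)
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]
Phi = np.column_stack([y[:-1], u[:-1]])
theta = rls_identify(Phi, y[1:])
print(np.round(theta, 2))
```

    The forgetting factor lets the estimates track slowly varying parameters, which is what makes this family of algorithms suitable for temperature- and SoC-dependent battery models.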

  15. Multicentre external validation of IOTA prediction models and RMI by operators with varied training

    PubMed Central

    Sayasneh, A; Wynants, L; Preisler, J; Kaijser, J; Johnson, S; Stalder, C; Husicka, R; Abdallah, Y; Raslan, F; Drought, A; Smith, A A; Ghaem-Maghami, S; Epstein, E; Van Calster, B; Timmerman, D; Bourne, T

    2013-01-01

    Background: Correct characterisation of ovarian tumours is critical to optimise patient care. The purpose of this study is to evaluate the diagnostic performance of the International Ovarian Tumour Analysis (IOTA) logistic regression model (LR2), ultrasound Simple Rules (SR), the Risk of Malignancy Index (RMI) and subjective assessment (SA) for preoperative characterisation of adnexal masses, when ultrasonography is performed by examiners with different background training and experience. Methods: A 2-year prospective multicentre cross-sectional study. Thirty-five level II ultrasound examiners contributed in three UK hospitals. Transvaginal ultrasonography was performed using a standardised approach. The final outcome was the surgical findings and histological diagnosis. To characterise the adnexal masses, the six-variable prediction model (LR2) with a cutoff of 0.1, the RMI with a cutoff of 200, ten SR (five rules for malignancy and five rules for benignity) and SA were applied. The areas under the curve (AUCs) for the performance of LR2 and RMI were calculated. Diagnostic performance measures for all models assessed were sensitivity, specificity, positive and negative likelihood ratios (LR+ and LR−), and the diagnostic odds ratio (DOR). Results: Nine hundred and sixty-two women with adnexal masses underwent transvaginal ultrasonography, of whom 255 had surgery. Prevalence of malignancy was 29% (49 primary invasive epithelial ovarian cancers, 18 borderline ovarian tumours, and 7 metastatic tumours). The AUCs for LR2 and RMI for all masses were 0.94 (95% confidence interval (CI): 0.89–0.97) and 0.90 (95% CI: 0.83–0.94), respectively. In premenopausal women, the LR2−RMI difference was 0.09 (95% CI: 0.03–0.15) compared with −0.02 (95% CI: −0.08 to 0.04) in postmenopausal women. 
For all masses, the DORs for LR2, RMI, SR+SA (using SA when SR inapplicable), SR+MA (assuming malignancy when SR inapplicable), and SA were 62 (95% CI: 27–142), 43 (95% CI: 19–97), 109 (95% CI: 44–274), 66 (95% CI: 27–158), and 70 (95% CI: 30–163), respectively. Conclusion: Overall, the test performance of IOTA prediction models and rules as well as the RMI was maintained in examiners with varying levels of training and experience. PMID:23674083
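    The reported performance measures follow directly from a 2×2 classification table; a small helper using hypothetical counts (not the study's actual table):

```python
def diag_measures(tp, fp, fn, tn):
    # sensitivity, specificity, likelihood ratios, and diagnostic odds ratio
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)          # LR+
    lr_neg = (1 - sens) / spec          # LR-
    dor = lr_pos / lr_neg               # equals (tp * tn) / (fp * fn)
    return sens, spec, lr_pos, lr_neg, dor

# hypothetical counts for a risk model applied at a fixed cutoff
sens, spec, lr_pos, lr_neg, dor = diag_measures(tp=45, fp=20, fn=4, tn=186)
print(round(dor, 1))
```

    The DOR condenses sensitivity and specificity into a single cutoff-dependent figure, which is why the study reports it alongside the AUCs.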

  16. An Overview of Quantitative Risk Assessment of Space Shuttle Propulsion Elements

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    1998-01-01

    Since the Space Shuttle Challenger accident in 1986, NASA has been working to incorporate quantitative risk assessment (QRA) in decisions concerning the Space Shuttle and other NASA projects. One current major NASA QRA study is the creation of a risk model for the overall Space Shuttle system. The model is intended to provide a tool to estimate Space Shuttle risk and to perform sensitivity analyses/trade studies, including the evaluation of upgrades. Marshall Space Flight Center (MSFC) is a part of the NASA team conducting the QRA study; MSFC responsibility involves modeling the propulsion elements of the Space Shuttle, namely: the External Tank (ET), the Solid Rocket Booster (SRB), the Reusable Solid Rocket Motor (RSRM), and the Space Shuttle Main Engine (SSME). This paper discusses the approach that MSFC has used to model its Space Shuttle elements, including insights obtained from this experience in modeling large-scale, highly complex systems with a varying availability of success/failure data. Insights, which are applicable to any QRA study, pertain to organizing the modeling effort, obtaining customer buy-in, preparing documentation, and using varied modeling methods and data sources. Also provided is an overall evaluation of the study results, including the strengths and the limitations of the MSFC QRA approach and of QRA technology in general.

  17. Critical research issues in development of biomathematical models of fatigue and performance.

    PubMed

    Dinges, David F

    2004-03-01

    This article reviews the scientific research needed to ensure the continued development, validation, and operational transition of biomathematical models of fatigue and performance. These models originated from the need to ascertain the formal underlying relationships among sleep and circadian dynamics in the control of alertness and neurobehavioral performance capability. Priority should be given to research that further establishes their basic validity, including the accuracy of the core mathematical formulae and parameters that instantiate the interactions of sleep/wake and circadian processes. Since individuals can differ markedly and reliably in their responses to sleep loss and to countermeasures for it, models must incorporate estimates of these inter-individual differences, and research should identify predictors of them. To ensure models accurately predict recovery of function with sleep of varying durations, dose-response curves for recovery of performance as a function of prior sleep homeostatic load and the number of days of recovery are needed. It is also necessary to establish whether the accuracy of models is affected by using work/rest schedules as surrogates for sleep/wake inputs to models. Given the importance of light as both a circadian entraining agent and an alerting agent, research should determine the extent to which light input could incrementally improve model predictions of performance, especially in persons exposed to night work, jet lag, and prolonged work. Models seek to estimate behavioral capability and/or the relative risk of adverse events in a fatigued state. Research is needed on how best to scale and interpret metrics of behavioral capability, and incorporate factors that amplify or diminish the relationship between model predictions of performance and risk outcomes.

  18. Gas Flow in the Capillary of the Atmosphere-to-Vacuum Interface of Mass Spectrometers

    NASA Astrophysics Data System (ADS)

    Skoblin, Michael; Chudinov, Alexey; Soulimenkov, Ilia; Brusov, Vladimir; Kozlovskiy, Viacheslav

    2017-10-01

    Numerical simulations of a gas flow through a capillary forming part of a mass spectrometer atmospheric interface were performed using a detailed laminar flow model. The simulated interface consisted of atmospheric and forevacuum volumes connected via a thin capillary. The pressure in the forevacuum volume, where the gas expanded after passing through the capillary, was varied over the wide range from 10 to 900 mbar in order to study the volume flow rate as well as the other flow parameters as functions of the pressure drop between the atmospheric and forevacuum volumes. The capillary wall temperature was varied in the range from 24 to 150 °C. Numerical integration of the complete system of Navier-Stokes equations for a viscous compressible gas, taking into account the heat transfer, was performed using the standard gas dynamic simulation software package ANSYS CFX. The simulation results were compared with experimental measurements of gas flow parameters, both performed using our experimental setup and taken from the literature. The simulated volume flow rates through the capillary differed by no more than 10% from the measured ones over the entire pressure and temperature ranges. It was concluded that the detailed laminar model is able to quantitatively describe the measured gas flow rates through capillaries under the conditions considered.
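    For orientation, the classical analytic baseline for this geometry is isothermal, laminar, compressible Poiseuille flow, which gives the volume flow rate directly from the pressure drop. A sketch under those simplifying assumptions (the dimensions and viscosity below are illustrative; this ignores the entrance effects and heat transfer captured by the full Navier-Stokes simulation in the record):

```python
import math

def capillary_volume_flow(p_in, p_out, radius, length, mu):
    """
    Isothermal, laminar, compressible Poiseuille flow through a capillary.
    Returns the volume flow rate referenced to the outlet pressure p_out.
    All quantities in SI units (Pa, m, Pa*s); result in m^3/s.
    """
    return math.pi * radius**4 * (p_in**2 - p_out**2) / (16.0 * mu * length * p_out)

# Illustrative case: 0.25 mm radius, 10 cm capillary, air-like viscosity,
# atmospheric inlet expanding into a 100 mbar forevacuum volume
q = capillary_volume_flow(p_in=1.013e5, p_out=1.0e4, radius=0.25e-3,
                          length=0.10, mu=1.8e-5)
```

    The p_in² − p_out² term is what distinguishes the compressible result from the incompressible Hagen-Poiseuille law, and it is why the outlet volume flow grows strongly as the forevacuum pressure is lowered.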

  19. Gas Flow in the Capillary of the Atmosphere-to-Vacuum Interface of Mass Spectrometers.

    PubMed

    Skoblin, Michael; Chudinov, Alexey; Soulimenkov, Ilia; Brusov, Vladimir; Kozlovskiy, Viacheslav

    2017-10-01

    Numerical simulations of a gas flow through a capillary forming part of a mass spectrometer atmospheric interface were performed using a detailed laminar flow model. The simulated interface consisted of atmospheric and forevacuum volumes connected via a thin capillary. The pressure in the forevacuum volume, where the gas expanded after passing through the capillary, was varied over the wide range from 10 to 900 mbar in order to study the volume flow rate as well as the other flow parameters as functions of the pressure drop between the atmospheric and forevacuum volumes. The capillary wall temperature was varied in the range from 24 to 150 °C. Numerical integration of the complete system of Navier-Stokes equations for a viscous compressible gas, taking into account the heat transfer, was performed using the standard gas dynamic simulation software package ANSYS CFX. The simulation results were compared with experimental measurements of gas flow parameters, both performed using our experimental setup and taken from the literature. The simulated volume flow rates through the capillary differed by no more than 10% from the measured ones over the entire pressure and temperature ranges. It was concluded that the detailed laminar model is able to quantitatively describe the measured gas flow rates through capillaries under the conditions considered.

  20. Long-Term Prognostic Validity of Talent Selections: Comparing National and Regional Coaches, Laypersons and Novices

    PubMed Central

    Schorer, Jörg; Rienhoff, Rebecca; Fischer, Lennart; Baker, Joseph

    2017-01-01

    In most sports, the development of elite athletes is a long-term process of talent identification and support. Typically, talent selection systems administer a multi-faceted strategy including national coach observations and varying physical and psychological tests when deciding who is chosen for talent development. The aim of this exploratory study was to evaluate the prognostic validity of talent selections by varying groups 10 years after they had been conducted. This study used a unique, multi-phased approach. Phase 1 involved players (n = 68) in 2001 completing a battery of general and sport-specific tests of handball ‘talent’ and performance. In Phase 2, national and regional coaches (n = 7) in 2001 who attended training camps identified the most talented players. In Phase 3, current novice and advanced handball players (n = 12 in each group) selected the most talented from short videos of matches played during the talent camp. Analyses compared predictions among all groups with a best model-fit derived from the motor tests. Results revealed little difference between regional and national coaches in the prediction of future performance and little difference in forecasting performance between novices and players. The best model-fit regression by the motor-tests outperformed all predictions. While several limitations are discussed, this study is a useful starting point for future investigations considering athlete selection decisions in talent identification in sport. PMID:28744238

  1. Long-Term Prognostic Validity of Talent Selections: Comparing National and Regional Coaches, Laypersons and Novices.

    PubMed

    Schorer, Jörg; Rienhoff, Rebecca; Fischer, Lennart; Baker, Joseph

    2017-01-01

    In most sports, the development of elite athletes is a long-term process of talent identification and support. Typically, talent selection systems administer a multi-faceted strategy including national coach observations and varying physical and psychological tests when deciding who is chosen for talent development. The aim of this exploratory study was to evaluate the prognostic validity of talent selections by varying groups 10 years after they had been conducted. This study used a unique, multi-phased approach. Phase 1 involved players (n = 68) in 2001 completing a battery of general and sport-specific tests of handball 'talent' and performance. In Phase 2, national and regional coaches (n = 7) in 2001 who attended training camps identified the most talented players. In Phase 3, current novice and advanced handball players (n = 12 in each group) selected the most talented from short videos of matches played during the talent camp. Analyses compared predictions among all groups with a best model-fit derived from the motor tests. Results revealed little difference between regional and national coaches in the prediction of future performance and little difference in forecasting performance between novices and players. The best model-fit regression by the motor-tests outperformed all predictions. While several limitations are discussed, this study is a useful starting point for future investigations considering athlete selection decisions in talent identification in sport.

  2. On the use of musculoskeletal models to interpret motor control strategies from performance data

    NASA Astrophysics Data System (ADS)

    Cheng, Ernest J.; Loeb, Gerald E.

    2008-06-01

    The intrinsic viscoelastic properties of muscle are central to many theories of motor control. Much of the debate over these theories hinges on varying interpretations of these muscle properties. In the present study, we describe methods whereby a comprehensive musculoskeletal model can be used to make inferences about motor control strategies that would account for behavioral data. Muscle activity and kinematic data from a monkey were recorded while the animal performed a single degree-of-freedom pointing task in the presence of pseudo-random torque perturbations. The monkey's movements were simulated by a musculoskeletal model with accurate representations of musculotendon morphometry and contractile properties. The model was used to quantify the impedance of the limb while moving rapidly, the differential action of synergistic muscles, the relative contribution of reflexes to task performance and the completeness of recorded EMG signals. Current methods to address these issues in the absence of musculoskeletal models were compared with the methods used in the present study. We conclude that musculoskeletal models and kinetic analysis can improve the interpretation of kinematic and electrophysiological data, in some cases by illuminating shortcomings of the experimental methods or underlying assumptions that may otherwise escape notice.

  3. High performance HRM: NHS employee perspectives.

    PubMed

    Hyde, Paula; Sparrow, Paul; Boaden, Ruth; Harris, Claire

    2013-01-01

    The purpose of this paper is to examine National Health Service (NHS) employee perspectives of how high performance human resource (HR) practices contribute to their performance. The paper draws on an extensive qualitative study of the NHS. A novel two-part method was used; the first part used focus group data from managers to identify high-performance HR practices specific to the NHS. Employees then conducted a card-sort exercise where they were asked how or whether the practices related to each other and how each practice affected their work. In total, 11 high performance HR practices relevant to the NHS were identified. Also identified were four reactions to a range of HR practices, which the authors developed into a typology according to the anticipated beneficiaries (personal gain, organisation gain, both gain, and no-one gains). Employees were able to form their own patterns (mental models) of performance contribution for a range of HR practices (60 interviewees produced 91 groupings). These groupings indicated three bundles particular to the NHS (professional development, employee contribution and NHS deal). These mental models indicate employee perceptions about how health services are organised and delivered in the NHS and illustrate the extant mental models of health care workers. As health services are rearranged and financial pressures begin to bite, these mental models will affect employee reactions to changes both positively and negatively. The novel method allows for identification of mental models that explain how NHS workers understand service delivery. It also delineates the complex and varied relationships between HR practices and individual performance.

  4. Apical stress distribution on maxillary central incisor during various orthodontic tooth movements by varying cemental and two different periodontal ligament thicknesses: a FEM study.

    PubMed

    Vikram, N Raj; Senthil Kumar, K S; Nagachandran, K S; Hashir, Y Mohamed

    2012-01-01

    During fixed orthodontic therapy, when the stress levels in the periodontal ligament (PDL) exceed an optimum level, root resorption may result. The aim was to determine the apical stress incident on the maxillary central incisor during tooth movement with varying cemental and periodontal ligament thickness by Finite Element Method (FEM) modeling. A three-dimensional finite element model of a maxillary central incisor, along with enamel, dentin, cementum, PDL and alveolar bone, was recreated using EZIDCOM and AUTOCAD software. ALTAIR HyperMesh 7.0 was used to create the finite element meshwork of the tooth. This virtual model was transferred to the finite element analysis software ANSYS, where different tooth movements were performed. Cemental thickness at the root apex was varied from 200 μm to 1000 μm in increments of 200 μm. PDL thickness was varied as 0.24 mm and 0.15 mm. Intrusive, extrusive, rotation and tipping forces were delivered to determine the apical stress for each set of parameters. Results indicated that the apical stress induced in the cementum and PDL increased with an increase in cementum and PDL thickness, respectively. Apical stress induced in the cementum remained the same or decreased with an increase in PDL thickness. Apical stress induced in the PDL decreased with an increase in cementum thickness. The study concluded that the clinical delivery of orthodontic forces will cause stress in the cementum and PDL. Hence, it is necessary to limit the orthodontic force to prevent root resorption.

  5. Trap configuration and spacing influences parameter estimates in spatial capture-recapture models

    USGS Publications Warehouse

    Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew

    2014-01-01

    An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
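    The spatial scale parameter sigma discussed in this record enters spatial capture-recapture models through the detection function, most commonly the half-normal form, in which the probability of detecting an individual at a trap decays with distance from its activity centre. A minimal sketch (the baseline detection probability and sigma value below are illustrative, not from the simulations described):

```python
import math

def detection_prob(distance, p0, sigma):
    """
    Half-normal detection function commonly used in spatial
    capture-recapture: p0 is the baseline detection probability at a
    trap located at the activity centre, and sigma is the spatial scale
    parameter related to home-range size.
    """
    return p0 * math.exp(-distance**2 / (2.0 * sigma**2))

# Illustrative values: baseline detection 0.2, sigma = 2 km.
# The record's guideline is to space traps no more than ~2*sigma apart.
p_near = detection_prob(distance=1.0, p0=0.2, sigma=2.0)
p_far = detection_prob(distance=6.0, p0=0.2, sigma=2.0)
```

    This shape makes the trap-spacing guideline intuitive: beyond roughly 2×sigma, detection probability has fallen far enough that traps contribute little information about an individual's activity centre.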

  6. Assessing the evolution of primary healthcare organizations and their performance (2005-2010) in two regions of Québec province: Montréal and Montérégie

    PubMed Central

    2010-01-01

    Background The Canadian healthcare system is currently experiencing important organizational transformations through the reform of primary healthcare (PHC). These reforms vary in scope but share a common feature of proposing the transformation of PHC organizations by implementing new models of PHC organization. These models vary in their performance with respect to client affiliation, utilization of services, experience of care and perceived outcomes of care. Objectives In early 2005 we conducted a study in the two most populous regions of Quebec province (Montreal and Montérégie) which assessed the association between prevailing models of primary healthcare (PHC) and population-level experience of care. The goal of the present research project is to track the evolution of PHC organizational models and their relative performance through the reform process (from 2005 until 2010) and to assess factors at the organizational and contextual levels that are associated with the transformation of PHC organizations and their performance. Methods/Design This study will consist of three interrelated surveys, hierarchically nested. The first survey is a population-based survey of randomly-selected adults from two populous regions in the province of Quebec. This survey will assess the current affiliation of people with PHC organizations, their level of utilization of healthcare services, attributes of their experience of care, reception of preventive and curative services and perception of unmet needs for care. The second survey is an organizational survey of PHC organizations assessing aspects related to their vision, organizational structure, level of resources, and clinical practice characteristics. This information will serve to develop a taxonomy of organizations using a mixed methods approach of factorial analysis and principal component analysis. The third survey is an assessment of the organizational context in which PHC organizations are evolving. 
The five year prospective period will serve as a natural experiment to assess contextual and organizational factors (in 2005) associated with migration of PHC organizational models into new forms or models (in 2010) and assess the impact of this evolution on the performance of PHC. Discussion The results of this study will shed light on changes brought about in the organization of PHC and on factors associated with these changes. PMID:21122145

  7. Assessing the evolution of primary healthcare organizations and their performance (2005-2010) in two regions of Québec province: Montréal and Montérégie.

    PubMed

    Levesque, Jean-Frédéric; Pineault, Raynald; Provost, Sylvie; Tousignant, Pierre; Couture, Audrey; Da Silva, Roxane Borgès; Breton, Mylaine

    2010-12-01

    The Canadian healthcare system is currently experiencing important organizational transformations through the reform of primary healthcare (PHC). These reforms vary in scope but share a common feature of proposing the transformation of PHC organizations by implementing new models of PHC organization. These models vary in their performance with respect to client affiliation, utilization of services, experience of care and perceived outcomes of care. In early 2005 we conducted a study in the two most populous regions of Quebec province (Montreal and Montérégie) which assessed the association between prevailing models of primary healthcare (PHC) and population-level experience of care. The goal of the present research project is to track the evolution of PHC organizational models and their relative performance through the reform process (from 2005 until 2010) and to assess factors at the organizational and contextual levels that are associated with the transformation of PHC organizations and their performance. This study will consist of three interrelated surveys, hierarchically nested. The first survey is a population-based survey of randomly-selected adults from two populous regions in the province of Quebec. This survey will assess the current affiliation of people with PHC organizations, their level of utilization of healthcare services, attributes of their experience of care, reception of preventive and curative services and perception of unmet needs for care. The second survey is an organizational survey of PHC organizations assessing aspects related to their vision, organizational structure, level of resources, and clinical practice characteristics. This information will serve to develop a taxonomy of organizations using a mixed methods approach of factorial analysis and principal component analysis. The third survey is an assessment of the organizational context in which PHC organizations are evolving. 
The five year prospective period will serve as a natural experiment to assess contextual and organizational factors (in 2005) associated with migration of PHC organizational models into new forms or models (in 2010) and assess the impact of this evolution on the performance of PHC. The results of this study will shed light on changes brought about in the organization of PHC and on factors associated with these changes.

  8. A comparative study of artificial neural network, adaptive neuro fuzzy inference system and support vector machine for forecasting river flow in the semiarid mountain region

    NASA Astrophysics Data System (ADS)

    He, Zhibin; Wen, Xiaohu; Liu, Hu; Du, Jun

    2014-02-01

    Data-driven models are very useful for river flow forecasting when the underlying physical relationships are not fully understood, but it is not clear whether these models still perform well in the small river basins of semiarid mountain regions with complicated topography. In this study, the potential of three different data-driven methods, artificial neural network (ANN), adaptive neuro fuzzy inference system (ANFIS) and support vector machine (SVM), was assessed for forecasting river flow in the semiarid mountain region of northwestern China. The models analyzed different combinations of antecedent river flow values, and the appropriate input vector was selected based on the analysis of residuals. The performance of the ANN, ANFIS and SVM models on the training and validation sets was compared with the observed data. The model consisting of three antecedent values of flow was selected as the best-fit model for river flow forecasting. For a more thorough evaluation of the ANN, ANFIS and SVM results, four standard quantitative statistical performance measures, the coefficient of correlation (R), root mean squared error (RMSE), Nash-Sutcliffe efficiency coefficient (NS) and mean absolute relative error (MARE), were employed to evaluate the various models developed. The results indicate that the performance obtained by ANN, ANFIS and SVM in terms of the different evaluation criteria during the training and validation periods does not vary substantially; the performance of all three models in river flow forecasting was satisfactory. A detailed comparison of the overall performance indicated that the SVM model performed better than ANN and ANFIS on the validation data sets. The results also suggest that the ANN, ANFIS and SVM methods can be successfully applied to establish river flow forecasting models in semiarid mountain regions with complicated topography.
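    The four evaluation measures named in this record have standard definitions and are straightforward to compute from paired observed/simulated series. A minimal sketch (the flow values are hypothetical, for illustration only):

```python
import math

def evaluation_metrics(obs, sim):
    """Coefficient of correlation (R), root mean squared error (RMSE),
    Nash-Sutcliffe efficiency (NS) and mean absolute relative error
    (MARE) for paired observed and simulated series."""
    n = len(obs)
    mean_obs = sum(obs) / n
    mean_sim = sum(sim) / n
    cov = sum((o - mean_obs) * (s - mean_sim) for o, s in zip(obs, sim))
    var_obs = sum((o - mean_obs) ** 2 for o in obs)
    var_sim = sum((s - mean_sim) ** 2 for s in sim)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    r = cov / math.sqrt(var_obs * var_sim)
    rmse = math.sqrt(sse / n)
    ns = 1.0 - sse / var_obs          # 1 = perfect; <0 = worse than mean
    mare = sum(abs(o - s) / o for o, s in zip(obs, sim)) / n
    return r, rmse, ns, mare

# Hypothetical daily flows (m^3/s), for illustration only
obs = [12.0, 15.0, 9.0, 20.0, 14.0]
sim = [11.5, 14.0, 10.0, 19.0, 15.0]
r, rmse, ns, mare = evaluation_metrics(obs, sim)
```

    Reporting several complementary measures, as the study does, matters because R rewards correlation regardless of bias, while NS and RMSE penalise systematic offsets.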

  9. Prediction on carbon dioxide emissions based on fuzzy rules

    NASA Astrophysics Data System (ADS)

    Pauzi, Herrini; Abdullah, Lazim

    2014-06-01

    There are several ways to predict air quality, varying from simple regression to models based on artificial intelligence. Most conventional methods are not able to provide good forecasting performance due to problems of non-linearity, uncertainty and complexity in the data. Artificial intelligence techniques have been used successfully in modeling air quality to cope with these problems. This paper describes a fuzzy inference system (FIS) to predict CO2 emissions in Malaysia. Furthermore, an adaptive neuro-fuzzy inference system (ANFIS) is used to compare the prediction performance. Data on five variables: energy use, gross domestic product per capita, population density, combustible renewables and waste, and CO2 intensity, are employed in this comparative study. The results from the two proposed models are compared, and it is clearly shown that ANFIS outperforms FIS in CO2 prediction.
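    To make the fuzzy-inference idea concrete, the core of such a system is a set of membership functions and rules whose outputs are combined into a crisp prediction. A deliberately tiny single-input sketch (the rules, membership parameters and output levels are all hypothetical; a real FIS for this problem would use all five input variables):

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_co2(energy):
    """Two hypothetical rules mapping normalised energy use (0-1) to a
    CO2-emission level, combined by a weighted average of the rule
    consequents (zero-order Sugeno style)."""
    w_low = tri(energy, -0.5, 0.0, 0.6)    # rule 1: energy use is LOW
    w_high = tri(energy, 0.4, 1.0, 1.5)    # rule 2: energy use is HIGH
    out_low, out_high = 2.0, 8.0           # hypothetical consequents (Mt)
    return (w_low * out_low + w_high * out_high) / (w_low + w_high)

low = predict_co2(0.1)
high = predict_co2(0.9)
```

    ANFIS differs from a hand-built FIS like this in that the membership parameters and consequents are tuned from data by a neural-network-style learning procedure, which is typically why it outperforms a fixed FIS.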

  10. Development of a scaled-down aerobic fermentation model for scale-up in recombinant protein vaccine manufacturing.

    PubMed

    Farrell, Patrick; Sun, Jacob; Gao, Meg; Sun, Hong; Pattara, Ben; Zeiser, Arno; D'Amore, Tony

    2012-08-17

    A simple approach to the development of an aerobic scaled-down fermentation model is presented to obtain more consistent process performance during the scale-up of recombinant protein manufacture. Using a constant volumetric oxygen mass transfer coefficient (k(L)a) for the criterion of a scale-down process, the scaled-down model can be "tuned" to match the k(L)a of any larger-scale target by varying the impeller rotational speed. This approach is demonstrated for a protein vaccine candidate expressed in recombinant Escherichia coli, where process performance is shown to be consistent among 2-L, 20-L, and 200-L scales. An empirical correlation for k(L)a has also been employed to extrapolate to larger manufacturing scales. Copyright © 2012 Elsevier Ltd. All rights reserved.
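    The "tuning" step described here, matching the small-scale kLa to a larger-scale target by adjusting impeller speed, can be sketched with a generic power-law correlation. The functional form and all constants below are illustrative assumptions, not the correlation used in the paper:

```python
def kla_correlation(n_impeller, vs, a=0.02, alpha=1.2, beta=0.5):
    """
    Hypothetical empirical correlation of the common power-law form
        kLa = a * N^alpha * vs^beta
    where N is impeller speed (1/s) and vs is superficial gas velocity
    (m/s). Constants a, alpha, beta are illustrative only.
    """
    return a * n_impeller**alpha * vs**beta

def speed_to_match_kla(kla_target, vs, a=0.02, alpha=1.2, beta=0.5):
    """Invert the correlation: the small-scale impeller speed that
    reproduces the kLa of the larger-scale target vessel."""
    return (kla_target / (a * vs**beta)) ** (1.0 / alpha)

# Tune the 2-L scale to a hypothetical 200-L target kLa of 0.05 1/s
n = speed_to_match_kla(kla_target=0.05, vs=0.01)
```

    The round trip is exact by construction: feeding the tuned speed back into the correlation recovers the target kLa, which is the criterion of the scale-down approach.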

  11. A discrete time-varying internal model-based approach for high precision tracking of a multi-axis servo gantry.

    PubMed

    Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing

    2014-09-01

    In this paper, we consider the discrete time-varying internal model-based control design for high precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low order robust time-varying stabilizer is further synthesized. In a discrete time setting, the high precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real time experimental results are provided, achieving tracking errors of around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Evaluation of kinematics and injuries to restrained occupants in far-side crashes using full-scale vehicle and human body models.

    PubMed

    Arun, Mike W J; Umale, Sagar; Humm, John R; Yoganandan, Narayan; Hadagali, Prasanaah; Pintar, Frank A

    2016-09-01

    The objective of the current study was to perform a parametric study with different impact objects, impact locations, and impact speeds by analyzing occupant kinematics and injury estimations using a whole-vehicle and whole-body finite element-human body model (FE-HBM). To confirm the HBM responses, the biofidelity of the model was validated against postmortem human surrogate (PMHS) sled test data using correlational analysis (CORA). Full-scale simulations were performed using a restrained Global Human Body Model Consortium (GHBMC) model seated on a 2001 Ford Taurus model using a far-side lateral impact condition. The driver seat was placed in the center position to represent a nominal initial impact condition. A 3-point seat belt with pretensioner and retractor was used to restrain the GHBMC model. A parametric study was performed using 12 simulations by varying impact locations, impacting object, and impact speed using the full-scale models. In all 12 simulations, the principal direction of force (PDOF) was selected as 90°. The impacting objects were a 10-in.-diameter rigid vertical pole and a movable deformable barrier. The impact location of the pole was at the C-pillar in the first case, at the B-pillar in the second case, and, finally, at the A-pillar in the third case. The vehicle and the GHBMC models were assigned an initial velocity of 35 km/h (high speed) or 15 km/h (low speed). Excursions of the head center of gravity (CG), T6, and pelvis were measured from the simulations. In addition, injury risk estimations were performed on the head, rib cage, lungs, kidneys, liver, spleen, and pelvis. The average CORA rating was 0.7. The shoulder belt slipped in B- and C-pillar impacts but somewhat engaged in the A-pillar case. In the B-pillar case, the head contacted the intruding struck-side structures, indicating a higher risk of injury.
Occupant kinematics depended on interaction with restraints and internal structures, especially the passenger seat. Risk analysis indicated that the head had the highest risk of sustaining an injury in the B-pillar case compared to the other 2 cases. A higher lap belt load (3.4 kN) may correspond to the Abbreviated Injury Scale (AIS) 2 pelvic injury observed in the B-pillar case. Risk of injury to other soft anatomical structures varied with impact configuration and restraint interaction. In general, the results indicated that the high-speed impacts against the pole resulted in the most severe injuries and highest excursions, followed by low-speed pole, high-speed moving deformable barrier (MDB), and low-speed MDB impacts. The vehicle and occupant kinematics varied with the different impact setups, and the latter were likely influenced by restraint effectiveness. Increased restraint engagement increased the injury risk to the corresponding anatomic structure, whereas ineffective restraint engagement increased occupant excursion, resulting in a direct impact with the struck-side interior structures.

  13. Progress Toward an Integration of Process-Structure-Property-Performance Models for "Three-Dimensional (3-D) Printing" of Titanium Alloys

    NASA Astrophysics Data System (ADS)

    Collins, P. C.; Haden, C. V.; Ghamarian, I.; Hayes, B. J.; Ales, T.; Penso, G.; Dixit, V.; Harlow, G.

    2014-07-01

    Electron beam direct manufacturing, synonymously known as electron beam additive manufacturing, along with other additive "3-D printing" manufacturing processes, are receiving widespread attention as a means of producing net-shape (or near-net-shape) components, owing to potential manufacturing benefits. Yet, materials scientists know that differences in manufacturing processes often significantly influence the microstructure of even widely accepted materials and, thus, impact the properties and performance of a material in service. It is important to accelerate the understanding of the processing-structure-property relationship of materials being produced via these novel approaches in a framework that considers the performance in a statistically rigorous way. This article describes the development of a process model, the assessment of key microstructural features to be incorporated into a microstructure simulation model, a novel approach to extract a constitutive equation to predict tensile properties in Ti-6Al-4V (Ti-64), and a probabilistic approach to measure the fidelity of the property model against real data. This integrated approach will provide designers a tool to vary process parameters and understand the influence on performance, enabling design and optimization for these highly visible manufacturing approaches.

  14. Modeling the effect of channel number and interaction on consonant recognition in a cochlear implant peak-picking strategy.

    PubMed

    Verschuur, Carl

    2009-03-01

    Difficulties in speech recognition experienced by cochlear implant users may be attributed both to information loss caused by signal processing and to information loss associated with the interface between the electrode array and auditory nervous system, including cross-channel interaction. The objective of the work reported here was to partial out the relative contribution of these different factors to consonant recognition. This was achieved by comparing patterns of consonant feature recognition as a function of channel number and presence/absence of background noise in users of the Nucleus 24 device with normal-hearing subjects listening to acoustic models that mimicked processing of that device. Additionally, in the acoustic model experiment, a simulation of cross-channel spread of excitation, or "channel interaction," was varied. Results showed that acoustic model experiments were highly correlated with patterns of performance in better-performing cochlear implant users. Deficits to consonant recognition in this subgroup could be attributed to cochlear implant processing, whereas channel interaction played a much smaller role in determining performance errors. The study also showed that large changes to channel number in the Advanced Combination Encoder signal processing strategy led to no substantial changes in performance.
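    The peak-picking ("n-of-m") channel selection that the acoustic model mimics can be sketched as follows. This is an illustrative sketch, not the Nucleus 24 ACE implementation, and the channel count and envelope values are hypothetical:

```python
import numpy as np

def pick_channels(envelopes, n_maxima):
    """Keep only the n largest channel envelopes in one analysis frame;
    all other channels are zeroed (i.e., not stimulated)."""
    envelopes = np.asarray(envelopes, dtype=float)
    out = np.zeros_like(envelopes)
    idx = np.argsort(envelopes)[-n_maxima:]  # indices of the n largest envelopes
    out[idx] = envelopes[idx]
    return out

# Hypothetical 8-channel envelope frame, selecting 4 maxima per frame
frame = [0.1, 0.8, 0.3, 0.9, 0.05, 0.4, 0.2, 0.6]
selected = pick_channels(frame, 4)
```

    Reducing `n_maxima` or the total channel count discards spectral detail, which is one of the processing-related information losses the study quantifies.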

  15. Design and implementation of ergonomic performance measurement system at a steel plant in India.

    PubMed

    Ray, Pradip Kumar; Tewari, V K

    2012-01-01

    Management of Tata Steel, the largest private-sector steel-making company in India, felt the need to develop a framework for determining the levels of ergonomic performance at its different workplaces. The objectives of the study were twofold: to identify and characterize the ergonomic variables for a given worksystem with regard to work efficiency, operator safety, and working conditions, and to design a comprehensive Ergonomic Performance Indicator (EPI) for quantitative determination of the ergonomic status and maturity of a given worksystem. The IIT Kharagpur study team consisted of three faculty members, and the management of Tata Steel formed an eleven-member team to implement the EPI model. In order to design and develop the EPI model with the full participation and understanding of the concerned personnel of Tata Steel, a three-phase action plan for the project was prepared: preparation and data collection, detailed structuring, and validation of the EPI model. Identification of ergonomic performance factors, development of an interaction matrix, design of an assessment tool, and testing and validation of the assessment tool (EPI) in varied situations are the major steps in these phases. The case study discusses in detail the EPI model and its applications.

  16. Time-response shaping using output to input saturation transformation

    NASA Astrophysics Data System (ADS)

    Chambon, E.; Burlion, L.; Apkarian, P.

    2018-03-01

    For linear systems, the control law design is often performed so that the resulting closed loop meets specific frequency-domain requirements. However, in many cases, it may be observed that the obtained controller does not enforce time-domain requirements, amongst which is the objective of keeping a scalar output variable in a given interval. In this article, a transformation is proposed to convert prescribed bounds on an output variable into time-varying saturations on the synthesised linear scalar control law. This transformation uses some well-chosen time-varying coefficients so that the resulting time-varying saturation bounds do not overlap in the presence of disturbances. Using an anti-windup approach, it is shown that the origin of the resulting closed loop is globally asymptotically stable and that the constrained output variable satisfies the time-domain constraints in the presence of an unknown finite-energy-bounded disturbance. An application to a linear ball and beam model is presented.

  17. Induction motor speed control using varied duty cycle terminal voltage via PI controller

    NASA Astrophysics Data System (ADS)

    Azwin, A.; Ahmed, S.

    2018-03-01

    This paper deals with a PI speed controller for the three-phase induction motor using the PWM technique. The generated PWM signal drives a voltage source inverter with an optimal duty cycle on a simplified induction motor model. A control algorithm for generating the PWM control signal is developed. The obtained results show that the steady-state error and overshoot of the developed system remain within limits under different speed and load conditions. The robustness of the control performance shows potential for improving induction motor performance.
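    A discrete PI update of this kind, mapping speed error to a saturated PWM duty cycle, might look like the following minimal sketch; the gains, limits, and sample time are hypothetical, not the paper's tuning:

```python
def pi_duty_cycle(error, integral, kp, ki, dt, d_min=0.0, d_max=1.0):
    """One PI step: accumulate the integral term, then clamp the duty
    cycle to the valid PWM range [d_min, d_max]."""
    integral += error * dt
    duty = kp * error + ki * integral
    duty = max(d_min, min(d_max, duty))  # saturate the PWM duty cycle
    return duty, integral

# Hypothetical closed-loop iteration toward a reference speed of 100 rad/s
integral, duty = 0.0, 0.0
for measured_speed in [0.0, 50.0, 90.0, 98.0]:
    duty, integral = pi_duty_cycle(100.0 - measured_speed, integral,
                                   kp=0.005, ki=0.02, dt=0.01)
```

    A production controller would also need anti-windup handling when the duty cycle saturates; that is omitted here for brevity.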

  18. A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine

    PubMed Central

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Kamimori, Gary H.; Moon, James E.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

    Study Objectives: Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. Methods: We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied the performance estimated in the absence of caffeine by a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine). We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). Results: The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. Conclusions: The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. Citation: Ramakrishnan S, Wesensten NJ, Kamimori GH, Moon JE, Balkin TJ, Reifman J. A unified model of performance for predicting the effects of sleep and caffeine. SLEEP 2016;39(10):1827–1841. PMID:27397562
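    The multiplicative structure described above can be sketched as follows. The one-compartment pharmacokinetic profile and all rate and potency constants here are assumptions for illustration, not the published UMP parameter values:

```python
import math

def caffeine_factor(dose_mg, t_h, ka=1.0, ke=0.2, m=0.01):
    """Dose-dependent factor g(t) in (0, 1]: a higher predicted caffeine
    concentration reduces predicted impairment (hypothetical constants;
    assumes ka != ke)."""
    # Bateman one-compartment absorption/elimination profile (arbitrary units)
    conc = dose_mg * ka / (ka - ke) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))
    return 1.0 / (1.0 + m * conc)

def caffeinated_impairment(baseline_impairment, dose_mg, t_h):
    """UMP-style combination: multiply the caffeine-free estimate by the factor."""
    return baseline_impairment * caffeine_factor(dose_mg, t_h)
```

    At dose zero the factor is exactly 1, so the model reduces to the caffeine-free UMP prediction.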

  19. The Structure of Morpho-Functional Conditions Determining the Level of Sports Performance of Young Badminton Players

    PubMed Central

    Jaworski, Janusz; Żak, Michał

    2015-01-01

    The aim of the study was to determine the structure of morpho-functional models that determine the level of sports performance in three consecutive stages of training of young badminton players. In the course of the study, 3 groups of young badminton players were examined: 40 preadolescents aged 11–13, 32 adolescents aged 14–16, and 24 adolescents aged 17–19. The scope of the study involved basic anthropometric measurements, computer tests analysing motor coordination abilities, motor skills encompassing speed, muscular power and strength, and cardiorespiratory endurance. Results of the study indicate that the structure of morpho-functional models varies at different stages of sports training. Sets of variables determining sports performance create characteristic complexes of variables that do not constitute permanent models. The dominance of somatic features and coordination abilities in the early stages of badminton training changes for the benefit of speed and strength abilities. PMID:26557205

  20. Visual-search model observer for assessing mass detection in CT

    NASA Astrophysics Data System (ADS)

    Karbaschi, Zohreh; Gifford, Howard C.

    2017-03-01

    Our aim is to devise model observers (MOs) to evaluate acquisition protocols in medical imaging. To optimize protocols for human observers, an MO must reliably interpret images containing quantum and anatomical noise under aliasing conditions. In this study of sampling parameters for simulated lung CT, the lesion-detection performance of human observers was compared with that of visual-search (VS) observers, a channelized nonprewhitening (CNPW) observer, and a channelized Hotelling (CH) observer. Scans of a mathematical torso phantom modeled single-slice parallel-hole CT with varying numbers of detector pixels and angular projections. Circular lung lesions had a fixed radius. Two-dimensional FBP reconstructions were performed. A localization ROC study was conducted with the VS, CNPW and human observers, while the CH observer was applied in a location-known ROC study. Changing the sampling parameters had negligible effect on the CNPW and CH observers, whereas several VS observers demonstrated a sensitivity to sampling artifacts that was in agreement with how the humans performed.
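    For the location-known ROC study, the channelized Hotelling observer computes a linear template from class statistics of the channel outputs. A generic sketch (with synthetic, hypothetical channel data rather than the study's lung-CT channels) is:

```python
import numpy as np

def hotelling_template(chan_signal, chan_noise):
    """CHO template w = S^{-1} (mean_signal - mean_noise), where S is the
    average intraclass covariance of the channel outputs (rows = images)."""
    chan_signal = np.asarray(chan_signal, float)
    chan_noise = np.asarray(chan_noise, float)
    ds = chan_signal.mean(axis=0) - chan_noise.mean(axis=0)
    S = 0.5 * (np.cov(chan_signal.T) + np.cov(chan_noise.T))
    return np.linalg.solve(S, ds)

def cho_rating(w, channel_outputs):
    """Scalar rating for one image: t = w . v (thresholded for ROC analysis)."""
    return float(np.dot(w, channel_outputs))
```

    Ratings from signal-present and signal-absent images are then swept over a threshold to trace the ROC curve.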

  1. Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data.

    PubMed

    Reisetter, Anna C; Muehlbauer, Michael J; Bain, James R; Nodzenski, Michael; Stevens, Robert D; Ilkayeva, Olga; Metzger, Boyd E; Newgard, Christopher B; Lowe, William L; Scholtens, Denise M

    2017-02-02

    Metabolomics offers a unique integrative perspective for health research, reflecting genetic and environmental contributions to disease-related phenotypes. Identifying robust associations in population-based or large-scale clinical studies demands large numbers of subjects and therefore sample batching for gas-chromatography/mass spectrometry (GC/MS) non-targeted assays. When run over weeks or months, technical noise due to batch and run-order threatens data interpretability. Application of existing normalization methods to metabolomics is challenged by unsatisfied modeling assumptions and, notably, failure to address batch-specific truncation of low abundance compounds. To curtail technical noise and make GC/MS metabolomics data amenable to analyses describing biologically relevant variability, we propose mixture model normalization (mixnorm) that accommodates truncated data and estimates per-metabolite batch and run-order effects using quality control samples. Mixnorm outperforms other approaches across many metrics, including improved correlation of non-targeted and targeted measurements and superior performance when metabolite detectability varies according to batch. For some metrics, particularly when truncation is less frequent for a metabolite, mean centering and median scaling demonstrate comparable performance to mixnorm. When quality control samples are systematically included in batches, mixnorm is uniquely suited to normalizing non-targeted GC/MS metabolomics data due to explicit accommodation of batch effects, run order and varying thresholds of detectability. Especially in large-scale studies, normalization is crucial for drawing accurate conclusions from non-targeted GC/MS metabolomics data.
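    A drastically simplified version of the idea, per-metabolite batch centering anchored to quality-control samples on the log scale, can be sketched as below. The full mixnorm method additionally models run order and the batch-specific truncation of low-abundance compounds, which this sketch ignores:

```python
import numpy as np

def qc_batch_center(log_abund, batch, is_qc):
    """Per-metabolite batch correction: shift each batch so its QC-sample
    mean matches the overall QC mean (simplified sketch of the mixnorm idea)."""
    log_abund = np.asarray(log_abund, float)
    batch = np.asarray(batch)
    is_qc = np.asarray(is_qc, bool)
    corrected = log_abund.copy()
    grand_qc_mean = log_abund[is_qc].mean()
    for b in np.unique(batch):
        qc_mean_b = log_abund[is_qc & (batch == b)].mean()
        corrected[batch == b] += grand_qc_mean - qc_mean_b  # remove batch offset
    return corrected
```

    Within-batch differences between samples are preserved; only the batch-level offset estimated from the QC samples is removed.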

  2. Performance evaluation of air quality models for predicting PM10 and PM2.5 concentrations at urban traffic intersection during winter period.

    PubMed

    Gokhale, Sharad; Raokhande, Namita

    2008-05-01

    There are several models that can be used to evaluate roadside air quality. Comparing the operational performance of different models under local conditions is desirable so that the best-performing model can be identified. Three air quality models, namely the 'modified General Finite Line Source Model' (M-GFLSM) of particulates, the 'California Line Source' (CALINE3) model, and the 'California Line Source for Queuing & Hot Spot Calculations' (CAL3QHC) model, have been identified for evaluating the air quality at one of the busiest traffic intersections in the city of Guwahati. These models have been evaluated statistically against the vehicle-derived airborne particulate mass emissions in two sizes, i.e. PM10 and PM2.5, the prevailing meteorology, and the temporal distribution of the measured daily average PM10 and PM2.5 concentrations in wintertime. The study has shown that the CAL3QHC model makes better predictions than the other models for varied meteorology and traffic conditions. The detailed study reveals that the agreement between the measured and the modeled PM10 and PM2.5 concentrations has been reasonably good for the CALINE3 and CAL3QHC models, with CAL3QHC performing better than CALINE3. The monthly performance measures led to similar results. These two models also outperformed the M-GFLSM across the wind speed classes, except for low winds (<1 m s(-1)), for which the M-GFLSM model showed a tendency toward better performance for PM10. Nevertheless, the CAL3QHC model outperformed the others for both particulate sizes and all wind classes, and can therefore be the model of choice for air quality assessment at urban traffic intersections.
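    Operational evaluation of dispersion models typically relies on paired observed/modeled statistics such as fractional bias and normalized mean square error. The abstract does not name its exact statistics, so the following definitions are illustrative:

```python
def fb_nmse(observed, modeled):
    """Fractional bias (FB) and normalized mean square error (NMSE) for
    paired observed/modeled concentrations; FB near 0 and a small NMSE
    indicate good operational performance."""
    n = len(observed)
    mo = sum(observed) / n   # mean observed concentration
    mm = sum(modeled) / n    # mean modeled concentration
    fb = 2.0 * (mo - mm) / (mo + mm)
    nmse = sum((o - m) ** 2 for o, m in zip(observed, modeled)) / (n * mo * mm)
    return fb, nmse
```

    A positive FB indicates underprediction by the model, a negative FB overprediction.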

  3. The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide

    PubMed Central

    Folly, Walter Sydney Dutra

    2011-01-01

    Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil, and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431
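    The parameter-extrapolation step can be sketched generically: fit the model for two years, then extend each parameter linearly to the target year. The parameter values below are hypothetical placeholders, not the published fits:

```python
def extrapolate_params(params_by_year, target_year):
    """Linearly extrapolate each fitted parameter from the earliest to the
    latest available year, out to target_year."""
    years = sorted(params_by_year)
    y0, y1 = years[0], years[-1]
    return [p0 + (p1 - p0) * (target_year - y0) / (y1 - y0)
            for p0, p1 in zip(params_by_year[y0], params_by_year[y1])]

# Two hypothetical parameters fitted for 2001 and 2002, extrapolated to 2003
params_2003 = extrapolate_params({2001: [1.0, 10.0], 2002: [2.0, 12.0]}, 2003)
```

    The extrapolated parameter set is then plugged back into the six-parameter distribution to predict the next year's rates.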

  4. Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model

    NASA Astrophysics Data System (ADS)

    Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.

    2018-03-01

    Quantum-informed ferroelectric phase field models capable of predicting material behavior, are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. A necessary analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a poly-domain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.

  5. The threshold bias model: a mathematical model for the nomothetic approach of suicide.

    PubMed

    Folly, Walter Sydney Dutra

    2011-01-01

    Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil, and Sri Lanka. The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health.

  6. [Evaluating the performance of species distribution models Biomod2 and MaxEnt using the giant panda distribution data].

    PubMed

    Luo, Mei; Wang, Hao; Lyu, Zhi

    2017-12-01

    Species distribution models (SDMs) are widely used by researchers and conservationists. Predictions from different models vary significantly, which makes it difficult for users to select a model. In this study, we evaluated the performance of two commonly used SDMs, Biomod2 and Maximum Entropy (MaxEnt), with real presence/absence data for the giant panda, and used three indicators, i.e., the area under the ROC curve (AUC), the true skill statistic (TSS), and Cohen's Kappa, to evaluate the accuracy of the two models' predictions. The results showed that both models could produce accurate predictions with adequate occurrence inputs and simulation repeats. Compared to MaxEnt, Biomod2 made more accurate predictions, especially when occurrence inputs were few. However, Biomod2 was more difficult to apply, required longer running times, and had less data processing capability. To choose the right model, users should refer to the error requirements of their objectives. MaxEnt should be considered if the error requirement is clear and both models can achieve it; otherwise, we recommend the use of Biomod2 as much as possible.
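    Of the three indicators, TSS comes directly from the presence/absence confusion matrix; a minimal sketch of its standard definition is:

```python
def true_skill_statistic(tp, fp, fn, tn):
    """TSS = sensitivity + specificity - 1, ranging from -1 to +1,
    with 0 indicating no better than random prediction."""
    sensitivity = tp / (tp + fn)  # fraction of presences predicted present
    specificity = tn / (tn + fp)  # fraction of absences predicted absent
    return sensitivity + specificity - 1.0
```

    Unlike Kappa, TSS is insensitive to prevalence, which is one reason SDM studies often report both.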

  7. Numerical assessment of the influence of different joint hysteretic models over the seismic behaviour of Moment Resisting Steel Frames

    NASA Astrophysics Data System (ADS)

    Giordano, V.; Chisari, C.; Rizzano, G.; Latour, M.

    2017-10-01

    The main aim of this work is to understand how the prediction of the seismic performance of moment-resisting (MR) steel frames depends on the modelling of their dissipative zones when the structure geometry (number of stories and bays) and seismic excitation source vary. In particular, a parametric analysis involving 4 frames was carried out, and, for each one, the full-strength beam-to-column connections were modelled according to 4 numerical approaches with different degrees of sophistication (Smooth Hysteretic Model, Bouc-Wen, Hysteretic and simple Elastic-Plastic models). Subsequently, Incremental Dynamic Analyses (IDA) were performed by considering two different earthquakes (Spitak and Kobe). The preliminary results collected so far pointed out that the influence of the joint modelling on the overall frame response is negligible up to interstorey drift ratio values equal to those conservatively assumed by the codes to define conventional collapse (0.03 rad). Conversely, if more realistic ultimate interstorey drift values are considered for the q-factor evaluation, the influence of joint modelling can be significant, and thus may require accurate modelling of its cyclic behavior.

  8. A new technique for thermodynamic engine modeling

    NASA Astrophysics Data System (ADS)

    Matthews, R. D.; Peters, J. E.; Beckel, S. A.; Shizhi, M.

    1983-12-01

    Reference is made to the equations given by Matthews (1983) for piston engine performance, which show that this performance depends on four fundamental engine efficiencies (combustion, thermodynamic cycle or indicated thermal, volumetric, and mechanical) as well as on engine operation and design parameters. This set of equations is seen to suggest a different technique for engine modeling; that is, that each efficiency should be modeled individually and the efficiency submodels then combined to obtain an overall engine model. A simple method for predicting the combustion efficiency of piston engines is therefore required. Various methods are proposed here and compared with experimental results. These combustion efficiency models are then combined with various models for the volumetric, mechanical, and indicated thermal efficiencies to yield three different engine models of varying degrees of sophistication. Comparisons are then made of the predictions of the resulting engine models with experimental data. It is found that combustion efficiency is almost independent of load, speed, and compression ratio and is not strongly dependent on fuel type, at least so long as the hydrogen-to-carbon ratio is reasonably close to that for isooctane.
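    The compositional idea above — model each efficiency separately, then combine the submodels into an overall performance estimate — can be sketched as follows. The grouping of terms and all numbers are assumptions for illustration, not Matthews' exact equation set:

```python
def brake_power(eta_comb, eta_therm, eta_vol, eta_mech,
                displacement_m3, speed_rps, rho_air, afr, lhv_j_per_kg):
    """Brake power as the product of the four efficiency submodels applied
    to the fuel energy rate (four-stroke engine: one intake per two revs)."""
    m_air = eta_vol * rho_air * displacement_m3 * speed_rps / 2.0   # kg/s
    m_fuel = m_air / afr                                            # kg/s
    return eta_comb * eta_therm * eta_mech * m_fuel * lhv_j_per_kg  # W
```

    With plausible values (eta_comb=0.98, eta_therm=0.35, eta_vol=0.9, eta_mech=0.85, a 2 L engine at 3000 rpm, stoichiometric isooctane-like fuel), this yields on the order of 47 kW, so each efficiency submodel can be refined independently without changing the overall structure.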

  9. Artificial neural networks in Space Station optimal attitude control

    NASA Astrophysics Data System (ADS)

    Kumar, Renjith R.; Seywald, Hans; Deshpande, Samir M.; Rahman, Zia

    1992-08-01

    Innovative techniques of using 'Artificial Neural Networks' (ANN) for improving the performance of the pitch axis attitude control system of Space Station Freedom using Control Moment Gyros (CMGs) are investigated. The first technique uses a feedforward ANN with multilayer perceptrons to obtain an on-line controller which improves the performance of the control system via a model following approach. The second technique uses a single layer feedforward ANN with a modified back propagation scheme to estimate the internal plant variations and the external disturbances separately. These estimates are then used to solve two differential Riccati equations to obtain time varying gains which improve the control system performance in successive orbits.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allan, Benjamin A.

    We report on the use and design of a portable, extensible performance data collection tool motivated by modeling needs of the high performance computing systems co-design community. The lightweight performance data collector with Eiger support is intended to be a tailorable tool, not a shrink-wrapped library product, as profiling needs vary widely. A single code markup scheme is reported which, based on compilation flags, can send performance data from parallel applications to CSV files, to an Eiger mysql database, or (in a non-database environment) to flat files for later merging and loading on a host with mysql available. The tool supports C, C++, and Fortran applications.

  11. Linking in situ LAI and fine resolution remote sensing data to map reference LAI over cropland and grassland using geostatistical regression method

    NASA Astrophysics Data System (ADS)

    He, Yaqian; Bo, Yanchen; Chai, Leilei; Liu, Xiaolong; Li, Aihua

    2016-08-01

    Leaf Area Index (LAI) is an important parameter of vegetation structure. A number of moderate-resolution LAI products have been produced to meet the urgent need for large-scale vegetation monitoring. High-resolution LAI reference maps are necessary to validate these LAI products. This study used a geostatistical regression (GR) method to estimate LAI reference maps by linking in situ LAI and Landsat TM/ETM+ and SPOT-HRV data over two cropland and two grassland sites. To explore the discrepancies of employing different vegetation indices (VIs) on estimating LAI reference maps, this study established the GR models for different VIs, including the difference vegetation index (DVI), normalized difference vegetation index (NDVI), and ratio vegetation index (RVI). To further assess the performance of the GR model, the results from the GR and Reduced Major Axis (RMA) models were compared. The results show that the performance of the GR model varies between the cropland and grassland sites. At the cropland sites, the GR model based on DVI provides the best estimation, while at the grassland sites, the GR model based on DVI performs poorly. Compared to the RMA model, the GR model improves the accuracy of reference LAI maps in terms of root mean square error (RMSE) and bias.
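    The three vegetation indices compared in the GR models have standard definitions in terms of near-infrared (NIR) and red surface reflectance:

```python
def vegetation_indices(nir, red):
    """Standard band combinations from NIR and red reflectance."""
    dvi = nir - red                    # difference vegetation index
    ndvi = (nir - red) / (nir + red)   # normalized difference vegetation index
    rvi = nir / red                    # ratio vegetation index
    return dvi, ndvi, rvi
```

    NDVI saturates over dense canopies, which is one reason index choice can change the GR model's site-dependent performance.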

  12. Phase and speed synchronization control of four eccentric rotors driven by induction motors in a linear vibratory feeder with unknown time-varying load torques using adaptive sliding mode control algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangxi; Zhang, Xueliang; Chen, Xiaozhe; Wen, Bangchun; Wang, Bo

    2016-05-01

    In this paper, phase and speed synchronization control of four eccentric rotors (ERs) driven by induction motors in a linear vibratory feeder with unknown time-varying load torques is studied. Firstly, the electromechanical coupling model of the linear vibratory feeder is established by associating the induction motor model with the dynamic model of the system, a typical underactuated model. According to the characteristics of the linear vibratory feeder, the complex control problem of the underactuated electromechanical coupling model reduces to phase and speed synchronization control of the four ERs. In order to keep the four ERs operating synchronously with zero phase differences, phase and speed synchronization controllers are designed by employing an adaptive sliding mode control (ASMC) algorithm via a modified master-slave structure. The stability of the controllers is proved by the Lyapunov stability theorem. The proposed controllers are verified by simulation via the Matlab/Simulink program and compared with the conventional sliding mode control (SMC) algorithm. The results show the proposed controllers can reject the time-varying load torques effectively and the four ERs can operate synchronously with zero phase differences. Moreover, the control performance is better than that of the conventional SMC algorithm and the chattering phenomenon is attenuated. Furthermore, the effects of reference speed and parametric perturbations are discussed to show the strong robustness of the proposed controllers. Finally, experiments on a simple vibratory test bench are performed with the proposed controllers and without control, respectively, to further validate the effectiveness of the proposed controllers.
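    The sliding-mode idea behind the controllers can be illustrated with a single slave rotor tracking the master: the sliding surface combines phase and speed errors, and a boundary-layer tanh replaces the discontinuous sign() to attenuate chattering. The gains here are hypothetical, and the full ASMC scheme additionally adapts its gains to the unknown load torques:

```python
import math

def smc_torque(phase_err, speed_err, c=5.0, k=2.0, eps=0.05):
    """Compensation torque from the sliding surface s = c*e_phase + e_speed;
    tanh(s/eps) is a smoothed stand-in for sign(s)."""
    s = c * phase_err + speed_err
    return -k * math.tanh(s / eps)
```

    When the slave leads the master (positive phase error), the torque is negative, driving the phase difference back toward zero.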

  13. Thermal modeling of a cryogenic turbopump for space shuttle applications.

    NASA Technical Reports Server (NTRS)

    Knowles, P. J.

    1971-01-01

    Thermal modeling of a cryogenic pump and a hot-gas turbine in a turbopump assembly proposed for the Space Shuttle is described in this paper. A model, developed by identifying the heat-transfer regimes and incorporating their dependencies into a turbopump system model, included heat transfer for two-phase cryogen, hot-gas (200 R) impingement on turbine blades, gas impingement on rotating disks, and parallel-plate fluid flow. The 'thermal analyzer' program employed to develop this model was the TRW Systems Improved Numerical Differencing Analyzer (SINDA). This program uses finite differencing with lumped-parameter representation for each node. Also discussed are model development, simulations of turbopump startup/shutdown operations, and the effects of varying turbopump parameters on the thermal performance.

  14. Microscopic pressure-cooker model for studying molecules in confinement

    NASA Astrophysics Data System (ADS)

    Santamaria, Ruben; Adamowicz, Ludwik; Rosas-Acevedo, Hortensia

    2015-04-01

    A model for a system of a finite number of molecules in confinement is presented and expressions for determining the temperature, pressure, and volume of the system are derived. The present model is a generalisation of the Zwanzig-Langevin model because it includes pressure effects in the system. It also has general validity, preserves the ergodic hypothesis, and provides a formal framework for previous studies of hydrogen clusters in confinement. The application of the model is illustrated by an investigation of a set of prebiotic compounds exposed to varying pressure and temperature. The simulations performed within the model involve the use of a combination of molecular dynamics and density functional theory methods implemented on a computer system with a mixed CPU-GPU architecture.

  15. Cost and schedule estimation study report

    NASA Technical Reports Server (NTRS)

    Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon

    1993-01-01

    This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.
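    The size-adjusted-for-reuse effort model has the following general shape. The 20% reuse weight is a commonly cited SEL convention, and the productivity value is a hypothetical placeholder rather than the updated guideline from the study:

```python
def adjusted_size(new_sloc, reused_sloc, reuse_weight=0.2):
    """Effective size: reused code counts at a fraction of new code."""
    return new_sloc + reuse_weight * reused_sloc

def effort_hours(size_sloc, sloc_per_hour=3.5, growth_factor=1.0):
    """Baseline effort from adjusted size, a productivity parameter, and a
    growth parameter (both calibrated per environment in the SEL model)."""
    return growth_factor * size_sloc / sloc_per_hour
```

    Calibrating the productivity and growth parameters against completed projects, as the study does, is what turns this skeleton into a usable planning model.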

  16. Experimentally Modeling Black and White Hole Event Horizons via Fluid Flow

    NASA Astrophysics Data System (ADS)

    Manheim, Marc E.; Lindner, John F.; Manz, Niklas

    We will present a scaled-down experiment that hydrodynamically models the interaction between electromagnetic waves and black/white holes. It has been mathematically proven that gravity waves in water can behave analogously to electromagnetic waves traveling through spacetime. In this experiment, gravity waves are generated in a water tank and propagate in a direction opposed to a flow of varying rate. We observe a noticeable change in the wave's spreading behavior as it travels through the simulated horizon, with wave speeds decreasing down to standing waves depending on the opposing flow rate. Such an experiment has already been performed in a 97.2 cubic meter tank. We reduced the size significantly to be able to perform the experiment under normal lab conditions.

  17. Hormone Purification by Isoelectric Focusing

    NASA Technical Reports Server (NTRS)

    Bier, M.

    1985-01-01

    Various ground-based research approaches are being applied to a more definitive evaluation of the natures and degrees of electroosmosis effects on the separation capabilities of the Isoelectric Focusing (IEF) process. A primary instrumental system for this work involves rotationally stabilized, horizontal electrophoretic columns specially adapted for the IEF process. Representative adaptations include segmentation, baffles/screens, and surface coatings. Comparative performance and development testing are pursued against the type of column or cell established as an engineering model. Previously developed computer simulation capabilities are used to predict low-gravity behavior patterns and performance for IEF apparatus geometries of direct project interest. Three existing mathematical models plus potential new routines for particular aspects of simulating instrument fluid patterns with varied wall electroosmosis influences are being exercised.

  18. Ducted turbine theory with right angled ducts

    NASA Astrophysics Data System (ADS)

    McLaren-Gow, S.; Jamieson, P.; Graham, J. M. R.

    2014-06-01

    This paper describes the use of an inviscid approach to model a ducted turbine - also known as a diffuser augmented turbine - and a comparison of results with a particular one-dimensional theory. The aim of the investigation was to gain a better understanding of the relationship between a real duct and the ideal diffuser, which is a concept that is developed in the theory. A range of right angled ducts, which have a rim for a 90° exit angle, were modelled. As a result, the performance of right angled ducts has been characterised in inviscid flow. It was concluded that right angled ducts cannot match the performance of their associated ideal diffuser and that the optimum rotor loading for these turbines varies with the duct dimensions.

  19. Experimental Results from a Flat Plate, Turbulent Boundary Layer Modified for the Purpose of Drag Reduction

    NASA Astrophysics Data System (ADS)

    Elbing, Brian R.

    2006-11-01

    Recent experiments on a flat plate, turbulent boundary layer at high Reynolds numbers (>10^7) were performed to investigate various methods of reducing skin friction drag. The methods used involved injecting either air or a polymer solution into the boundary layer through a slot injector. Two slot injectors were mounted on the model with one located 1.4 meters downstream of the nose and the second located 3.75 meters downstream. This allowed for some synergetic experiments to be performed by varying the injections from each slot and comparing the skin friction along the plate. Skin friction measurements were made with 6 shear stress sensors flush mounted along the stream-wise direction of the model.

  20. Is First-Order Vector Autoregressive Model Optimal for fMRI Data?

    PubMed

    Ting, Chee-Ming; Seghouane, Abd-Krim; Khalid, Muhammad Usman; Salleh, Sh-Hussain

    2015-09-01

    We consider the problem of selecting the optimal orders of vector autoregressive (VAR) models for fMRI data. Many previous studies used a model order of one and ignored that it may vary considerably across data sets depending on different data dimensions, subjects, tasks, and experimental designs. In addition, the classical information criteria (IC) used (e.g., the Akaike IC (AIC)) are biased and inappropriate for high-dimensional fMRI data, which typically have a small sample size. We examine the mixed results on the optimal VAR orders for fMRI, especially the validity of the order-one hypothesis, by a comprehensive evaluation using different model selection criteria over three typical data types (a resting-state, an event-related design, and a block design data set) with varying time series dimensions obtained from distinct functional brain networks. We use a more balanced criterion, Kullback's IC (KIC), based on Kullback's symmetric divergence, which combines two directed divergences. We also consider the bias-corrected versions (AICc and KICc) to improve VAR model selection in small samples. Simulation results show better small-sample selection performance of the proposed criteria over the classical ones. Both bias-corrected ICs provide more accurate and consistent model order choices than their biased counterparts, which suffer from overfitting, with KICc performing the best. Results on real data show that orders greater than one were selected by all criteria across all data sets for the small to moderate dimensions, particularly from small, specific networks such as the resting-state default mode network and the task-related motor networks, whereas low orders close to one, but not necessarily one, were chosen for the large dimensions of full-brain networks.
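
The order-selection procedure in this record can be sketched with a toy VAR fitted by ordinary least squares and scored with a standard AIC. The data and criterion form here are illustrative (the paper's KIC/KICc variants follow the same pattern with different penalty terms):

```python
import numpy as np

def fit_var(x, p):
    """OLS fit of a VAR(p); returns residual covariance, parameter count, and effective n."""
    T, k = x.shape
    Y = x[p:]
    # Stack an intercept column and the p lagged copies of the series.
    X = np.hstack([np.ones((T - p, 1))] + [x[p - i:T - i] for i in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    sigma = resid.T @ resid / (T - p)
    return sigma, k * (k * p + 1), T - p

def aic(x, p):
    sigma, q, n = fit_var(x, p)
    return n * np.log(np.linalg.det(sigma)) + 2 * q

# Simulate a bivariate VAR(2) so the true order is known.
rng = np.random.default_rng(0)
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[-0.3, 0.0], [0.1, -0.2]])
x = np.zeros((500, 2))
for t in range(2, 500):
    x[t] = A1 @ x[t - 1] + A2 @ x[t - 2] + rng.normal(0, 1, 2)

best = min(range(1, 6), key=lambda p: aic(x, p))
print("selected order:", best)
```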

  1. Two-vehicle injury severity models based on integration of pavement management and traffic engineering factors.

    PubMed

    Jiang, Ximiao; Huang, Baoshan; Yan, Xuedong; Zaretzki, Russell L; Richards, Stephen

    2013-01-01

    The severity of traffic-related injuries has been studied by many researchers in recent decades. However, the evaluation of many factors is still in dispute and, until this point, few studies have taken into account pavement management factors as points of interest. The objective of this article is to evaluate the combined influences of pavement management factors and traditional traffic engineering factors on the injury severity of 2-vehicle crashes. This study examines 2-vehicle rear-end, sideswipe, and angle collisions that occurred on Tennessee state routes from 2004 to 2008. Both the traditional ordered probit (OP) model and the Bayesian ordered probit (BOP) model with a weak informative prior were fitted for each collision type. The performances of these models were evaluated based on the parameter estimates and deviances. The results indicated that pavement management factors played identical roles in all 3 collision types. Pavement serviceability produced significant positive effects on the severity of injuries. The pavement distress index (PDI), rutting depth (RD), and rutting depth difference between right and left wheels (RD_df) were not significant in any of these 3 collision types. The effects of traffic engineering factors varied across collision types, except that a few were consistently significant in all 3 collision types, such as annual average daily traffic (AADT), rural-urban location, speed limit, peak hour, and light condition. The findings of this study indicated that improved pavement quality does not necessarily lessen the severity of injuries when a 2-vehicle crash occurs. The effects of traffic engineering factors are not universal but vary by the type of crash. The study also found that the BOP model with a weak informative prior can be used as an alternative but was not superior to the traditional OP model in terms of overall performance.
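
The ordered probit model used in this record can be sketched on synthetic data: a latent severity index crosses estimated thresholds to produce ordinal injury categories. All predictors, coefficients, and cutpoints below are illustrative, not the study's estimates:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 2))              # stand-ins for crash covariates
beta_true = np.array([1.0, -0.5])
ystar = x @ beta_true + rng.normal(size=n)   # latent severity index
y = np.digitize(ystar, [-0.5, 0.8])          # ordinal levels 0, 1, 2

def negll(params):
    """Negative log-likelihood of the ordered probit."""
    beta, c1, d = params[:2], params[2], params[3]
    # Parameterize the second cutpoint as c1 + exp(d) to keep cuts ordered.
    cuts = np.array([-np.inf, c1, c1 + np.exp(d), np.inf])
    xb = x @ beta
    p = norm.cdf(cuts[y + 1] - xb) - norm.cdf(cuts[y] - xb)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(negll, np.array([0.0, 0.0, -0.3, 0.0]), method="BFGS")
```

With n = 2000 the maximum-likelihood estimates land close to the generating coefficients.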

  2. A Thermal Equilibrium Analysis of Line Contact Hydrodynamic Lubrication Considering the Influences of Reynolds Number, Load and Temperature

    PubMed Central

    Yu, Xiaoli; Sun, Zheng; Huang, Rui; Zhang, Yu; Huang, Yuqi

    2015-01-01

    Thermal effects such as conduction, convection and viscous dissipation are important to lubrication performance, and they vary with the friction conditions. These variations have caused some inconsistencies in the conclusions of different researchers regarding the relative contributions of these thermal effects. To reveal the relationship between the contributions of the thermal effects and the friction conditions, a steady-state THD analysis model was presented. The results indicate that the contribution of each thermal effect sharply varies with the Reynolds number and temperature. Convective effect could be dominant under certain conditions. Additionally, the accuracy of some simplified methods of thermo-hydrodynamic analysis is further discussed. PMID:26244665

  3. A Thermal Equilibrium Analysis of Line Contact Hydrodynamic Lubrication Considering the Influences of Reynolds Number, Load and Temperature.

    PubMed

    Yu, Xiaoli; Sun, Zheng; Huang, Rui; Zhang, Yu; Huang, Yuqi

    2015-01-01

    Thermal effects such as conduction, convection and viscous dissipation are important to lubrication performance, and they vary with the friction conditions. These variations have caused some inconsistencies in the conclusions of different researchers regarding the relative contributions of these thermal effects. To reveal the relationship between the contributions of the thermal effects and the friction conditions, a steady-state THD analysis model was presented. The results indicate that the contribution of each thermal effect sharply varies with the Reynolds number and temperature. Convective effect could be dominant under certain conditions. Additionally, the accuracy of some simplified methods of thermo-hydrodynamic analysis is further discussed.

  4. Degradation of lead-zirconate-titanate ceramics under different dc loads

    NASA Astrophysics Data System (ADS)

    Balke, Nina; Granzow, Torsten; Rödel, Jürgen

    2009-05-01

    During poling and application in actuators, piezoelectric ceramics like lead-zirconate-titanate are exposed to static or cyclically varying electric fields, often leading to pronounced changes in the electromechanical properties. These fatigue phenomena depend on time, peak electric load, and temperature. Although this process impacts the performance of many actuator materials, its physical understanding remains elusive. This paper proposes a set of key experiments to systematically investigate the changes in the ferroelectric hysteresis, field-dependent relative permittivity, and piezoelectric coefficient after submitting the material to dc loads of varying amplitude and duration. The observed effects are explained based on a model of domain stabilization due to charge accumulation at domain boundaries.

  5. Modeling and Simulation of Turbulent Flows through a Solar Air Heater Having Square-Sectioned Transverse Rib Roughness on the Absorber Plate

    PubMed Central

    Yadav, Anil Singh; Bhagoria, J. L.

    2013-01-01

    A solar air heater is a type of heat exchanger that transforms solar radiation into heat energy. The thermal performance of a conventional solar air heater has been found to be poor because of the low convective heat transfer coefficient from the absorber plate to the air. Use of artificial roughness on a surface is an effective technique to enhance the rate of heat transfer. A CFD-based investigation of turbulent flow through a solar air heater roughened with square-sectioned transverse rib roughness has been performed. Three different values of rib-pitch (P) and rib-height (e) have been taken such that the relative roughness pitch (P/e = 14.29) remains constant. The relative roughness height, e/D, varies from 0.021 to 0.06, and the Reynolds number, Re, varies from 3800 to 18,000. The results predicted by CFD show that the average heat transfer, average flow friction, and thermohydraulic performance parameter are strongly dependent on the relative roughness height. A maximum value of the thermohydraulic performance parameter has been found to be 1.8 for the range of parameters investigated. Comparisons with previously published work have been performed and found to be in excellent agreement. PMID:24222752

  6. Modeling and simulation of turbulent flows through a solar air heater having square-sectioned transverse rib roughness on the absorber plate.

    PubMed

    Yadav, Anil Singh; Bhagoria, J L

    2013-01-01

    A solar air heater is a type of heat exchanger that transforms solar radiation into heat energy. The thermal performance of a conventional solar air heater has been found to be poor because of the low convective heat transfer coefficient from the absorber plate to the air. Use of artificial roughness on a surface is an effective technique to enhance the rate of heat transfer. A CFD-based investigation of turbulent flow through a solar air heater roughened with square-sectioned transverse rib roughness has been performed. Three different values of rib-pitch (P) and rib-height (e) have been taken such that the relative roughness pitch (P/e = 14.29) remains constant. The relative roughness height, e/D, varies from 0.021 to 0.06, and the Reynolds number, Re, varies from 3800 to 18,000. The results predicted by CFD show that the average heat transfer, average flow friction, and thermohydraulic performance parameter are strongly dependent on the relative roughness height. A maximum value of the thermohydraulic performance parameter has been found to be 1.8 for the range of parameters investigated. Comparisons with previously published work have been performed and found to be in excellent agreement.
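
The thermohydraulic performance parameter reported above is commonly defined (in the Webb-style equal-pumping-power form) as the heat-transfer enhancement per unit friction penalty, eta = (Nu/Nu_s)/(f/f_s)^(1/3). A minimal sketch with illustrative values, not the paper's correlations:

```python
def thermohydraulic_performance(nu, nu_smooth, f, f_smooth):
    """Heat-transfer gain per unit pumping-power penalty relative to a smooth duct."""
    return (nu / nu_smooth) / (f / f_smooth) ** (1.0 / 3.0)

# Illustrative case: ribs double the Nusselt number but triple the friction factor.
eta = thermohydraulic_performance(nu=110.0, nu_smooth=55.0, f=0.03, f_smooth=0.01)
print(round(eta, 2))
```

A value above 1 means the roughness pays for its pressure-drop penalty, which is why the paper's maximum of 1.8 indicates a worthwhile design.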

  7. Species distribution models: A comparison of statistical approaches for livestock and disease epidemics.

    PubMed

    Hollings, Tracey; Robinson, Andrew; van Andel, Mary; Jewell, Chris; Burgman, Mark

    2017-01-01

    In livestock industries, reliable up-to-date spatial distribution and abundance records for animals and farms are critical for governments to manage and respond to risks. Yet few, if any, countries can afford to maintain comprehensive, up-to-date agricultural census data. Statistical modelling can be used as a proxy for such data but comparative modelling studies have rarely been undertaken for livestock populations. Widespread species, including livestock, can be difficult to model effectively due to complex spatial distributions that do not respond predictably to environmental gradients. We assessed three machine learning species distribution models (SDM) for their capacity to estimate national-level farm animal population numbers within property boundaries: boosted regression trees (BRT), random forests (RF) and K-nearest neighbour (K-NN). The models were built from a commercial livestock database and environmental and socio-economic predictor data for New Zealand. We used two spatial data stratifications to test (i) support for decision making in an emergency response situation, and (ii) the ability for the models to predict to new geographic regions. The performance of the three model types varied substantially, but the best performing models showed very high accuracy. BRTs had the best performance overall, but RF performed equally well or better in many simulations; RFs were superior at predicting livestock numbers for all but very large commercial farms. K-NN performed poorly relative to both RF and BRT in all simulations. The predictions of both multi species and single species models for farms and within hypothetical quarantine zones were very close to observed data. These models are generally applicable for livestock estimation with broad applications in disease risk modelling, biosecurity, policy and planning.

  8. Species distribution models: A comparison of statistical approaches for livestock and disease epidemics

    PubMed Central

    Robinson, Andrew; van Andel, Mary; Jewell, Chris; Burgman, Mark

    2017-01-01

    In livestock industries, reliable up-to-date spatial distribution and abundance records for animals and farms are critical for governments to manage and respond to risks. Yet few, if any, countries can afford to maintain comprehensive, up-to-date agricultural census data. Statistical modelling can be used as a proxy for such data but comparative modelling studies have rarely been undertaken for livestock populations. Widespread species, including livestock, can be difficult to model effectively due to complex spatial distributions that do not respond predictably to environmental gradients. We assessed three machine learning species distribution models (SDM) for their capacity to estimate national-level farm animal population numbers within property boundaries: boosted regression trees (BRT), random forests (RF) and K-nearest neighbour (K-NN). The models were built from a commercial livestock database and environmental and socio-economic predictor data for New Zealand. We used two spatial data stratifications to test (i) support for decision making in an emergency response situation, and (ii) the ability for the models to predict to new geographic regions. The performance of the three model types varied substantially, but the best performing models showed very high accuracy. BRTs had the best performance overall, but RF performed equally well or better in many simulations; RFs were superior at predicting livestock numbers for all but very large commercial farms. K-NN performed poorly relative to both RF and BRT in all simulations. The predictions of both multi species and single species models for farms and within hypothetical quarantine zones were very close to observed data. These models are generally applicable for livestock estimation with broad applications in disease risk modelling, biosecurity, policy and planning. PMID:28837685
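
The three-model comparison in the two records above (BRT vs. RF vs. K-NN) can be sketched with scikit-learn on synthetic data. The data and hyperparameters are illustrative stand-ins, not the study's New Zealand farm database or tuned models:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for environmental/socio-economic predictors
# and farm-level animal counts.
X, y = make_regression(n_samples=600, n_features=8, noise=10.0, random_state=0)

models = {
    "BRT": GradientBoostingRegressor(random_state=0),       # boosted regression trees
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "K-NN": KNeighborsRegressor(n_neighbors=5),
}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
for name, r2 in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: R^2 = {r2:.2f}")
```

Cross-validated R^2 gives a single comparable score per model type, analogous to the study's spatially stratified evaluation.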

  9. Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.

    PubMed

    Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo

    2013-01-01

    To predict the performance of flux trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operation characteristics in different steps. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To calculate the currents in the circuit by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code is then programmed based on this calculation model. As an example, a two-staged flux trapping generator is simulated using this computer code. Good agreement is achieved when comparing the simulation results with the measurements. Furthermore, this fast calculation model can easily be applied to predict the performance of other flux trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections, for design purposes.
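
The circuit idea behind this record can be sketched in one dimension: a seed current in a collapsing inductance with a resistive flux loss, integrated by forward Euler from d(LI)/dt = -RI. The inductance profile and component values are illustrative toys, not the generator in the paper:

```python
import numpy as np

# Toy generator: inductance collapses linearly from L0 to Lf during the burn.
L0, Lf, R = 10e-6, 0.5e-6, 1e-3      # henry, henry, ohm (illustrative values)
t_burn, I0 = 100e-6, 10e3            # burn time (s), seed current (A)

dt = 1e-8
t = np.arange(0.0, t_burn, dt)
L = L0 + (Lf - L0) * t / t_burn      # time-varying inductance L(t)
dLdt = (Lf - L0) / t_burn

I = np.empty_like(t)
I[0] = I0
for n in range(len(t) - 1):
    # d(LI)/dt = -R*I  =>  dI/dt = -(R + dL/dt) * I / L
    I[n + 1] = I[n] + dt * (-(R + dLdt) * I[n] / L[n])

gain = I[-1] / I0  # current amplification; below L0/Lf because of resistive flux loss
```

The resistive term makes the final gain fall short of the lossless limit L0/Lf, which is the role the paper's flux conservation coefficients play.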

  10. Simplified human thermoregulatory model for designing wearable thermoelectric devices

    NASA Astrophysics Data System (ADS)

    Wijethunge, Dimuthu; Kim, Donggyu; Kim, Woochul

    2018-02-01

    Research on wearable and implantable devices has become popular, driven by strong market demand. A precise understanding of the thermal properties of human skin, which are not constant but vary with ambient conditions, is required for the development of such devices. In this paper, we present a simplified human thermoregulatory model for accurately estimating the thermal properties of the skin without rigorous calculations. The proposed model considers a variable blood flow rate through the skin, evaporation functions, and variable convective heat transfer from the skin surface. In addition, wearable thermoelectric generation (TEG) and refrigeration devices were simulated. We found that deviations of 10-60% can result when estimating TEG performance without a human thermoregulatory model, because the thermal resistance of human skin adapts to the ambient condition. The simplicity of the modeling procedure presented in this work could be beneficial for optimizing and predicting the performance of any application that is directly coupled with skin thermal properties.
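
Why a fixed skin resistance misleads TEG estimates can be sketched with a series thermal-resistance network: the temperature drop across the TEG depends on how much of the core-to-ambient gradient the skin absorbs. All resistance values below are illustrative assumptions, not the paper's model:

```python
def teg_delta_t(t_core, t_amb, r_skin, r_teg, r_conv):
    """Temperature drop across a TEG in series with skin and convective resistances."""
    q = (t_core - t_amb) / (r_skin + r_teg + r_conv)  # heat flow through the stack, W
    return q * r_teg                                   # drop across the TEG, K

# Fixed-property skin vs. vasoconstricted (cold-adapted) skin in a 15 °C environment.
dt_nominal = teg_delta_t(37.0, 15.0, r_skin=20.0, r_teg=50.0, r_conv=100.0)
dt_adapted = teg_delta_t(37.0, 15.0, r_skin=80.0, r_teg=50.0, r_conv=100.0)
error = abs(dt_adapted - dt_nominal) / dt_adapted
```

Even this toy network puts the estimation error inside the 10-60% band the record reports, since TEG power scales with the square of the temperature drop.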

  11. A computer simulation of the turbocharged turbo compounded diesel engine system: A description of the thermodynamic and heat transfer models

    NASA Technical Reports Server (NTRS)

    Assanis, D. N.; Ekchian, J. E.; Frank, R. M.; Heywood, J. B.

    1985-01-01

    A computer simulation of the turbocharged turbocompounded direct-injection diesel engine system was developed in order to study the performance characteristics of the total system as major design parameters and materials are varied. Quasi-steady flow models of the compressor, turbines, manifolds, intercooler, and ducting are coupled with a multicylinder reciprocator diesel model, where each cylinder undergoes the same thermodynamic cycle. The master cylinder model describes the reciprocator intake, compression, combustion and exhaust processes in sufficient detail to define the mass and energy transfers in each subsystem of the total engine system. Appropriate thermal loading models relate the heat flow through critical system components to material properties and design details. From this information, the simulation predicts the performance gains, and assesses the system design trade-offs which would result from the introduction of selected heat transfer reduction materials in key system components, over a range of operating conditions.

  12. Probabilities and predictions: modeling the development of scientific problem-solving skills.

    PubMed

    Stevens, Ron; Johnson, David F; Soller, Amy

    2005-01-01

    The IMMEX (Interactive Multi-Media Exercises) Web-based problem set platform enables the online delivery of complex, multimedia simulations and the rapid collection of student performance data, and it has already been used in several genetic simulations. The next step is the use of these data to understand and improve student learning in a formative manner. This article describes the development of probabilistic models of undergraduate student problem solving in molecular genetics that detailed the spectrum of strategies students used when problem solving and how the strategic approaches evolved with experience. The actions of 776 university sophomore biology majors from three molecular biology lecture courses were recorded and analyzed. Performances on each of six simulations were first grouped by artificial neural network clustering to provide individual performance measures, and sequences of these performances were then probabilistically modeled by hidden Markov modeling to provide measures of progress. The models showed that students with different initial problem-solving abilities choose different strategies. Initial and final strategies varied across different sections of the same course and were not strongly correlated with other achievement measures. In contrast to previous studies, we observed no significant gender differences. We suggest that instructor interventions based on early student performances with these simulations may assist students to recognize effective and efficient problem-solving strategies and enhance learning.
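
The record pairs neural-network clustering with hidden Markov modeling. A full HMM needs a dedicated library, but the core step of estimating strategy-transition probabilities from performance sequences can be sketched with observed strategy labels (the chain and sample sizes below are synthetic, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy strategy labels for 50 students over 6 attempts, drawn from a
# known Markov chain over 3 strategy states.
P_true = np.array([[0.70, 0.20, 0.10],
                   [0.10, 0.60, 0.30],
                   [0.05, 0.15, 0.80]])
seqs = np.empty((50, 6), dtype=int)
for s in range(50):
    seqs[s, 0] = rng.integers(3)
    for t in range(1, 6):
        seqs[s, t] = rng.choice(3, p=P_true[seqs[s, t - 1]])

# Maximum-likelihood estimate of the transition matrix from the sequences.
counts = np.zeros((3, 3))
for row in seqs:
    for a, b in zip(row[:-1], row[1:]):
        counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
```

Row i of P_hat estimates how students using strategy i tend to shift strategy on the next problem, which is the quantity the study's HMM tracks as "progress".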

  13. Integrating in silico models to enhance predictivity for developmental toxicity.

    PubMed

    Marzo, Marco; Kulkarni, Sunil; Manganaro, Alberto; Roncaglioni, Alessandra; Wu, Shengde; Barton-Maclaren, Tara S; Lester, Cathy; Benfenati, Emilio

    2016-08-31

    Application of in silico models to predict developmental toxicity has demonstrated limited success particularly when employed as a single source of information. It is acknowledged that modelling the complex outcomes related to this endpoint is a challenge; however, such models have been developed and reported in the literature. The current study explored the possibility of integrating the selected public domain models (CAESAR, SARpy and P&G model) with the selected commercial modelling suites (Multicase, Leadscope and Derek Nexus) to assess if there is an increase in overall predictive performance. The results varied according to the data sets used to assess performance which improved upon model integration relative to individual models. Moreover, because different models are based on different specific developmental toxicity effects, integration of these models increased the applicable chemical and biological spaces. It is suggested that this approach reduces uncertainty associated with in silico predictions by achieving a consensus among a battery of models. The use of tools to assess the applicability domain also improves the interpretation of the predictions. This has been verified in the case of the software VEGA, which makes freely available QSAR models with a measurement of the applicability domain. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. An eleven-year validation of a physically-based distributed dynamic ecohydrological model tRIBS+VEGGIE: Walnut Gulch Experimental Watershed

    NASA Astrophysics Data System (ADS)

    Sivandran, G.; Bisht, G.; Ivanov, V. Y.; Bras, R. L.

    2008-12-01

    A coupled, dynamic vegetation and hydrologic model, tRIBS+VEGGIE, was applied to the semiarid Walnut Gulch Experimental Watershed in Arizona. The physically-based, distributed nature of the coupled model allows for parameterization and simulation of watershed vegetation-water-energy dynamics on timescales varying from hourly to interannual. The model also allows for explicit spatial representation of processes that vary due to complex topography, such as lateral redistribution of moisture and partitioning of radiation with respect to aspect and slope. Model parameterization and forcing were conducted using readily available databases for topography, soil types, and land use cover, as well as data from a network of meteorological stations located within the Walnut Gulch watershed. In order to test the performance of the model, three sets of simulations were conducted over an 11-year period from 1997 to 2007. Two simulations focus on heavily instrumented nested watersheds within the Walnut Gulch basin: (i) Kendall watershed, which is dominated by annual grasses; and (ii) Lucky Hills watershed, which is dominated by a mixture of deciduous and evergreen shrubs. The third set of simulations covers the entire Walnut Gulch Watershed. Model validation and performance were evaluated in relation to three broad categories: (i) energy balance components, where the network of meteorological stations was used to validate the key energy fluxes; (ii) water balance components, where the network of flumes, rain gauges, and soil moisture stations installed within the watershed was used to validate the manner in which the model partitions moisture; and (iii) vegetation dynamics, where remote sensing products from MODIS were used to validate spatial and temporal vegetation dynamics. 
Model results demonstrate satisfactory spatial and temporal agreement with observed data, giving confidence that key ecohydrological processes can be adequately represented for future applications of tRIBS+VEGGIE in regional modeling of land-atmosphere interactions.

  15. Fish species of greatest conservation need in wadeable Iowa streams: current status and effectiveness of Aquatic Gap Program distribution models

    USGS Publications Warehouse

    Sindt, Anthony R.; Pierce, Clay; Quist, Michael C.

    2012-01-01

    Effective conservation of fish species of greatest conservation need (SGCN) requires an understanding of species–habitat relationships and distributional trends. Thus, modeling the distribution of fish species across large spatial scales may be a valuable tool for conservation planning. Our goals were to evaluate the status of 10 fish SGCN in wadeable Iowa streams and to test the effectiveness of Iowa Aquatic Gap Analysis Project (IAGAP) species distribution models. We sampled fish assemblages from 86 wadeable stream segments in the Mississippi River drainage of Iowa during 2009 and 2010 to provide contemporary, independent fish species presence–absence data. The frequencies of occurrence in stream segments where species were historically documented varied from 0.0% for redfin shiner Lythrurus umbratilis to 100.0% for American brook lamprey Lampetra appendix, with a mean of 53.0%, suggesting that the status of Iowa fish SGCN is highly variable. Cohen's kappa values and other model performance measures were calculated by comparing field-collected presence–absence data with IAGAP model–predicted presences and absences for 12 fish SGCN. Kappa values varied from 0.00 to 0.50, with a mean of 0.15. The models only predicted the occurrences of banded darter Etheostoma zonale, southern redbelly dace Phoxinus erythrogaster, and longnose dace Rhinichthys cataractae more accurately than would be expected by chance. Overall, the accuracy of the 12 models was low, with a mean correct classification rate of 58.3%. Poor model performance probably reflects the difficulties associated with modeling the distribution of rare species and the inability of the large-scale habitat variables used in IAGAP models to explain the variation in fish species occurrences. 
Our results highlight the importance of quantifying the confidence in species distribution model predictions with an independent data set and the need for long-term monitoring to better understand the distributional trends and habitat associations of fish SGCN.
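
Cohen's kappa, the headline metric in this record, corrects raw presence–absence agreement for the agreement expected by chance. A minimal sketch from a 2x2 confusion matrix; the counts are illustrative, not the study's:

```python
def cohens_kappa(tp, fp, fn, tn):
    """Chance-corrected agreement for binary presence-absence predictions."""
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    # Chance agreement from the marginal predicted/observed frequencies.
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (observed - expected) / (1 - expected)

# Illustrative counts for one species across 86 stream segments.
kappa = cohens_kappa(tp=30, fp=10, fn=15, tn=31)
```

Kappa of 0 means no better than chance, which is why most of the twelve IAGAP models, with a mean of 0.15, were judged ineffective.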

  16. Experimental Aerodynamic Characteristics of an Oblique Wing for the F-8 OWRA

    NASA Technical Reports Server (NTRS)

    Kennelly, Robert A., Jr.; Carmichael, Ralph L.; Smith, Stephen C.; Strong, James M.; Kroo, Ilan M.

    1999-01-01

    An experimental investigation was conducted during June-July 1987 in the NASA Ames 11-Foot Transonic Wind Tunnel to study the aerodynamic performance and stability and control characteristics of a 0.087-scale model of an F-8 airplane fitted with an oblique wing. This effort was part of the Oblique Wing Research Aircraft (OWRA) program performed in conjunction with Rockwell International. The Ames-designed, aspect ratio 10.47, tapered wing used specially designed supercritical airfoils with 0.14 thickness/chord ratio at the root and 0.12 at the 85% span location. The wing was tested at two different mounting heights above the fuselage. Performance and longitudinal stability data were obtained at sweep angles of 0°, 30°, 45°, 60°, and 65° at Mach numbers ranging from 0.30 to 1.40. Reynolds number varied from 3.1 × 10^6 to 5.2 × 10^6, based on the reference chord length. Angle of attack was varied from -5° to 18°. The performance of this wing is compared with that of another oblique wing, designed by Rockwell International, which was tested as part of the same development program. Lateral-directional stability data were obtained for a limited combination of sweep angles and Mach numbers. Sideslip angle was varied from -5° to +5°. Landing flap performance was studied, as were the effects of cruise flap deflections to achieve roll trim and tailor wing camber for various flight conditions. Roll-control authority of the flaps and ailerons was measured. A novel, deflected wing tip was evaluated for roll-control authority at high sweep angles.

  17. Investigation of Neural Strategies of Visual Search

    NASA Technical Reports Server (NTRS)

    Krauzlis, Richard J.

    2003-01-01

    The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict the changes in the subject's performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we recorded from neurons in the superior colliculus (SC) during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that, prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we developed a model of eye movement decisions based on the principle of winner-take-all. The model incorporates the idea that the overt saccade choice reflects only one of the multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.

  18. The effects of varied versus constant high-, medium-, and low-preference stimuli on performance.

    PubMed

    Wine, Byron; Wilder, David A

    2009-01-01

    The purpose of the current study was to compare the delivery of varied versus constant high-, medium-, and low-preference stimuli on performance of 2 adults on a computer-based task in an analogue employment setting. For both participants, constant delivery of the high-preference stimulus produced the greatest increases in performance over baseline; the varied presentation produced performance comparable to constant delivery of medium-preference stimuli. Results are discussed in terms of their implications for the selection and delivery of stimuli as part of employee performance-improvement programs in the field of organizational behavior management.

  19. Sensitivity of CO2 storage performance to varying rates and dynamic injectivity in the Bunter Sandstone, UK

    NASA Astrophysics Data System (ADS)

    Kolster, C.; Mac Dowell, N.; Krevor, S. C.; Agada, S.

    2016-12-01

    Carbon capture and storage (CCS) is needed to meet legally binding greenhouse gas emissions targets in the UK (ECCC 2016). Energy systems models have been key to identifying the importance of CCS, but they tend to impose few constraints on the availability and use of geologic CO2 storage reservoirs. Our aim is to develop simple models that use dynamic representations of limits on CO2 storage resources. This will allow a first-order representation of the storage reservoir for use in systems models with CCS. We use the ECLIPSE reservoir simulator and a model of the Southern North Sea Bunter Sandstone saline aquifer. We analyse reservoir performance sensitivities to scenarios of varying CO2 injection demand for a future UK low-carbon energy market. With 12 injection sites, we compare the impact of injecting at a constant 2 MtCO2/year per site against varying this rate by factors of 1.8 and 0.2 cyclically every 5 and 2.5 years over 50 years of injection. The results show a maximum difference in average reservoir pressure of 3% among the cases and a similar variation in plume migration extent. This suggests that simplified models can maintain accuracy by using average rates of injection over similar time periods. Meanwhile, by initiating injection at rates limited by pressurization at the wellhead, we find that injectivity steadily increases. As a result, dynamic capacity increases. We find that instead of injecting into sites on a need basis, we can strategically inject the CO2 into 6 of the deepest sites, increasing injectivity for the first 15 years by 13%. Our results show that injectivity is highly dependent on reservoir heterogeneity near the injection site. Injecting 1 MtCO2/year into a shallow site with low permeability and porosity instead of a deep site with high permeability and porosity reduces injectivity in the first 5 years by 52%. ECCC. 2016. Future of Carbon Capture and Storage in the UK. UK Parliament House of Commons, Energy and Climate Change Committee, London: The Stationery Office Limited.
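
The first-order equivalence invoked above (that varying rates average out) can be checked with the abstract's own numbers: a base rate of 2 MtCO2/year per site, scaled by 1.8 and 0.2 in alternating 5-year blocks, injects the same cumulative mass over 50 years as the constant case. This is a back-of-envelope sketch, not the ECLIPSE workflow.

```python
def cyclic_rates(base=2.0, hi=1.8, lo=0.2, period=5, years=50):
    """Annual injection rates (MtCO2/year) alternating between hi*base
    and lo*base every `period` years."""
    return [base * (hi if (year // period) % 2 == 0 else lo)
            for year in range(years)]

constant = [2.0] * 50
cyclic = cyclic_rates()

# Cumulative injected mass is identical, which is why a model run at the
# average rate can approximate the varying-rate pressure response to
# within the few percent reported in the abstract.
cumulative_constant = sum(constant)  # 100 MtCO2 per site over 50 years
cumulative_cyclic = sum(cyclic)
```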

  20. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.

  1. A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine.

    PubMed

    Ramakrishnan, Sridhar; Wesensten, Nancy J; Kamimori, Gary H; Moon, James E; Balkin, Thomas J; Reifman, Jaques

    2016-10-01

    Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied the performance estimated in the absence of caffeine by a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine). We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. © 2016 Associated Professional Sleep Societies, LLC.
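
The multiplicative structure described in the abstract can be sketched as follows. The one-compartment pharmacokinetics and the inverse-linear pharmacodynamic factor below are illustrative stand-ins; the published UMP uses its own validated parameterization.

```python
import math

def caffeine_conc(t_h, dose_mg, ka=2.0, ke=0.2):
    """One-compartment PK with first-order absorption (ka, 1/h) and
    elimination (ke, 1/h); units and rate constants are illustrative."""
    if t_h <= 0.0:
        return 0.0
    return dose_mg * ka / (ka - ke) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

def caffeine_factor(t_h, dose_mg, gain=0.002):
    """Dose-dependent factor in (0, 1]; it decays back toward 1 as the
    drug is eliminated, so the caffeine benefit wears off."""
    return 1.0 / (1.0 + gain * caffeine_conc(t_h, dose_mg))

def predicted_rt(baseline_rt_ms, t_h, dose_mg):
    """Caffeine multiplies the caffeine-free prediction: with mean
    reaction time as the metric, a factor below 1 means better
    performance after dosing."""
    return baseline_rt_ms * caffeine_factor(t_h, dose_mg)
```

A 600 mg dose yields a smaller factor than 200 mg at the same time point, matching the dose dependence the study exploits; at placebo (0 mg) the factor is exactly 1 and the caffeine-free prediction is recovered.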

  2. Evaluation of a bulk calorimeter and heat balance for determination of supersonic combustor efficiency

    NASA Technical Reports Server (NTRS)

    Mcclinton, C. R.; Anderson, G. Y.

    1980-01-01

    Results are presented from the shakedown and evaluation test of a bulk calorimeter. The calorimeter is designed to quench the combustion at the exit of a direct-connect, hydrogen fueled, scramjet combustor model, and to provide the measurements necessary to perform an analysis of combustion efficiency. Results indicate that the calorimeter quenches reaction, that reasonable response times are obtained, and that the calculated combustion efficiency is repeatable within ±3 percent and varies in a regular way with combustor model parameters such as injected fuel equivalence ratio.

  3. Performance analysis of Integrated Communication and Control System networks

    NASA Technical Reports Server (NTRS)

    Halevi, Y.; Ray, A.

    1990-01-01

    This paper presents statistical analysis of delays in Integrated Communication and Control System (ICCS) networks that are based on asynchronous time-division multiplexing. The models are obtained in closed form for analyzing control systems with randomly varying delays. The results of this research are applicable to ICCS design for complex dynamical processes like advanced aircraft and spacecraft, autonomous manufacturing plants, and chemical and processing plants.

  4. Practical Consequences of Item Response Theory Model Misfit in the Context of Test Equating with Mixed-Format Test Data

    PubMed Central

    Zhao, Yue; Hambleton, Ronald K.

    2017-01-01

    In item response theory (IRT) models, assessing model-data fit is an essential step in IRT calibration. While no general agreement has ever been reached on the best methods or approaches for detecting misfit, perhaps the more important observation from the research findings is that studies rarely evaluate IRT misfit by focusing on the practical consequences of misfit. This study investigated the practical consequences of IRT model misfit for equating performance and for the classification of examinees into performance categories, in a simulation study that mimics a typical large-scale statewide assessment program with mixed-format test data. The simulation study varied three factors: choice of IRT model, amount of growth/change in examinees’ abilities between two adjacent administration years, and choice of IRT scaling method. Findings indicated that the extent of significant consequences of model misfit varied with the choice of model and IRT scaling method. In comparison with the mean/sigma (MS) and Stocking and Lord characteristic curve (SL) methods, separate calibration with linking and the fixed common item parameter (FCIP) procedure was more sensitive to model misfit and more robust against various amounts of ability shift between two adjacent administrations regardless of model fit. SL was generally the least sensitive to model misfit in recovering the equating conversion, and MS was the least robust against ability shifts in recovering the equating conversion when a substantial degree of misfit was present. The key messages from the study are that practical ways are available to study model fit, and that model fit or misfit can have consequences that should be considered when choosing an IRT model. 
Not only does the study address the consequences of IRT model misfit, but it is also our hope to help researchers and practitioners find practical ways to study model fit and to investigate the validity of particular IRT models for a specified purpose, to assure that successful use of IRT models is realized, and to improve the application of IRT models to educational and psychological test data. PMID:28421011

  5. Predicting macropores in space and time by earthworms and abiotic controls

    NASA Astrophysics Data System (ADS)

    Hohenbrink, Tobias Ludwig; Schneider, Anne-Kathrin; Zangerlé, Anne; Reck, Arne; Schröder, Boris; van Schaik, Loes

    2017-04-01

    Macropore flow increases infiltration and solute leaching. The macropore density and connectivity, and thereby the hydrological effectiveness, vary in space and time due to earthworms' burrowing activity and their ability to refill their burrows in order to survive drought periods. The aim of our study was to predict the spatiotemporal variability of macropore distributions from a set of potentially controlling abiotic variables and the abundances of different earthworm species. We measured earthworm abundances and effective macropore distributions using tracer rainfall infiltration experiments in six measurement campaigns during one year at six field sites in Luxembourg. Hydrologically effective macropores were counted at three soil depths (3, 10, 30 cm) and classified into three diameter classes (<2, 2-6, >6 mm). Earthworms were sampled and identified to species level. In a generalized linear modelling framework, we related macropores to potential spatial and temporal controlling factors. Earthworm species such as Lumbricus terrestris and Aporrectodea longa, local abiotic site conditions (land use, TWI, slope), temporally varying weather conditions (temperature, humidity, precipitation) and soil moisture affected the number of effective macropores. Main controlling factors and explanatory power of the models (uncertainty and model performance) varied depending on the depth and diameter class of macropores. We present spatiotemporal predictions of macropore density by daily-resolved, one year time series of macropore numbers and maps of macropore distributions at specific dates in a small-scale catchment with 5 m resolution.
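
The generalized linear modelling step described above can be sketched as a Poisson regression of macropore counts on covariates, fit here by iteratively reweighted least squares on synthetic data. The covariates and coefficients are invented for illustration; the study's actual model structure and predictors differ in detail.

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least
    squares; returns the coefficient vector (intercept first)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 0.5)          # safe starting point
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response
        W = mu                                # Poisson working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Synthetic stand-in: effective macropore counts increasing with
# earthworm abundance and soil moisture (coefficients are made up).
rng = np.random.default_rng(42)
worms = rng.uniform(0.0, 5.0, 300)       # e.g. anecic earthworms per sample
moisture = rng.uniform(0.0, 1.0, 300)    # scaled soil moisture
counts = rng.poisson(np.exp(0.3 + 0.4 * worms + 0.8 * moisture))
beta = poisson_glm_irls(np.column_stack([worms, moisture]), counts)
```

On this synthetic data the fitted coefficients recover the generating values, which is the sanity check one would run before interpreting effect sizes on field counts.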

  6. TOD to TTP calibration

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.

    2011-05-01

    The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model, however, does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected with military personnel performing an identification task on a standard 12-target, 12-aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters, such as blur, sampling, and spatial and temporal noise, were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.
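
One plausible way to apply the empirical calibration above: if a sensor's measured TOD threshold is an angular triangle size (in mrad) at the target's contrast, identification of a target with characteristic size d is predicted at the range where the target subtends 6.3 TOD pattern sizes. Both this reading of the ratio and the example numbers are illustrative assumptions, not values from the paper.

```python
def identification_range_m(target_size_m, tod_threshold_mrad, ratio=6.3):
    """Range at which the target's angular size equals `ratio` times the
    TOD threshold pattern size, using the small-angle approximation."""
    tod_threshold_rad = tod_threshold_mrad * 1e-3
    return target_size_m / (ratio * tod_threshold_rad)

# A vehicle of 2.3 m characteristic size and a hypothetical 0.1 mrad
# TOD threshold give an identification range of roughly 3.7 km.
r = identification_range_m(2.3, 0.1)
```

A poorer sensor (larger TOD threshold) gives a shorter predicted range, as expected.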

  7. Modelling Nitrogen Oxides in Los Angeles Using a Hybrid Dispersion/Land Use Regression Model

    NASA Astrophysics Data System (ADS)

    Wilton, Darren C.

    The goal of this dissertation is to develop models capable of predicting long term annual average NOx concentrations in urban areas. Predictions from simple meteorological dispersion models and seasonal proxies for NO2 oxidation were included as covariates in a land use regression (LUR) model for NOx in Los Angeles, CA. The NOx measurements were obtained from a comprehensive measurement campaign that is part of the Multi-Ethnic Study of Atherosclerosis Air Pollution Study (MESA Air). Simple land use regression models were initially developed using a suite of GIS-derived land use variables computed from various buffer sizes (R²=0.15). Caline3, a simple steady-state Gaussian line source model, was then incorporated into the land-use regression framework. The addition of this spatio-temporally varying Caline3 covariate improved the simple LUR model predictions. The extent of improvement was much more pronounced for models based solely on the summer measurements (simple LUR: R²=0.45; Caline3/LUR: R²=0.70) than it was for models based on all seasons (R²=0.20). We then used a Lagrangian dispersion model to convert static land use covariates for population density and commercial/industrial area into spatially and temporally varying covariates. The inclusion of these covariates resulted in significant improvement in model prediction (R²=0.57). In addition to the dispersion model covariates described above, a two-week average value of daily peak-hour ozone was included as a surrogate of the oxidation of NO2 during the different sampling periods. This additional covariate further improved overall model performance for all models. The best model by 10-fold cross validation (R²=0.73) contained the Caline3 prediction, a static covariate for length of A3 roads within 50 meters, the Calpuff-adjusted covariates derived from both population density and industrial/commercial land area, and the ozone covariate. 
This model was tested against annual average NOx concentrations from an independent data set from the EPA's Air Quality System (AQS) and MESA Air fixed site monitors, and performed very well (R²=0.82).
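
The central move in this dissertation, folding a dispersion-model prediction into a land use regression as an extra covariate, can be sketched with ordinary least squares on synthetic data. The variable names, coefficients, and noise level are invented for the sketch; the study itself used 10-fold cross-validation rather than the in-sample R² shown here.

```python
import numpy as np

def r_squared(X, y):
    """In-sample OLS R^2 with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Synthetic stand-in for the study design: NOx driven partly by a static
# land use variable and partly by a dispersion-model signal the static
# covariate cannot capture (all numbers are illustrative).
rng = np.random.default_rng(7)
n = 400
land_use = rng.normal(size=n)       # e.g. road length within a buffer
dispersion = rng.normal(size=n)     # e.g. a Caline3-style prediction
nox = 0.4 * land_use + 0.8 * dispersion + rng.normal(scale=0.6, size=n)

r2_lur = r_squared(land_use[:, None], nox)
r2_hybrid = r_squared(np.column_stack([land_use, dispersion]), nox)
```

The R² jump from the simple LUR to the hybrid model mirrors, qualitatively, the improvement the abstract reports when the Caline3 covariate is added.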

  8. Spacecraft Thermal and Optical Modeling Impacts on Estimation of the GRAIL Lunar Gravity Field

    NASA Technical Reports Server (NTRS)

    Fahnestock, Eugene G.; Park, Ryan S.; Yuan, Dah-Ning; Konopliv, Alex S.

    2012-01-01

    We summarize work performed involving thermo-optical modeling of the two Gravity Recovery And Interior Laboratory (GRAIL) spacecraft. We derived several reconciled spacecraft thermo-optical models of varying detail. We used the simplest in calculating SRP acceleration, and used the most detailed to calculate acceleration due to thermal re-radiation. For the latter, we used both the output of pre-launch finite-element-based thermal simulations and downlinked temperature sensor telemetry. The estimation process to recover the lunar gravity field utilizes both a nominal thermal re-radiation acceleration history and an a priori error model derived from that plus an off-nominal history, which bounds parameter uncertainties as informed by sensitivity studies.

  9. Influence of Yield Stress Determination in Anisotropic Hardening Model on Springback Prediction in Dual-Phase Steel

    NASA Astrophysics Data System (ADS)

    Lee, J.; Bong, H. J.; Ha, J.; Choi, J.; Barlat, F.; Lee, M.-G.

    2018-05-01

    In this study, a numerical sensitivity analysis of the springback prediction was performed using advanced strain hardening models. In particular, the springback in U-draw bending for dual-phase 780 steel sheets was investigated while focusing on the effect of the initial yield stress determined from the cyclic loading tests. The anisotropic hardening models could reproduce the flow stress behavior under the non-proportional loading condition for the considered parametric cases. However, various identification schemes for determining the yield stress of the anisotropic hardening models significantly influenced the springback prediction. The deviations from the measured springback varied from 4% to 13.5% depending on the identification method.

  10. Development of robust building energy demand-side control strategy under uncertainty

    NASA Astrophysics Data System (ADS)

    Kim, Sean Hay

    The potential of carbon emission regulations applied to an individual building will encourage building owners to purchase utility-provided green power or to employ onsite renewable energy generation. As both cases are based on intermittent renewable energy sources, demand-side control is a fundamental precondition for maximizing the effectiveness of using renewable energy sources. Such control reduces peak demand and/or energy demand variability; this flattening of the demand profile eventually enhances the efficiency of an erratic supply of renewable energy. The combined operation of active thermal energy storage and passive building thermal mass has shown substantial improvement in demand-side control performance when compared to current state-of-the-art demand-side control measures. Specifically, "model-based" optimal control for this operation has the potential to significantly increase performance and bring economic advantages. However, uncertainty in field operating conditions can diminish its control effectiveness and result in poor performance. This dissertation pursues improvements of current demand-side controls under uncertainty by proposing a robust supervisory demand-side control strategy that is designed to be immune from uncertainty and to perform consistently under uncertain conditions. The uniqueness and superiority of the proposed robust demand-side control are as follows: a. It is built on fundamental studies of uncertainty and a systematic approach to uncertainty analysis. b. It reduces performance variability under varied conditions, and thus avoids worst-case scenarios. c. It reacts to critical "discrepancies" caused by unpredictable (typically scenario) uncertainty, and thus increases control efficiency. This is achieved by means of i) multi-source composition of weather forecasts, including both historical archives and online sources, and ii) adaptive multiple-model-based control (MMC) to mitigate the detrimental impacts of varying scenario uncertainties. The proposed robust demand-side control strategy demonstrates outstanding demand-side control performance under varied and unfamiliar conditions compared to existing control strategies, including deterministic optimal control. This result re-emphasizes the importance of demand-side control for buildings in the global carbon economy. It also demonstrates the risk-management capability of the proposed robust demand-side control in highly uncertain situations, which eventually attains the maximum benefit from both theoretical and practical perspectives.

  11. Structural damage detection based on stochastic subspace identification and statistical pattern recognition: II. Experimental validation under varying temperature

    NASA Astrophysics Data System (ADS)

    Lin, Y. Q.; Ren, W. X.; Fang, S. E.

    2011-11-01

    Although most vibration-based damage detection methods can be satisfactorily verified on analytical or numerical structures, many of them encounter problems when applied to real-world structures under varying environments. Damage detection methods that directly extract damage features from periodically sampled dynamic time history response measurements are desirable, but relevant research and field application verification are still lacking. In this second part of a two-part paper, the robustness and performance of the statistics-based damage index using the forward innovation model by stochastic subspace identification of a vibrating structure, proposed in the first part, have been investigated against two prestressed reinforced concrete (RC) beams tested in the laboratory and a full-scale RC arch bridge tested in the field under varying environments. Experimental verification focuses on temperature effects. It is demonstrated that the proposed statistics-based damage index is insensitive to temperature variations but sensitive to structural deterioration or state alteration. This makes it possible to detect structural damage in real-scale structures experiencing ambient excitations and varying environmental conditions.

  12. Self-organization of head-centered visual responses under ecological training conditions.

    PubMed

    Mender, Bedeho M W; Stringer, Simon M

    2014-01-01

    We have studied the development of head-centered visual responses in an unsupervised self-organizing neural network model trained under ecological conditions. Four independent spatio-temporal characteristics of the training stimuli were explored to investigate the feasibility of self-organization under more ecological conditions. First, the number of head-centered visual training locations was varied over a broad range. Model performance improved as the number of training locations approached the continuous sampling of head-centered space. Second, the model depended on periods of time during which visual targets remained stationary in head-centered space while it performed saccades around the scene, and the severity of this constraint was explored by introducing increasing levels of random eye movement and stimulus dynamics. Model performance was robust over a range of randomization. Third, the model was trained on visual scenes in which multiple simultaneous targets were always visible. Model self-organization was successful, despite the model never being exposed to a visual target in isolation. Fourth, the duration of fixations during training was made stochastic. With suitable changes to the learning rule, the model self-organized successfully. These findings suggest that the fundamental learning mechanism upon which the model rests is robust to the many forms of stimulus variability under ecological training conditions.

  13. A comparison of the lattice discrete particle method to the finite-element method and the K&C material model for simulating the static and dynamic response of concrete.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jovanca J.; Bishop, Joseph E.

    2013-11-01

    This report summarizes the work performed by the graduate student Jovanca Smith during a summer internship in the summer of 2012 with the aid of mentor Joe Bishop. The projects were a two-part endeavor that focused on the use of the numerical model called the Lattice Discrete Particle Model (LDPM). The LDPM is a discrete meso-scale model currently used at Northwestern University and the ERDC to model the heterogeneous quasi-brittle material, concrete. In the first part of the project, LDPM was compared to the Karagozian and Case Concrete Model (K&C) used in Presto, an explicit dynamics finite-element code developed at Sandia National Laboratories. In order to make this comparison, a series of quasi-static numerical experiments were performed, namely unconfined uniaxial compression tests on four varied cube specimen sizes, three-point bending notched experiments on three proportional specimen sizes, and six triaxial compression tests on a cylindrical specimen. The second part of this project focused on the application of LDPM to simulate projectile perforation on an ultra high performance concrete called CORTUF. This application illustrates the strengths of LDPM over traditional continuum models.

  14. Hybrid ray-FDTD model for the simulation of the ultrasonic inspection of CFRP parts

    NASA Astrophysics Data System (ADS)

    Jezzine, Karim; Ségur, Damien; Ecault, Romain; Dominguez, Nicolas; Calmon, Pierre

    2017-02-01

    Carbon Fiber Reinforced Polymers (CFRP) are commonly used in structural parts in the aeronautic industry, to reduce the weight of aircraft while maintaining high mechanical performances. Simulation of the ultrasonic inspections of these parts has to face the highly heterogeneous and anisotropic characteristics of these materials. To model the propagation of ultrasound in these composite structures, we propose two complementary approaches. The first one is based on a ray model predicting the propagation of the ultrasound in an anisotropic effective medium obtained from a homogenization of the material. The ray model is designed to deal with possibly curved parts and subsequent continuously varying anisotropic orientations. The second approach is based on the coupling of the ray model, and a finite difference scheme in time domain (FDTD). The ray model handles the ultrasonic propagation between the transducer and the FDTD computation zone that surrounds the composite part. In this way, the computational efficiency is preserved and the ultrasound scattering by the composite structure can be predicted. Inspections of flat or curved composite panels, as well as stiffeners can be performed. The models have been implemented in the CIVA software platform and compared to experiments. We also present an application of the simulation to the performance demonstration of the adaptive inspection technique SAUL (Surface Adaptive Ultrasound).

  15. Assessing variable rate nitrogen fertilizer strategies within an extensively instrumented field site using the MicroBasin model

    NASA Astrophysics Data System (ADS)

    Ward, N. K.; Maureira, F.; Yourek, M. A.; Brooks, E. S.; Stockle, C. O.

    2014-12-01

    The current use of synthetic nitrogen fertilizers in agriculture has many negative environmental and economic costs, necessitating improved nitrogen management. In the highly heterogeneous landscape of the Palouse region in eastern Washington and northern Idaho, crop nitrogen needs vary widely within a field. Site-specific nitrogen management is a promising strategy to reduce excess nitrogen lost to the environment while maintaining current yields by matching crop needs with inputs. This study used in-situ hydrologic, nutrient, and crop yield data from a heavily instrumented field site in the high precipitation zone of the wheat-producing Palouse region to assess the performance of the MicroBasin model. MicroBasin is a high-resolution watershed-scale ecohydrologic model with nutrient cycling and cropping algorithms based on the CropSyst model. Detailed soil mapping conducted at the site was used to parameterize the model and the model outputs were evaluated with observed measurements. The calibrated MicroBasin model was then used to evaluate the impact of various nitrogen management strategies on crop yield and nitrate losses. The strategies include uniform application as well as delineating the field into multiple zones of varying nitrogen fertilizer rates to optimize nitrogen use efficiency. We present how coupled modeling and in-situ data sets can inform agricultural management and policy to encourage improved nitrogen management.

  16. Shear Behavior Models of Steel Fiber Reinforced Concrete Beams Modifying Softened Truss Model Approaches.

    PubMed

    Hwang, Jin-Ha; Lee, Deuck Hang; Ju, Hyunjin; Kim, Kang Su; Seo, Soo-Yeon; Kang, Joo-Won

    2013-10-23

    Recognizing that steel fibers can supplement the brittle tensile characteristics of concrete, many studies have been conducted on the shear performance of steel fiber reinforced concrete (SFRC) members. However, previous studies mostly focused on shear strength and proposed empirical shear strength equations based on their experimental results. This study therefore attempts to estimate the strains and stresses in steel fibers by considering the detailed characteristics of steel fibers in SFRC members, from which a more accurate estimation of the shear behavior and strength of SFRC members is possible and the failure mode of the steel fibers can also be identified. Four shear behavior models for SFRC members are proposed, modified from the softened truss models for reinforced concrete members; they can estimate the contribution of steel fibers to the total shear strength of an SFRC member. The performance of all the proposed models was evaluated against a large number of test results. The contribution of steel fibers to shear strength varied from 5% to 50% according to their amount, and the most optimized volume fraction of steel fibers was estimated at 1%-1.5% in terms of shear performance.

  17. Green roof hydrologic performance and modeling: a review.

    PubMed

    Li, Yanling; Babcock, Roger W

    2014-01-01

    Green roofs reduce runoff from impervious surfaces in urban development. This paper reviews the technical literature on green roof hydrology. Laboratory experiments and field measurements have shown that green roofs can reduce stormwater runoff volume by 30 to 86%, reduce peak flow rate by 22 to 93% and delay the peak flow by 0 to 30 min, and thereby decrease pollution, flooding and erosion during precipitation events. However, the effectiveness can vary substantially due to design characteristics, making performance predictions difficult. Evaluation of the most recently published study findings indicates that the major factors affecting green roof hydrology are precipitation volume, precipitation dynamics, antecedent conditions, growth medium, plant species, and roof slope. This paper also evaluates the computer models commonly used to simulate hydrologic processes for green roofs, including the stormwater management model, soil water atmosphere and plant, SWMS-2D, HYDRUS, and other models that are shown to be effective for predicting precipitation response and economic benefits. The review findings indicate that green roofs are effective for reduction of runoff volume and peak flow and for delay of peak flow; however, no tool or model is available to predict the expected performance of a given anticipated system from the design parameters that directly affect green roof hydrology.

  18. Dynamic Factor Analysis Models with Time-Varying Parameters

    ERIC Educational Resources Information Center

    Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian

    2011-01-01

    Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor…

  19. Recycling production designs: the value of coordination and flexibility in aluminum recycling operations

    NASA Astrophysics Data System (ADS)

    Brommer, Tracey H.

    The growing motivation for aluminum recycling has prompted interest in recycling alternative and more challenging secondary materials. The nature of these alternative secondary materials necessitates the development of an intermediate recycling facility that can reprocess the secondary materials into a liquid product Two downstream aluminum remelters will incorporate the liquid products into their aluminum alloy production schedules. Energy and environmental benefits result from delivering the products as liquid but coordination challenges persist because of the energy cost to maintain the liquid. Further coordination challenges result from the necessity to establish a long term recycling production plan in the presence of long term downstream aluminum remelter production uncertainty and inherent variation in the daily order schedule of the downstream aluminum remelters. In this context a fundamental question arises, considering the metallurgical complexities of dross reprocessing, what is the value of operating a coordinated set of by-product reprocessing plants and remelting cast houses? A methodology is presented to calculate the optimal recycling center production parameters including 1) the number of recycled products, 2) the volume of recycled products, 3) allocation of recycled materials across recycled products, 4) allocation of recycled products across finished alloys, 4) the level of flexibility for the recycling center to operate. The methods implemented include, 1) an optimization model to describe the long term operations of the recycling center, 2) an uncertainty simulation tool, 3) a simulation optimization method, 4) a dynamic simulation tool with four embedded daily production optimization models of varying degrees of flexibility. This methodology is used to quantify the performance of several recycling center production designs of varying levels of coordination and flexibility. 
This analysis allowed the identification of the optimal recycling center production design based on maximizing liquid recycled product incorporation and minimizing cast sows. The long term production optimization model was used to evaluate the theoretical viability of the proposed two stage scrap and aluminum dross reprocessing operation including the impact of reducing coordination on model performance. Reducing the coordination between the recycling center and downstream remelters by reducing the number of recycled products from ten to five resulted in only 1.3% less secondary materials incorporated into downstream production. The dynamic simulation tool was used to evaluate the performance of the calculated recycling center production plan when resolved on a daily timeframe for varying levels of operational flexibility. The dynamic simulation revealed the optimal performance corresponded to the fixed recipe with flexible production daily optimization model formulation. Calculating recycled product characteristics using the proposed simulation optimization method increased profitability in cases of uncertain downstream remelter production and expensive aluminum dross and post-consumed secondary materials. (Copies available exclusively from MIT Libraries, libraries.mit.edu/docs - docs@mit.edu)

  20. A dynamic growth model of vegetative soya bean plants: model structure and behaviour under varying root temperature and nitrogen concentration

    NASA Technical Reports Server (NTRS)

    Lim, J. T.; Wilkerson, G. G.; Raper, C. D. Jr; Gold, H. J.

    1990-01-01

    A differential equation model of vegetative growth of the soya bean plant (Glycine max (L.) Merrill cv. Ransom') was developed to account for plant growth in a phytotron system under variation of root temperature and nitrogen concentration in nutrient solution. The model was tested by comparing model outputs with data from four different experiments. Model predictions agreed fairly well with measured plant performance over a wide range of root temperatures and over a range of nitrogen concentrations in nutrient solution between 0.5 and 10.0 mmol NO3- in the phytotron environment. Sensitivity analyses revealed that the model was most sensitive to changes in parameters relating to carbohydrate concentration in the plant and nitrogen uptake rate.

Top