Rapid performance modeling and parameter regression of geodynamic models
NASA Astrophysics Data System (ADS)
Brown, J.; Duplyakin, D.
2016-12-01
Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian Process Regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
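A minimal sketch of the active-learning loop described above, assuming scikit-learn's Gaussian process regressor and a simple maximum-posterior-variance acquisition rule; the authors' actual acquisition criterion is not specified here, and run_experiment is a hypothetical stand-in for launching and timing a solver run:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def run_experiment(x):
    # Hypothetical performance measurement (e.g., runtime) as a
    # function of two normalized machine/science parameters.
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 1, size=(200, 2))  # parameter space to map
X = candidates[:5].copy()                      # small initial design
y = np.array([run_experiment(x) for x in X])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True)
for _ in range(20):                            # active-learning loop
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(std)]           # most uncertain candidate
    X = np.vstack([X, nxt])
    y = np.append(y, run_experiment(nxt))
```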
Music performance and the perception of key.
Thompson, W F; Cuddy, L L
1997-02-01
The effect of music performance on perceived key movement was examined. Listeners judged key movement in sequences presented without performance expression (mechanical) in Experiment 1 and with performance expression in Experiment 2. Modulation distance varied. Judgments corresponded to predictions based on the cycle of fifths and toroidal models of key relatedness, with the highest correspondence for performed versions with the toroidal model. In Experiment 3, listeners compared mechanical sequences with either performed sequences or modifications of performed sequences. Modifications preserved expressive differences between chords, but not between voices. Predictions from Experiments 1 and 2 held only for performed sequences, suggesting that differences between voices are informative of key movement. Experiment 4 confirmed that modifications did not disrupt musicality. Analyses of performances further suggested a link between performance expression and key.
Evaluation of annual, global seismicity forecasts, including ensemble models
NASA Astrophysics Data System (ADS)
Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner
2013-04-01
In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010, and 2011; each model forecast the number of earthquakes above magnitude 6 in 1x1 degree cells that span the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages over time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature for characterizing the models' different forecasting performances; iii) the interpretation of consistency tests may be misleading, because some good models may be rejected while trivial models pass; iv) proper ensemble modeling seems to be a valuable procedure for obtaining the best-performing model for practical purposes.
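For context, CSEP-style likelihood tests conventionally treat the per-cell earthquake counts as independent Poisson variables; the sketch below shows that consistency metric and one illustrative way to weight an ensemble by past likelihood (not necessarily the authors' weighting scheme):

```python
import numpy as np
from scipy.stats import poisson

def joint_log_likelihood(forecast_rates, observed_counts):
    # Joint Poisson log-likelihood of observed per-cell counts given a
    # forecast of expected counts per cell.
    return poisson.logpmf(observed_counts, forecast_rates).sum()

def likelihood_weighted_ensemble(forecasts, observed_counts):
    # Weight each model by its (exponentiated, normalized) log-likelihood
    # on past observations, then combine the per-cell rate forecasts.
    ll = np.array([joint_log_likelihood(f, observed_counts) for f in forecasts])
    w = np.exp(ll - ll.max())
    w /= w.sum()
    return np.tensordot(w, np.asarray(forecasts, dtype=float), axes=1)
```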
A model of clutter for complex, multivariate geospatial displays.
Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L
2009-02-01
A novel model for measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data with model predictions. The first experiment yielded a .76 correlation between ratings and C3; the second yielded a .86 correlation, significantly better than the results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (the primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.
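Because the abstract identifies color density and saliency as the primary C3 components, a toy color-clustering score can convey the flavor of the approach; the real C3 model's clustering algorithm, saliency term, and two free parameters are not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

def toy_color_clutter(image_rgb, k=16):
    # Cluster pixel colors and use within-cluster dispersion as a crude
    # proxy for color density; this is an illustration, not the C3 metric.
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    return km.inertia_ / pixels.shape[0]
```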
Validating models of target acquisition performance in the dismounted soldier context
NASA Astrophysics Data System (ADS)
Glaholt, Mackenzie G.; Wong, Rachel K.; Hollands, Justin G.
2018-04-01
The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including: field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming; however, data from these experiments are required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: the Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models' designs and parameters and the characteristics of the behavioral paradigm.
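For reference, TTP-based models map the TTP value V delivered at the target to task probability through a target transfer probability function; the commonly published form is sketched below, with V50 the value giving 50% performance (constants are the standard literature values, not ones verified against this paper):

```python
def ttpf(v, v50):
    # Target transfer probability function: P = (V/V50)^E / (1 + (V/V50)^E),
    # with E = 1.51 + 0.24 * (V/V50).
    e = 1.51 + 0.24 * (v / v50)
    return (v / v50) ** e / (1.0 + (v / v50) ** e)
```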
Development of an Implantable WBAN Path-Loss Model for Capsule Endoscopy
NASA Astrophysics Data System (ADS)
Aoyagi, Takahiro; Takizawa, Kenichi; Kobayashi, Takehiko; Takada, Jun-Ichi; Hamaguchi, Kiyoshi; Kohno, Ryuji
An implantable WBAN path-loss model for capsule endoscopy, which is used for examining the digestive organs, is developed by conducting simulations and experiments. First, we performed FDTD simulations of implant WBAN propagation using a numerical human model. Second, we performed FDTD simulations of a vessel that represents the human body. Third, we performed experiments using a vessel of the same dimensions as that used in the simulations. On the basis of the results of these simulations and experiments, we proposed the gradient and intercept parameters of a simple in-body path-loss propagation model.
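The "gradient and intercept" parameterization suggests the standard log-distance form PL(d) = PL0 + 10n log10(d/d0); a sketch with placeholder arguments (the paper's fitted coefficients are not reproduced here):

```python
import numpy as np

def path_loss_db(d_mm, intercept_db, gradient_db_per_decade, d0_mm=10.0):
    # Simple in-body log-distance path loss; gradient_db_per_decade
    # corresponds to 10n in the usual notation.
    return intercept_db + gradient_db_per_decade * np.log10(d_mm / d0_mm)
```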
Automatic reactor model synthesis with genetic programming.
Dürrenmatt, David J; Gujer, Willi
2012-01-01
Successful modeling of wastewater treatment plant (WWTP) processes requires an accurate description of the plant hydraulics. Common methods such as tracer experiments are difficult and costly and thus have limited applicability in practice; engineers are often forced to rely on their experience alone. An implementation of grammar-based genetic programming with an encoding that represents hydraulic reactor models as program trees should fill this gap: the encoding enables the algorithm to construct arbitrary reactor models compatible with common WWTP modeling software by linking building blocks such as continuous stirred-tank reactors. Discharge measurements and influent and effluent concentrations are the only required inputs. As shown in a synthetic example, the technique can be used to identify a set of reactor models that perform equally well. Instead of being guided by experience alone, the engineer can now choose the most suitable model from the set. In a second example, temperature measurements at the influent and effluent of a primary clarifier are used to generate a reactor model. A virtual tracer experiment performed on the reactor model shows good agreement with a tracer experiment performed on-site.
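As an illustration of the building blocks being linked, a minimal sketch of a chain of continuous stirred-tank reactors advanced with explicit Euler steps (the paper's grammar, encoding, and simulator details are not reproduced):

```python
import numpy as np

def cstr_cascade_step(c, c_in, q, volumes, dt):
    # One explicit-Euler step for a series of CSTRs: each tank relaxes
    # toward its upstream concentration at rate Q/V.
    c = c.copy()
    upstream = c_in
    for i, v in enumerate(volumes):
        c[i] += dt * q / v * (upstream - c[i])
        upstream = c[i]
    return c

# Usage: three tanks of 50, 100, and 50 m^3 with 10 m^3/h inflow
c = np.zeros(3)
for _ in range(1000):
    c = cstr_cascade_step(c, c_in=1.0, q=10.0, volumes=[50, 100, 50], dt=0.1)
```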
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience and developing computational models to predict operator performance in complex situations offer potential methods to address this challenge. Concerns with modeling operator performance are that models need to be realistic and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience before developing models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
Hypnosis in sport: an Isomorphic Model.
Robazza, C; Bortoli, L
1994-10-01
Hypnosis in sport can be applied according to an Isomorphic Model. Active-alert hypnosis is induced before or during practice whereas traditional hypnosis is induced after practice to establish connections between the two experiences. The fundamental goals are to (a) develop mental skills important to both motor and hypnotic performance, (b) supply a wide range of motor and hypnotic bodily experiences important to performance, and (c) induce alert hypnosis before or during performance. The model is based on the assumption that hypnosis and motor performance share common skills modifiable through training. Similarities between hypnosis and peak performance in the model are also considered. Some predictions are important from theoretical and practical points of view.
U.S. perspective on technology demonstration experiments for adaptive structures
NASA Technical Reports Server (NTRS)
Aswani, Mohan; Wada, Ben K.; Garba, John A.
1991-01-01
Evaluation of design concepts for adaptive structures is being performed in support of several focused research programs. These include programs such as Precision Segmented Reflector (PSR), Control Structure Interaction (CSI), and the Advanced Space Structures Technology Research Experiment (ASTREX). Although not specifically designed for adaptive structure technology validation, relevant experiments can be performed using the Passive and Active Control of Space Structures (PACOSS) testbed, the Space Integrated Controls Experiment (SPICE), the CSI Evolutionary Model (CEM), and the Dynamic Scale Model Test (DSMT) Hybrid Scale. In addition to the ground test experiments, several space flight experiments have been planned, including a reduced gravity experiment aboard the KC-135 aircraft, shuttle middeck experiments, and the Inexpensive Flight Experiment (INFLEX).
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize the uncertainty of the physics-based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners that recommend the number and types of test samples required to yield a statistically significant result.
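A hedged example of the kind of power analysis such guidelines rest on, here for comparing two proportion-correct scores with statsmodels; the target proportions, power, and alpha are illustrative planning assumptions:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical case: detect a 0.70 vs 0.80 difference in probability of
# identification with 80% power at alpha = 0.05.
effect = proportion_effectsize(0.80, 0.70)
n_per_group = NormalIndPower().solve_power(effect_size=effect,
                                           alpha=0.05, power=0.8)
print(round(n_per_group))  # observers needed per condition
```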
Psychological distance reduces literal imitation: Evidence from an imitation-learning paradigm.
Hansen, Jochim; Alves, Hans; Trope, Yaacov
2016-03-01
The present experiments tested the hypothesis that observers engage in more literal imitation of a model when the model is psychologically near to (vs. distant from) the observer. Participants learned to fold a dog out of towels by watching a model performing this task. Temporal (Experiment 1) and spatial (Experiment 2) distance from the model were manipulated. As predicted, participants copied more of the model's specific movements when the model was near (vs. distant). Experiment 3 replicated this finding with a paper-folding task, suggesting that distance from a model also affects imitation of less complex tasks. Perceived task difficulty, motivation, and the quality of the end product were not affected by distance. We interpret the findings as reflecting different levels of construal of the model's performance: When the model is psychologically distant, social learners focus more on the model's goal and devise their own means for achieving the goal, and as a result show less literal imitation of the model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.
2017-11-29
Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable-region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.
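A sketch of what a deviation-based measure with an observation-derived benchmark can look like; the paper's specific measures and benchmark definitions may differ:

```python
import numpy as np

def rms_rel_deviation(model, obs):
    # RMS relative deviation of modeled versus observed values.
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.sqrt(np.mean(((model - obs) / obs) ** 2))

def benchmark(obs):
    # Benchmark from the observations' own statistics: the score a
    # trivial model that always predicts the observed mean would get.
    obs = np.asarray(obs, float)
    return rms_rel_deviation(np.full_like(obs, obs.mean()), obs)

# A model is 'good in absolute terms' on this measure if it beats the
# trivial benchmark: rms_rel_deviation(sim, obs) < benchmark(obs).
```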
NASA Astrophysics Data System (ADS)
Solman, Silvina A.; Pessacg, Natalia L.
2012-01-01
In this study the capability of the MM5 model in simulating the main mode of intraseasonal variability during the warm season over South America is evaluated through a series of sensitivity experiments. Several 3-month simulations nested within ERA40 reanalysis were carried out using different cumulus and planetary boundary layer schemes, in an attempt to define the optimal combination of physical parameterizations for simulating alternating wet and dry conditions over the La Plata Basin (LPB) and South Atlantic Convergence Zone regions, respectively. The results were compared with different observational datasets, and model evaluation was performed taking into account the spatial distribution of monthly precipitation and daily statistics of precipitation over the target regions. Though every experiment was able to capture the contrasting behavior of precipitation during the simulated period, precipitation was largely underestimated, particularly over the LPB region, mainly due to a misrepresentation of the moisture flux convergence. Experiments using grid nudging of the winds above the planetary boundary layer performed better than those in which no constraints were imposed on the regional circulation within the model domain. Overall, no single experiment performed best over the entire domain and during the two contrasting months: the best-performing configuration depends on the area of interest, with the Grell (Kain-Fritsch) cumulus scheme combined with the MRF planetary boundary layer scheme more adequate for subtropical (tropical) latitudes. The ensemble of the sensitivity experiments performed better than any individual experiment.
Modeling to predict pilot performance during CDTI-based in-trail following experiments
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1984-01-01
A mathematical model was developed of the flight system in which the pilot uses a cockpit display of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. Both in-trail and vertical dynamics were included. The nominal spacing was based on one of three criteria (Constant Time Predictor, Constant Time Delay, or Acceleration Cue). This model was used to digitally simulate the dynamics of a string of multiple following aircraft, including the response to initial position errors. The simulation was used to predict the outcome of a series of in-trail following experiments, including pilot performance in maintaining correct longitudinal spacing and vertical position. The experiments were run in the NASA Ames Research Center multi-cab cockpit simulator facility. The experimental results were then used to evaluate the model and its prediction accuracy, and model parameters were adjusted so that modeled performance matched the experimental results. Lessons learned in this modeling and prediction study are summarized.
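A toy version of the constant-time-delay spacing criterion, one of the three listed above: each follower steers toward the point its lead occupied TAU seconds earlier. The gains and the control law itself are illustrative, not those identified in the study:

```python
DT, TAU, K = 0.1, 60.0, 0.05  # time step (s), spacing delay (s), assumed gain

def follower_step(x, v, lead_x, lead_v):
    # Steer toward where the lead was TAU seconds ago (approximated by
    # lead_x - lead_v * TAU) while partially matching the lead's speed.
    target = lead_x - lead_v * TAU
    a = K * (target - x) + 0.5 * K * (lead_v - v)
    return x + v * DT, v + a * DT
```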
NASA Astrophysics Data System (ADS)
De Kauwe, M. G.; Medlyn, B.; Walker, A.; Zaehle, S.; Pendall, E.; Norby, R. J.
2017-12-01
Multifactor experiments are often advocated as important for advancing models, yet to date such models have only been tested against single-factor experiments. We applied 10 models to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions: comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, in nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.
The effect of air entrapment on the performance of squeeze film dampers: Experiments and analysis
NASA Astrophysics Data System (ADS)
Diaz Briceno, Sergio Enrique
Squeeze film dampers (SFDs) are an effective means of introducing the required damping into rotor-bearing systems. They are a standard application in jet engines and are commonly used in industrial compressors. Yet lack of understanding of their operation has confined the design of SFDs to a costly trial-and-error process based on prior experience. The main factor limiting the success of analytical models for the prediction of SFD performance lies in the modeling of dynamic film rupture. Usually, the cavitation models developed for journal bearings are applied to SFDs. Yet the characteristic motion of the SFD results in the entrapment of air into the oil film, producing a bubbly mixture that cannot be represented by these models. In this work, an extensive experimental study establishes qualitatively and, for the first time, quantitatively the differences between operation with vapor cavitation and with air entrainment. The experiments show that most operating conditions lead to air entrainment and demonstrate the paramount effect it has on the performance of SFDs, evidencing the limitations of currently available models. Further experiments address the operation of SFDs with controlled bubbly mixtures. These experiments bolster the possibility of modeling air entrapment by representing the lubricant as a homogeneous mixture of air and oil, and provide a reliable database for benchmarking such a model. An analytical model is developed based on a homogeneous mixture assumption in which the bubbles are described by the Rayleigh-Plesset equation. Good agreement is obtained between this model and the measurements performed in the SFD operating with controlled mixtures. A complementary analytical model is devised to estimate the amount of air entrained from the balance of axial flows in the film. A combination of the analytical models for prediction of the air volume fraction and of the hydrodynamic pressures renders promising results for prediction of the performance of SFDs with freely entrained air. The results of this work are of immediate engineering applicability. Furthermore, they represent a firm step toward advancing the understanding of the effects of air entrapment on the performance of SFDs.
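Since the model describes the bubbles with the Rayleigh-Plesset equation, a single-bubble sketch shows the ingredient being coupled to the film pressure; fluid properties and forcing below are generic placeholders, not the damper's operating values:

```python
import numpy as np
from scipy.integrate import solve_ivp

RHO, S, MU, P_V = 870.0, 0.03, 0.05, 2.0e3  # oil density, surface tension,
                                            # viscosity, vapor pressure (SI)

def rayleigh_plesset(t, y, p_inf, r0, p_g0):
    # R*R'' + 1.5*R'^2 = (p_bubble - p_inf(t))/rho - 4*mu*R'/(rho*R) - 2*S/(rho*R)
    r, rdot = y
    p_gas = p_g0 * (r0 / r) ** 3  # isothermal gas content
    rhs = (p_gas + P_V - p_inf(t) - 2 * S / r - 4 * MU * rdot / r) / RHO
    return [rdot, (rhs - 1.5 * rdot ** 2) / r]

sol = solve_ivp(rayleigh_plesset, [0.0, 1e-3], [1e-4, 0.0],
                args=(lambda t: 1e5 + 5e4 * np.sin(2e3 * t), 1e-4, 1e5),
                rtol=1e-8)
```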
NASA Technical Reports Server (NTRS)
Haywood, A. M.; Dowsett, H. J.; Robinson, M. M.; Stoll, D. K.; Dolan, A. M.; Lunt, D. J.; Otto-Bliesner, B.; Chandler, M. A.
2011-01-01
The Palaeoclimate Modelling Intercomparison Project has expanded to include a model intercomparison for the mid-Pliocene warm period (3.29 to 2.97 million yr ago). This project is referred to as PlioMIP (the Pliocene Model Intercomparison Project). Two experiments have been agreed upon and together compose the initial phase of PlioMIP. The first (Experiment 1) is being performed with atmosphere-only climate models. The second (Experiment 2) utilizes fully coupled ocean-atmosphere climate models. Following on from the publication of the experimental design and boundary conditions for Experiment 1 in Geoscientific Model Development, this paper provides the necessary description of differences and/or additions to the experimental design for Experiment 2.
Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations.
Kooloos, Jan G M; Schepens-Franke, Annelieke N; Bergman, Esther M; Donders, Rogier A R T; Vorstenbosch, Marc A T M
2014-01-01
Clay modeling is increasingly used as an alternative teaching method to dissection. The haptic experience during clay modeling is supposed to correspond to the learning effect of manipulating tissues and organs during dissection-room exercises. We questioned this assumption in two pretest-post-test experiments. In these experiments, the learning effects of clay modeling were compared to either live observations (Experiment I) or video observations (Experiment II) of the clay-modeling exercise. The effects of learning were measured with multiple-choice questions, extended matching questions, and recognition of structures on illustrations of cross-sections. Analysis of covariance with pretest scores as the covariate was used to analyze the results. Experiment I showed a significantly higher post-test score for the observers, whereas Experiment II showed a significantly higher post-test score for the clay modelers. This study shows that (1) students who perform clay-modeling exercises show less gain in anatomical knowledge than students who attentively observe the same exercise being carried out and (2) performing a clay-modeling exercise yields better anatomical knowledge gain than studying a video of the recorded exercise. The most important learning effect seems to be engagement in the exercise, focusing attention and stimulating time on task.
Hanley, J Richard; Dell, Gary S; Kay, Janice; Baron, Rachel
2004-03-01
In this paper, we attempt to simulate the picture naming and auditory repetition performance of two patients reported by Hanley, Kay, and Edwards (2002), who were matched for picture naming score but who differed significantly in their ability to repeat familiar words. In Experiment 1, we demonstrate that the model of naming and repetition put forward by Foygel and Dell (2000) is better able to accommodate this pattern of performance than the model put forward by Dell, Schwartz, Martin, Saffran, and Gagnon (1997). Nevertheless, Foygel and Dell's model underpredicted the repetition performance of both patients. In Experiment 2, we attempt to simulate their performance using a new dual route model of repetition in which Foygel and Dell's model is augmented by an additional nonlexical repetition pathway. The new model provided a more accurate fit to the real-word repetition performance of both patients. It is argued that the results provide support for dual route models of auditory repetition.
Mirror neuron system and observational learning: behavioral and neurophysiological evidence.
Lago-Rodriguez, Angel; Lopez-Alonso, Virginia; Fernández-del-Olmo, Miguel
2013-07-01
Three experiments were performed to study observational learning using behavioral, perceptual, and neurophysiological data. Experiment 1 investigated whether observing an execution model during physical practice of a transitive task that presented only one execution strategy led to performance improvements compared with physical practice alone. Experiment 2 investigated whether performing an observational learning protocol improves subjects' action perception. In Experiment 3 we evaluated whether the type of practice performed determined the activation of the mirror neuron system during action observation. Results showed that, compared with physical practice alone, observing an execution model during a task that presented only one execution strategy did not provide behavioral benefits. However, an observational learning protocol allowed subjects to predict more precisely the outcome of the learned task. Finally, interspersing observation of an execution model with physical practice resulted in changes of primary motor cortex activity during observation of the motor pattern previously practiced, whereas modulations in the connectivity between primary and non-primary motor areas (PMv-M1; PPC-M1) were not affected by the practice protocol performed by the observer.
Development of an algorithm to model an aircraft equipped with a generic CDTI display
NASA Technical Reports Server (NTRS)
Driscoll, W. C.; Houck, J. A.
1986-01-01
A model of human pilot performance of a tracking task using a generic Cockpit Display of Traffic Information (CDTI) display is developed from experimental data. The tracking task is to use CDTI in tracking a leading aircraft at a nominal separation of three nautical miles over a prescribed trajectory in space. The analysis of the data resulting from a factorial design of experiments reveals that the tracking task performance depends on the pilot and his experience at performing the task. Performance was not strongly affected by the type of control system used (velocity vector control wheel steering versus 3D automatic flight path guidance and control). The model that is developed and verified results in state trajectories whose difference from the experimental state trajectories is small compared to the variation due to the pilot and experience factors.
The Audience Performs: A Phenomenological Model for Criticism of Oral Interpretation Performance.
ERIC Educational Resources Information Center
Langellier, Kristin M.
Richard Lanigan's phenomenology of human communication is applicable to the development of a model for critiquing oral interpretation performance. This phenomenological model takes conscious experience of the relationship of a person and the lived-world as its data base, and assumes a phenomenology of performance which creates text in the triadic…
NASA Astrophysics Data System (ADS)
Volk, Brent L.; Lagoudas, Dimitris C.; Maitland, Duncan J.
2011-09-01
In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5-4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data.
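A heavily simplified, illustrative take on a frozen-volume-fraction model of this kind, with an assumed sigmoidal phi(T); the actual Chen and Lagoudas constitutive equations are more involved than this:

```python
import numpy as np

def frozen_fraction(T, Tg, width=4.0):
    # Assumed sigmoidal frozen volume fraction phi(T): ~1 well below the
    # transition temperature Tg, ~0 well above it.
    return 1.0 / (1.0 + np.exp((T - Tg) / width))

def free_recovery_strain(T, eps_stored, alpha, T_ref, Tg):
    # Toy free (zero-load) recovery: stored strain is released as phi(T)
    # falls on heating, plus a thermal expansion contribution.
    return frozen_fraction(T, Tg) * eps_stored + alpha * (T - T_ref)
```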
Stratospheric General Circulation with Chemistry Model (SGCCM)
NASA Technical Reports Server (NTRS)
Rood, Richard B.; Douglass, Anne R.; Geller, Marvin A.; Kaye, Jack A.; Nielsen, J. Eric; Rosenfield, Joan E.; Stolarski, Richard S.
1990-01-01
In the past two years, constituent transport and chemistry experiments have been performed using both simple single-constituent models and more complex reservoir-species models. Winds for these experiments have been taken from the data assimilation effort, the Stratospheric Data Analysis System (STRATAN).
Becker, Suzanna; Lim, Jean
2003-08-15
Several decades of research into the function of the frontal lobes in brain-damaged patients, and more recently in intact individuals using functional brain imaging, have delineated the complex executive functions of the frontal cortex. And yet, the mechanisms by which the brain achieves these functions remain poorly understood. Here, we present a computational model of the role of the prefrontal cortex (PFC) in controlled memory use that may help to shed light on the mechanisms underlying one aspect of frontal control: the development and deployment of recall strategies. The model accounts for interactions between the PFC and medial temporal lobe in strategic memory use. The PFC self-organizes its own mnemonic codes using internally derived performance measures. These mnemonic codes serve as retrieval cues by biasing retrieval in the medial temporal lobe memory system. We present data from three simulation experiments that demonstrate strategic encoding and retrieval in the free recall of categorized lists of words. Experiment 1 compares the performance of the model with two control networks to evaluate the contribution of various components of the model. Experiment 2 compares the performance of normal and frontally lesioned models to data from several studies of frontally intact and frontally lesioned individuals, as well as normal, healthy individuals under conditions of divided attention. Experiment 3 compares the model's performance on the recall of blocked and unblocked categorized lists of words to data from Stuss et al. (1994) for individuals with control and frontal lobe lesions. Overall, our model captures a number of aspects of human performance on free recall tasks: an increase in total words recalled and in semantic clustering scores across trials, superiority on blocked lists of related items compared to unblocked lists, and similar patterns of performance across trials in the normal and frontally lesioned models, with poorer overall performance of the lesioned models on all measures. The model also has a number of shortcomings, in light of which we suggest extensions that would enable more sophisticated forms of strategic control.
Mechanical Behavior of a Low-Cost Ti-6Al-4V Alloy
NASA Astrophysics Data System (ADS)
Casem, D. T.; Weerasooriya, T.; Walter, T. R.
2018-01-01
Mechanical compression tests were performed on an economical Ti-6Al-4V alloy over a range of strain rates and temperatures. Low-rate experiments (0.001-0.1/s) were performed with a servo-hydraulic load frame, and high-rate experiments (1000-80,000/s) were performed with the Kolsky bar (split-Hopkinson pressure bar). Emphasis is placed on the large-strain, high-rate, and high-temperature behavior of the material in an effort to develop a predictive capability for adiabatic shear bands. Quasi-isothermal experiments were performed with the Kolsky bar to determine the large-strain response at elevated rates, and bars with small diameters (1.59 mm and 794 µm, instrumented optically) were used to study the response at the highest strain rates. Experiments were also conducted at temperatures ranging from 81 to 673 K. Two constitutive models are used to represent the data. The first is the Zerilli-Armstrong recovery-strain model and the second is a modified Johnson-Cook model that uses the recovery-strain term from the Zerilli-Armstrong model. In both cases, the recovery-strain feature is critical for capturing the instability that precedes localization.
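For reference, the baseline Johnson-Cook flow stress (before the recovery-strain modification the authors introduce) has the standard form sketched below; the constants are placeholders, not the calibrated values for this alloy:

```python
import numpy as np

def johnson_cook_stress(eps, eps_dot, T, A, B, n, C, m,
                        eps_dot0=1.0, T_ref=298.0, T_melt=1933.0):
    # sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m),
    # with homologous temperature T* = (T - T_ref) / (T_melt - T_ref).
    t_star = np.clip((T - T_ref) / (T_melt - T_ref), 0.0, 1.0)
    return ((A + B * eps ** n)
            * (1.0 + C * np.log(eps_dot / eps_dot0))
            * (1.0 - t_star ** m))
```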
Music experience influences laparoscopic skills performance.
Boyd, Tanner; Jung, Inkyung; Van Sickle, Kent; Schwesinger, Wayne; Michalek, Joel; Bingener, Juliane
2008-01-01
Music education affects the mathematical and visuo-spatial skills of school-age children. Visuo-spatial abilities have a significant effect on laparoscopic suturing performance. We hypothesize that prior music experience influences the performance of laparoscopic suturing tasks. Thirty novices observed a laparoscopic suturing task video. Each performed 3 timed suturing task trials. Demographics were recorded. A repeated measures linear mixed model was used to examine the effects of prior music experience on suturing task time. Twelve women and 18 men completed the tasks. When adjusted for video game experience, participants who currently played an instrument performed significantly faster than those who did not (P<0.001). The model showed a significant sex by instrument interaction. Men who had never played an instrument or were currently playing an instrument performed better than women in the same group (P=0.002 and P<0.001). There was no sex difference in the performance of participants who had played an instrument in the past (P=0.29). This study attempted to investigate the effect of music experience on the laparoscopic suturing abilities of surgical novices. The visuo-spatial abilities used in laparoscopic suturing may be enhanced in those involved in playing an instrument.
Virtual reality simulators: valuable surgical skills trainers or video games?
Willis, Ross E; Gomez, Pedro Pablo; Ivatury, Srinivas J; Mitra, Hari S; Van Sickle, Kent R
2014-01-01
Virtual reality (VR) and physical model (PM) simulators differ in terms of whether the trainee is manipulating actual 3-dimensional objects (PM) or computer-generated 3-dimensional objects (VR). Much like video games (VG), VR simulators utilize computer-generated graphics. These differences may have profound effects on the utility of VR and PM training platforms. In this study, we aimed to determine whether a relationship exists between VR, PM, and VG platforms. VR and PM simulators for laparoscopic camera navigation (LCN; Experiment 1) and flexible endoscopy (FE; Experiment 2) were used in this study. In Experiment 1, 20 laparoscopic novices played VG and performed 0° and 30° LCN exercises on VR and PM simulators. In Experiment 2, 20 FE novices played VG and performed colonoscopy exercises on VR and PM simulators. In both experiments, VG performance was correlated with VR performance but not with PM performance. Performance on VR simulators did not correlate with performance on the respective PM models. VR environments may be more like VG than previously thought.
Error Detection Processes during Observational Learning
ERIC Educational Resources Information Center
Badets, Arnaud; Blandin, Yannick; Wright, David L.; Shea, Charles H.
2006-01-01
The purpose of this experiment was to determine whether a faded knowledge of results (KR) frequency during observation of a model's performance enhanced error detection capabilities. During the observation phase, participants observed a model performing a timing task and received KR about the model's performance on each trial or on one of two…
Modeling the effects of contrast enhancement on target acquisition performance
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Fanning, Jonathan D.
2008-04-01
Contrast enhancement and dynamic range compression are currently used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content, better utilizing the available gray levels either globally or locally. This paper assesses the range-performance effects of various contrast enhancement algorithms for target identification with well-contrasted vehicles. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight-target set using an uncooled LWIR camera. The experiments compare the identification performance of observers viewing linearly scaled images and images processed with various contrast enhancement algorithms. Contrast enhancement is modeled in the U.S. Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of feature saturation or enhancement. To account for the equivalent blur associated with each contrast enhancement algorithm, an additional effective MTF was calculated and added to the model. The measured results are compared with the predicted performance based on the target task difficulty metric used in NVThermIP.
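A sketch of the two modeling ingredients named above: contrast enhancement entering as a scaling of the scene contrast temperature, and the algorithm's equivalent blur entering as an extra MTF cascaded with the sensor's. The Gaussian blur form and the scalar gain are assumptions, not NVThermIP internals:

```python
import numpy as np

def enhanced_contrast_temp(delta_T, gain):
    # Model contrast enhancement as a gain on scene contrast temperature.
    return gain * delta_T

def total_mtf(freqs, sensor_mtf, ce_blur_sigma):
    # Cascade the sensor MTF with a Gaussian 'equivalent blur' MTF that
    # stands in for the smoothing of the contrast-enhancement algorithm.
    return sensor_mtf * np.exp(-2.0 * (np.pi * ce_blur_sigma * freqs) ** 2)
```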
Reading While Listening: A Linear Model of Selective Attention
ERIC Educational Resources Information Center
Martin, Maryanne
1977-01-01
Two experiments are described. One measured performance of subjects on pairs of concurrent verbal tasks, monitoring sentences for certain items while reading. Secondary task performance combined with a primary task is proportional to its performance in isolation. The second experiment checked certain results of the first. (CHK)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goda, Joetta Marie; Miller, Thomas; Grogan, Brandon
2016-10-26
This document contains figures that will be included in an ORNL final report that details computational efforts to model an irradiation experiment performed on the Godiva IV critical assembly. This experiment was a collaboration between LANL and ORNL.
Robust optical flow using adaptive Lorentzian filter for image reconstruction under noisy condition
NASA Astrophysics Data System (ADS)
Kesrarat, Darun; Patanavijit, Vorapoj
2017-02-01
In optical flow for motion estimation, the reliability of the resulting motion vectors (MVs) is an important issue, and noisy conditions can make the output of optical flow algorithms unreliable. We find that many classical optical flow algorithms perform better under noisy conditions when combined with a modern optimization model. This paper introduces robust optical flow models that apply the adaptive Lorentzian norm influence function to simple spatial-temporal optical flow algorithms. Experiments on our proposed models confirm better noise tolerance in the optical flow's MVs under noisy conditions when the models are applied over simple spatial-temporal optical flow algorithms as a filtering model in a simple frame-to-frame correlation technique. We illustrate the performance of our models on several typical sequences with different foreground and background movement speeds, where the test sequences are contaminated by additive white Gaussian noise (AWGN) at different noise levels (dB). The results, evaluated by the peak signal-to-noise ratio (PSNR) of the reconstructed images, show the high noise tolerance of the proposed models.
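The Lorentzian robust norm and its influence function are standard (Black-Anandan) forms; a short sketch of what 'down-weighting outlier residuals' means in this context (the adaptive selection of sigma used by the authors is not reproduced):

```python
import numpy as np

def lorentzian_rho(x, sigma):
    # Lorentzian error norm: rho(x) = log(1 + 0.5 * (x / sigma)^2).
    return np.log1p(0.5 * (x / sigma) ** 2)

def lorentzian_psi(x, sigma):
    # Influence function psi = d rho / dx = 2x / (2*sigma^2 + x^2);
    # large residuals get small weight, which is the source of robustness.
    return 2.0 * x / (2.0 * sigma ** 2 + x ** 2)
```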
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDeavitt, Sean; Shao, Lin; Tsvetkov, Pavel
2014-04-07
Advanced fast reactor systems being developed under the DOE's Advanced Fuel Cycle Initiative are designed to destroy TRU isotopes generated in existing and future nuclear energy systems. Over the past 40 years, multiple experiments and demonstrations have been completed using U-Zr, U-Pu-Zr, U-Mo, and other metal alloys. As a result, multiple empirical and semi-empirical relationships have been established to develop empirical performance modeling codes. Many mechanistic questions about fission gas mobility, bubble coalescence, and gas release have been answered through industrial experience, research, and empirical understanding. The advent of modern computational materials science, however, opens new doors of development such that physics-based multi-scale models may be developed to enable a new generation of predictive fuel performance codes that are not limited by empiricism.
De Kauwe, Martin G; Medlyn, Belinda E; Walker, Anthony P; Zaehle, Sönke; Asao, Shinichi; Guenet, Bertrand; Harper, Anna B; Hickler, Thomas; Jain, Atul K; Luo, Yiqi; Lu, Xingjie; Luus, Kristina; Parton, William J; Shu, Shijie; Wang, Ying-Ping; Werner, Christian; Xia, Jianyang; Pendall, Elise; Morgan, Jack A; Ryan, Edmund M; Carrillo, Yolima; Dijkstra, Feike A; Zelikova, Tamara J; Norby, Richard J
2017-09-01
Multifactor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date, such models have only been tested against single-factor experiments. We applied 10 TBMs to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m⁻² yr⁻¹). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the N-cycle models, in N availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Kauwe, Martin G.; Medlyn, Belinda E.; Walker, Anthony P.
2017-02-01
Multi-factor experiments are often advocated as important for advancing terrestrial biosphere models (TBMs), yet to date such models have only been tested against single-factor experiments. We applied 10 TBMs to the multi-factor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multi-factor experiments can be used to constrain models, and to identify a road map for model improvement. We found models performed poorly in ambient conditions; there was a wide spread in simulated above-ground net primary productivity (range: 31-390 g C m⁻² yr⁻¹). Comparison with data highlighted model failures particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen-cycle models, in nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. Since the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. Finally, we outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.
Wedge Experiment Modeling and Simulation for Reactive Flow Model Calibration
NASA Astrophysics Data System (ADS)
Maestas, Joseph T.; Dorgan, Robert J.; Sutherland, Gerrit T.
2017-06-01
Wedge experiments are a typical method for generating pop-plot data (run-to-detonation distance versus input shock pressure), which are used to assess an explosive material's initiation behavior. Such data can be used to calibrate reactive flow models by running hydrocode simulations and successively tweaking model parameters until a match with experiment is achieved. Simulations are typically performed in 1D and use a flyer impact to achieve the prescribed shock loading pressure. In this effort, a wedge experiment performed at the Army Research Lab (ARL) was modeled using CTH (SNL hydrocode) in 1D, 2D, and 3D space in order to determine whether there was any justification for using simplified models. A simulation was also performed using the BCAT code (CTH companion tool), which assumes a plate-impact shock loading. Results from the simulations were compared to experimental data and show that the shock imparted into an explosive specimen is accurately captured with 2D and 3D simulations, but changes significantly in 1D space and with the BCAT tool. The difference in shock profile is shown to only affect numerical predictions for large run distances. This is attributed to incorrectly capturing the energy fluence for detonation waves versus flat shock loading. Portions of this work were funded through the Joint Insensitive Munitions Technology Program.
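Pop-plot data are conventionally summarized by a power-law relation that is linear in log-log space. The following minimal sketch fits and evaluates such a relation in Python; the pressure/run-distance values are hypothetical placeholders, not the ARL measurements:

```python
import numpy as np

# Hypothetical pop-plot points: input shock pressure (GPa) vs.
# run-to-detonation distance (mm). Illustrative values only.
pressure = np.array([3.0, 5.0, 8.0, 12.0])
run_dist = np.array([12.0, 6.5, 3.2, 1.8])

# Pop plots are commonly modeled as log-log linear:
#   log10(x_run) = a + b * log10(P)
b, a = np.polyfit(np.log10(pressure), np.log10(run_dist), 1)

def run_to_detonation(p_gpa):
    """Predict run-to-detonation distance (mm) at shock pressure p_gpa (GPa)."""
    return 10.0 ** (a + b * np.log10(p_gpa))

print(f"fit: log10(x) = {a:.2f} + {b:.2f} * log10(P)")
print(f"predicted run distance at 6 GPa: {run_to_detonation(6.0):.2f} mm")
```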
Tenório, Thyago; Bittencourt, Ig Ibert; Isotani, Seiji; Pedro, Alan; Ospina, Patrícia; Tenório, Daniel
2017-06-01
In this dataset, we present the data collected from two experiments applying the gamified peer assessment model in the online learning environment MeuTutor, allowing comparison of the obtained results with other proposed models. MeuTutor is an intelligent tutoring system that aims to monitor student learning in a personalized way, ensuring quality education and improving the performance of its members (Tenório et al., 2016) [1]. The first experiment evaluated the effectiveness of the peer assessment model through metrics such as final grade (result), time to correct the activities, and associated costs. The second experiment evaluated the influence of gamification on the peer assessment model, analyzing metrics such as number of accesses (logins), number of performed activities, and number of performed corrections. In this article, we present in table form, for each metric: the raw data of each treatment; the summarized data; the results of the Shapiro-Wilk normality test; and the results of the statistical tests (T-test and/or Wilcoxon). The data presented in this article are related to the article entitled "A gamified peer assessment model for on-line learning environments in a competitive context" (Tenório et al., 2016) [1].
Detonation failure characterization of non-ideal explosives
NASA Astrophysics Data System (ADS)
Janesheski, Robert S.; Groven, Lori J.; Son, Steven
2012-03-01
Non-ideal explosives are currently poorly characterized, which limits the ability to model them. Current characterization requires large-scale testing to obtain steady detonation wave measurements for analysis, owing to the relatively thick reaction zones. A microwave interferometer applied to small-scale confined transient experiments is being implemented to allow time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models with available large-scale experiments. The experiment is shown to be repeatable, shows dependence on reactive properties, and can be performed with little required material.
Implementation of an object oriented track reconstruction model into multiple LHC experiments
NASA Astrophysics Data System (ADS)
Gaines, Irwin; Gonzalez, Saul; Qian, Sijin
2001-10-01
An Object Oriented (OO) model (Gaines et al., 1996; 1997; Gaines and Qian, 1998; 1999) for track reconstruction by the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented into the OO computing environments of both the CMS (1994) and ATLAS (1994) experiments at the future Large Hadron Collider (LHC) at CERN. We shall report: how the OO model was adapted, with largely the same code, to different scenarios and serves different reconstruction aims in different experiments (i.e. the level-2 trigger software for ATLAS and the offline software for CMS); how the OO model has been incorporated into different OO environments with a similar integration structure (demonstrating the ease of re-use of OO programs); what the OO model's performance is, including execution time, memory usage, track-finding efficiency, ghost rate, etc.; and additional physics performance based on use of the OO tracking model. We shall also mention the experience and lessons learned from implementing the OO model into the general OO software frameworks of the experiments. In summary, our practice shows that OO technology makes software development and integration straightforward and convenient; this may be particularly beneficial for non-computer-professional physicists.
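As a generic illustration of the Kalman-filter predict/update cycle at the core of such track reconstruction, here is a one-dimensional Python toy (not the experiments' C++ code; the straight-line propagation matrix and noise values are assumptions for the sketch):

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict-update cycle: propagate the track state to the next
    detector layer, then fold in that layer's hit measurement."""
    x_pred = F @ x                        # predicted track state
    P_pred = F @ P @ F.T + Q              # predicted covariance
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy track: state = (position, slope); each layer measures position only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])    # straight-line propagation between layers
H = np.array([[1.0, 0.0]])                # measurement picks out position
Q = 1e-4 * np.eye(2)                      # process noise (e.g., scattering)
R = np.array([[0.01]])                    # hit resolution
x, P = np.zeros(2), np.eye(2)
for z in np.array([[0.1], [0.22], [0.29], [0.41]]):  # hits on four layers
    x, P = kalman_step(x, P, z, F, Q, H, R)
print("fitted (position, slope):", x)
```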
Modeling Enclosure Design in Above-Grade Walls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lstiburek, J.; Ueno, K.; Musunuru, S.
2016-03-01
This report describes the modeling of typical wall assemblies that have performed well historically in various climate zones. The WUFI (Wärme und Feuchte instationär) software (Version 5.3) model was used. A library of input data and results is provided. The provided information can be generalized for application to a broad population of houses, within the limits of existing experience. The WUFI software model was calibrated or tuned using wall assemblies with historically successful performance. The primary performance or failure criterion establishing historic performance was the moisture content of the exterior sheathing. The primary tuning parameters (simulation inputs) were airflow and specifying appropriate material properties. Rational hygric loads were established based on experience - specifically rain wetting and interior moisture (RH levels). The tuning parameters were limited or bounded by published data or experience. The WUFI templates provided with this report supply useful information resources to new or less-experienced users. The files present various custom settings that will help avoid results that would require overly conservative enclosure assemblies. Overall, better material data, consistent initial assumptions, and consistent inputs among practitioners will improve the quality of WUFI modeling and improve the level of sophistication in the field.
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties conducting experiments using existing experimental procedures for two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus exempting the requirement of a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly during an experiment based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection is realized by a Bayesian spike-and-slab prior, reverse prediction is realized by grid search, and design optimization is realized by concepts from active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
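A rough sketch of the design-optimization idea follows. It is not the RAAS implementation (which uses Bayesian P-splines and a spike-and-slab prior); a Gaussian-process surrogate with an uncertainty-sampling rule stands in as an assumed, simplified analogue:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def response(x):
    """Unknown data-generating mechanism (stand-in for the real experiment)."""
    return np.sin(3 * x) + 0.1 * rng.standard_normal(x.shape)

X_seen = rng.uniform(0, 2, size=(3, 1))             # designs already run
y_seen = response(X_seen).ravel()
candidates = np.linspace(0, 2, 200).reshape(-1, 1)  # candidate design space

for trial in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-2).fit(X_seen, y_seen)
    _, sd = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(sd)]              # run the most informative design
    X_seen = np.vstack([X_seen, x_next])
    y_seen = np.append(y_seen, response(x_next.reshape(1, 1)))

print(f"{len(y_seen)} trials run; designs concentrate where uncertainty was highest")
```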
Modeling Simple Driving Tasks with a One-Boundary Diffusion Model
Ratcliff, Roger; Strayer, David
2014-01-01
A one-boundary diffusion model was applied to the data from two experiments in which subjects performed a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and on the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks, which suggests that common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
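The one-boundary diffusion model treats RT as the first-passage time of a noisy evidence accumulator to a single absorbing boundary. A minimal simulation sketch, with illustrative (not fitted) parameter values:

```python
import numpy as np

def simulate_rt(n_trials, drift=1.2, boundary=1.0, ndt=0.3,
                dt=1e-3, noise=1.0, seed=0):
    """First-passage times of a one-boundary diffusion process.

    drift    : evidence accumulation rate
    boundary : single absorbing boundary (response criterion)
    ndt      : non-decision time (encoding + motor), seconds
    """
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while x < boundary:                 # accumulate until the boundary is hit
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = ndt + t
    return rts

rts = simulate_rt(500)
print(f"mean RT = {rts.mean():.3f} s; the distribution shows the usual slow tail")
```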
Argonne Bubble Experiment Thermal Model Development II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buechler, Cynthia Eileen
2016-07-01
This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.
NASA Technical Reports Server (NTRS)
Phenneger, M. C.; Singhal, S. P.; Lee, T. H.; Stengle, T. H.
1985-01-01
The work performed by the Attitude Determination and Control Section at the National Aeronautics and Space Administration/Goddard Space Flight Center in analyzing and evaluating the performance of infrared horizon sensors is presented. The results of studies performed during the 1960s are reviewed; several models for generating the Earth's infrared radiance profiles are presented; and the Horizon Radiance Modeling Utility, the software used to model the horizon sensor optics and electronics processing to compute radiance-dependent attitude errors, is briefly discussed. Also provided is mission experience from 12 spaceflight missions spanning the period from 1973 to 1984 and using a variety of horizon sensing hardware. Recommendations are presented for future directions for infrared horizon sensing technology.
NASA Astrophysics Data System (ADS)
Cho, G. S.
2017-09-01
For performance optimization of Refrigerated Warehouses, design parameters are selected based on physical parameters, such as the number of equipment units and aisles and the speeds of forklifts, for ease of modification. This paper provides a comprehensive framework approach for the system design of Refrigerated Warehouses. We propose a modeling approach that aims at simulation optimization so as to meet required design specifications, using Design of Experiments (DOE), and analyze a simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of Refrigerated Warehouse operations.
Haase, S J; Fisk, G
2001-01-01
The present experiments extend the scope of the independent observation model based on signal detection theory (Macmillan & Creelman, 1991) to complex (word) stimulus sets. In the first experiment, the model predicts the relationship between uncertain detection and subsequent correct identification, thereby providing an alternative interpretation to a phenomenon often described as unconscious perception. Our second experiment used an exclusion task (Jacoby, Toth, & Yonelinas, 1993), which, according to theories of unconscious perception, should show qualitative differences in performance based on stimulus detection accuracy and provide a relative measure of conscious versus unconscious influences (Merikle, Joordens, & Stoltz, 1995). Exclusion performance was also explained by the model, suggesting that undetected words did not unconsciously influence identification responses.
NASA Technical Reports Server (NTRS)
Zhang, Qingyuan; Cheng, Yen-Ben; Lyapustin, Alexei I.; Wang, Yujie; Xiao, Xiangming; Suyker, Andrew; Verma, Shashi; Tan, Bin; Middleton, Elizabeth M.
2014-01-01
Accurate estimation of gross primary production (GPP) is essential for carbon cycle and climate change studies. Three AmeriFlux crop sites of maize and soybean were selected for this study. Two of the sites were irrigated and the other one was rainfed. The normalized difference vegetation index (NDVI), the enhanced vegetation index (EVI), the green band chlorophyll index (CIgreen), and the green band wide dynamic range vegetation index (WDRVIgreen) were computed from the moderate resolution imaging spectroradiometer (MODIS) surface reflectance data. We examined the impacts of the MODIS observation footprint and the vegetation bidirectional reflectance distribution function (BRDF) on crop daily GPP estimation with the four spectral vegetation indices (VIs - NDVI, EVI, WDRVIgreen and CIgreen), where GPP was predicted with two linear models, with and without offset: GPP = a × VI × PAR and GPP = a × VI × PAR + b. Model performance was evaluated with coefficient of determination (R2), root mean square error (RMSE), and coefficient of variation (CV). The MODIS data were filtered into four categories and four experiments were conducted to assess the impacts. The first experiment included all observations. The second experiment only included observations with view zenith angle (VZA) ≤ 35° to constrain growth of the footprint size, which achieved a better grid cell match with the agricultural fields. The third experiment included only forward scatter observations with VZA ≤ 35°. The fourth experiment included only backscatter observations with VZA ≤ 35°. Overall, the EVI yielded the most consistently strong relationships to daily GPP under all examined conditions. The model GPP = a × VI × PAR + b had better performance than the model GPP = a × VI × PAR, and the offset was significant for most cases. Better performance was obtained for the irrigated field than its counterpart rainfed field. Comparison of experiment 2 vs. experiment 1 was used to examine the observation footprint impact whereas comparison of experiment 4 vs. experiment 3 was used to examine the BRDF impact. Changes in R2, RMSE, CV and changes in model coefficients "a" and "b" (experiment 2 vs. experiment 1; and experiment 4 vs. experiment 3) were indicators of the impacts. The second experiment produced better performance than the first experiment, increasing R2 (≤0.13) and reducing RMSE (≤0.68 g C m-2 d-1) and CV (≤9%). For each VI, the slope of GPP = a × VI × PAR in the second experiment for each crop type changed little while the slope and intercept of GPP = a × VI × PAR + b varied field by field. The CIgreen was least affected by the MODIS observation footprint in estimating crop daily GPP (R2, ≤0.08; RMSE, ≤0.42 g C m-2 d-1; and CV, ≤7%). Footprint most affected the NDVI (R2, ≤0.15; CV, ≤10%) and the EVI (RMSE, ≤0.84 g C m-2 d-1). The vegetation BRDF impact also caused variation of model performance and change of model coefficients. Significantly different slopes were obtained for forward vs. backscatter observations, especially for the CIgreen and the NDVI. Both the footprint impact and the BRDF impact varied with crop types, irrigation options, model options and VI options.
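A compact sketch of fitting and scoring the two linear GPP models described above (the VI × PAR predictor and daily GPP values are synthetic stand-ins; R2, RMSE, and CV are computed as ordinary goodness-of-fit summaries):

```python
import numpy as np

rng = np.random.default_rng(1)
vi_par = rng.uniform(5, 25, 120)                    # synthetic VI x PAR predictor
gpp = 0.9 * vi_par + 2.0 + rng.normal(0, 1.5, 120)  # synthetic daily GPP (g C m-2 d-1)

def evaluate(pred, obs):
    resid = obs - pred
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - resid.var() / obs.var()
    cv = 100.0 * rmse / obs.mean()                  # percent
    return r2, rmse, cv

# Model 1: GPP = a * VI * PAR (least squares through the origin)
a1 = (vi_par @ gpp) / (vi_par @ vi_par)
# Model 2: GPP = a * VI * PAR + b
a2, b2 = np.polyfit(vi_par, gpp, 1)

for name, pred in [("no offset  ", a1 * vi_par), ("with offset", a2 * vi_par + b2)]:
    r2, rmse, cv = evaluate(pred, gpp)
    print(f"{name}: R2={r2:.2f}  RMSE={rmse:.2f} g C m-2 d-1  CV={cv:.1f}%")
```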
Time Sharing Between Robotics and Process Control: Validating a Model of Attention Switching.
Wickens, Christopher Dow; Gutzwiller, Robert S; Vieane, Alex; Clegg, Benjamin A; Sebok, Angelia; Janes, Jess
2016-03-01
The aim of this study was to validate the strategic task overload management (STOM) model, which predicts task switching when concurrence is impossible. The STOM model predicts that in overload, tasks will be switched to, to the extent that they are attractive on task attributes of high priority, interest, and salience and low difficulty. But more-difficult tasks are less likely to be switched away from once they are being performed. In Experiment 1, participants performed four tasks of the Multi-Attribute Task Battery and provided task-switching data to inform the roles of difficulty and priority. In Experiment 2, participants concurrently performed an environmental control task and a robotic arm simulation. Workload was varied by automation of arm movement, by the phase of the environmental control task, and by the existence of decision support for fault management. Attention to the two tasks was measured using a head tracker. Experiment 1 revealed the lack of influence of task priority and confirmed the differing roles of task difficulty. In Experiment 2, the percentage attention allocation across the eight conditions was predicted by the STOM model when participants rated the four attributes. Model predictions were compared against empirical data and accounted for over 95% of the variance in task allocation. More-difficult tasks were performed longer than easier tasks. Task priority does not influence allocation. The multiattribute decision model provided a good fit to the data. The STOM model is useful for predicting cognitive tunneling given that human-in-the-loop simulation is time-consuming and expensive. © 2016, Human Factors and Ergonomics Society.
NASA Astrophysics Data System (ADS)
Cooke, M. L.
2015-12-01
Accretionary sandbox experiments provide a rich environment for investigating the processes of fault development. These experiments engage students because 1) they enable direct observation of fault growth, which is impossible in the crust (type 1 physical model), 2) they are not only representational but can also be manipulated (type 2 physical model), 3) they can be used to test hypotheses (type 3 physical model), and 4) they resemble experiments performed by structural geology researchers around the world. The structural geology courses at UMass Amherst utilize a series of accretionary sandbox experiments in which students first watch a video of an experiment and then perform a group experiment. The experiments motivate discussions of what conditions the students would change and what outcomes they would expect from these changes; that is, hypothesis development. These discussions inevitably lead to calculations of the scaling relationships between model and crustal fault growth and provide insight into the crustal processes represented within the dry sand. Sketching of the experiments has been shown to be a very effective assessment method, as the students reveal which features they are analyzing. Another approach used at UMass is to set up a forensic experiment: the experiment is prepared with spatially varying basal friction before the meeting, and students must figure out what the basal conditions are through the experiment. This experiment leads to discussions of equilibrium and force balance within the accretionary wedge. Displacement fields can be captured throughout the experiment using inexpensive digital image correlation techniques to foster quantitative analysis of the experiments.
Pumped storage system model and experimental investigations on S-induced issues during transients
NASA Astrophysics Data System (ADS)
Zeng, Wei; Yang, Jiandong; Hu, Jinhong
2017-06-01
Because of the important role of pumped storage stations in the peak regulation and frequency control of a power grid, pump turbines must rapidly switch between different operating modes, such as fast startup and load rejection. However, pump turbines pass through the unstable S region in these transition processes, threatening the security and stability of the pumped storage station. This issue has mainly been investigated through numerical simulations, while field experiments generally involve high risks and are difficult to perform. Therefore, in this work, the model test method was employed to study S-induced security and stability issues for a pumped storage station in transition processes. First, a pumped storage system model was set up, including the piping system, model units, electrical control systems and measurement system. In this model, two pump turbines with different S-shaped characteristics were installed to determine the influence of S-shaped characteristics on transition processes. The model platform can be applied to simulate any hydraulic transition process that occurs in real power stations, such as load rejection, startup, and grid connection. On the experimental platform, the S-shaped characteristic curves were measured to serve as the basis for the other experiments. Runaway experiments were performed to verify the impact of the S-shaped characteristics on pump turbine runaway stability. Full load rejection tests were performed to validate the effect of the S-shaped characteristics on the water-hammer pressure. A scenario in which one pump turbine rejects its load after another, defined as one-after-another (OAA) load rejection, was tested to validate the possibility of S-induced extreme draft tube pressure. Load rejection experiments with different guide vane closing schemes were performed to determine a suitable scheme to accommodate the S-shaped characteristics. Through these experiments, the threats existing in the station were verified, appropriate measures were summarized, and an important experimental basis for the safe and stable operation of a pumped storage station was provided.
Modeling Enclosure Design in Above-Grade Walls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lstiburek, J.; Ueno, K.; Musunuru, S.
2016-03-01
Building Science Corporation modeled typically well-performing wall assemblies using Wärme und Feuchte instationär (WUFI) Version 5.3 software and demonstrated that these models agree with historic experience when calibrated and modeled correctly. This technical report provides a library of WUFI modeling input data and results. Within the limits of existing experience, this information can be generalized for applications to a broad population of houses.
Interim Service ISDN Satellite (ISIS) network model for advanced satellite designs and experiments
NASA Technical Reports Server (NTRS)
Pepin, Gerard R.; Hager, E. Paul
1991-01-01
The Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) Network Model for Advanced Satellite Designs and Experiments describes a model suitable for discrete event simulations. A top-down model design uses the Advanced Communications Technology Satellite (ACTS) as its basis. The ISDN modeling abstractions are added to permit the determination of design and performance parameters for the NASA Satellite Communications Research (SCAR) Program.
Modeling of the jack rabbit series of experiments with a temperature based reactive burn model
NASA Astrophysics Data System (ADS)
Desbiens, Nicolas
2017-01-01
The Jack Rabbit experiments, performed by Lawrence Livermore National Laboratory, focus on detonation wave corner turning and shock desensitization. While important for safety and charge design, the behaviour of explosives in these regimes is poorly understood. In this paper, our temperature based reactive burn model is calibrated for LX-17 and compared to the Jack Rabbit data. It is shown that our model can reproduce the corner turning and shock desensitization behaviour in four of the five experiments.
NASA Astrophysics Data System (ADS)
Tomiwa, K. G.
2017-09-01
The search for new physics in the H → γγ + MET channel relies on how well the missing transverse energy is reconstructed. The MET algorithm used by the ATLAS experiment in turn uses input objects such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest vertex method and the Neural Network method). Comparing the performance of these algorithms on the nominal Standard Model sample and a Beyond the Standard Model sample, we find that the Neural Network method of primary vertex selection performs better overall than the hardest vertex method.
Characterizing Detonating LX-17 Charges Crossing a Transverse Air Gap with Experiments and Modeling
NASA Astrophysics Data System (ADS)
Lauderbach, Lisa M.; Souers, P. Clark; Garcia, Frank; Vitello, Peter; Vandersall, Kevin S.
2009-06-01
Experiments were performed using detonating LX-17 (92.5% TATB, 7.5% Kel-F by weight) charges with transverse air gaps of various widths, both with and without manganin piezoresistive in-situ gauges present. The experiments, performed with 25 mm diameter by 25 mm long LX-17 pellets with the transverse air gap in between, showed that transverse gaps up to about 3 mm could be present without causing the detonation wave to fail. A JWL++/Tarantula code was utilized to model the results and compare with the in-situ gauge records, with reasonable agreement with the experimental data. This work presents the experimental details as well as comparisons to the model results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Reed, Phil
2011-03-01
Binge eating is often associated with stress-induced disruption of typical eating patterns. Three experiments were performed with the aim of developing a potential model for this effect by investigating the effect of presenting response-independent stimuli on rats' lever-pressing for food reinforcement during both fixed-interval (FI) and fixed-ratio (FR) schedules of reinforcement. In Experiment 1, a response-independent brief tone (500-ms, 105-dB, broadband, noisy signal, ranging up to 16 kHz, with spectral peaks at 3 and 500 Hz) disrupted the performance on an FI 60-s schedule. Responding with the response-independent tone was more vigorous than in the absence of the tone. This effect was replicated in Experiment 2 using a within-subject design, but no such effect was noted when a light was employed as a disrupter. In Experiment 3, a 500-ms tone, but not a light, had a similar effect on rats' performance on FR schedules. This tone-induced effect may represent a release from response-inhibition produced by an aversive event. The implications of these results for modeling binge eating are discussed.
Argonne Bubble Experiment Thermal Model Development III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buechler, Cynthia Eileen
This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development” and “Argonne Bubble Experiment Thermal Model Development II”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at beam power levels between 6 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was recorded. The previous report described the Monte-Carlo N-Particle (MCNP) calculations and Computational Fluid Dynamics (CFD) analysis performed on the as-built solution vessel geometry. The CFD simulations in the current analysis were performed using Ansys Fluent, Ver. 17.2. The same power profiles determined from MCNP calculations in earlier work were used for the 12 and 15 kW simulations. The primary goal of the current work is to calculate the temperature profiles for the 12 and 15 kW cases using reasonable estimates for the gas generation rate, based on images of the bubbles recorded during the irradiations. Temperature profiles resulting from the CFD calculations are compared to experimental measurements.
Luria-Delbrück, revisited: the classic experiment does not rule out Lamarckian evolution
NASA Astrophysics Data System (ADS)
Holmes, Caroline M.; Ghafari, Mahan; Abbas, Anzar; Saravanan, Varun; Nemenman, Ilya
2017-10-01
We re-examined data from the classic Luria-Delbrück fluctuation experiment, which is often credited with establishing a Darwinian basis for evolution. We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms (as would happen for bacteria with CRISPR-Cas immunity). Analysis of the combined model was not performed in the original 1943 paper. The Luria-Delbrück paper also did not consider the possibility of neither model fitting the experiment. Using Bayesian model selection, we find that the Luria-Delbrück experiment, indeed, favors the Darwinian evolution over purely Lamarckian. However, our analysis does not rule out the combined model, and hence cannot rule out Lamarckian contributions to the evolutionary dynamics.
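A minimal simulation of the contrast the analysis turns on (illustrative parameters; this is not the paper's Bayesian model-selection code): mutations arising at random during growth (Darwinian) produce rare 'jackpot' cultures and a variance-to-mean ratio far above 1, whereas purely induced (Lamarckian) mutations give Poisson-like counts with variance roughly equal to the mean.

```python
import numpy as np

rng = np.random.default_rng(2)
generations, mu = 20, 1e-7   # growth generations, per-cell mutation probability

def darwinian(n_cultures):
    """Mutations occur at random divisions during growth; early mutants
    found large resistant clones (jackpot cultures)."""
    counts = np.zeros(n_cultures)
    for g in range(generations):
        pop = 2.0 ** g                                    # cells entering generation g
        new_mutants = rng.poisson(mu * pop, n_cultures)
        counts += new_mutants * 2.0 ** (generations - g)  # clone expands until plating
    return counts

def lamarckian(n_cultures):
    """Mutations induced only upon exposure to the selective agent."""
    final_pop = 2.0 ** generations
    return rng.poisson(mu * final_pop * generations, n_cultures)  # rate matched to Darwinian mean

for name, c in [("Darwinian ", darwinian(500)), ("Lamarckian", lamarckian(500))]:
    print(f"{name}: mean={c.mean():.1f}  variance/mean={c.var() / c.mean():.1f}")
```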
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Emily; Ramirez, Emilio; Ruggles, Art E.; ...
2015-08-18
The modeling capability for tubes with twisted tape inserts is reviewed with reference to the application of cooling plasma facing components in magnetic confinement fusion devices. The history of experiments examining the cooling performance of tubes with twisted tape inserts is reviewed with emphasis on the manner of heating, flow stability limits and the details of the test section and fluid delivery system. Models for heat transfer, burnout, and onset of net vapor generation in straight tube flows and tubes with twisted tape are compared. As a result, the gaps in knowledge required to establish performance limits of the plasma facing components are identified and attributes of an experiment to close those gaps are presented.
Experiments on Competence and Performance.
ERIC Educational Resources Information Center
Ladefoged, Peter; Fromkin, V.A.
1968-01-01
The paper discusses some important distinctions between linguistic competence and linguistic performance. It is the authors' contention that the distinction between the two must be maintained in experimental linguistics, or else inadequate models result. Three experiments are described. In the first, subjects pronounce nonsense words and the…
Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...
2015-06-02
In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for overall analysis of SOFC operational diagnostics and performance predictions. In this procedure, essential information for the fuel cell is extracted first by utilizing empirical polarization analysis in conjunction with experiments, and refined by multi-physics numerical simulations via simultaneous analysis and calibration of the polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a set of complete data for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess the performance of a planar cell without further calibration. The proposed methodology would accelerate the calibration process and improve the efficiency of design and diagnostics.
Identification of ground targets from airborne platforms
NASA Astrophysics Data System (ADS)
Doe, Josh; Boettcher, Evelyn; Miller, Brian
2009-05-01
The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) sensor performance models predict the ability of soldiers to perform a specified military discrimination task using an EO/IR sensor system. Increasingly, EO/IR systems are being used on manned and unmanned aircraft for surveillance and target acquisition tasks. In response to this emerging requirement, the NVESD Modeling and Simulation division has been tasked to compare target identification performance between ground-to-ground and air-to-ground platforms for both IR and visible spectra for a set of wheeled utility vehicles. To measure performance, several forced-choice experiments were designed and administered and the results analyzed. This paper describes these experiments and reports the results, as well as the NVTherm model calibration factors derived for the infrared imagery.
Collective Behavior of Brain Tumor Cells: the Role of Hypoxia
NASA Astrophysics Data System (ADS)
Khain, Evgeniy; Katakowski, Mark; Hopkins, Scott; Szalad, Alexandra; Zheng, Xuguang; Jiang, Feng; Chopp, Michael
2013-03-01
We consider emergent collective behavior of a multicellular biological system. Specifically, we investigate the role of hypoxia (lack of oxygen) in the migration of brain tumor cells. We performed two series of cell migration experiments. The first set of experiments was performed in a typical wound-healing geometry: cells were placed on a substrate, and a scratch was made. In the second set of experiments, cell migration away from a tumor spheroid was investigated. The experiments reveal an apparent paradox: cells under normal and hypoxic conditions migrated the same distance in the ``spheroid'' experiment, while in the ``scratch'' experiment cells under normal conditions migrated much faster than under hypoxic conditions. To explain this paradox, we formulate a discrete stochastic model for cell dynamics. The theoretical model explains our experimental observations and suggests that hypoxia decreases both the motility of cells and the strength of cell-cell adhesion. The theoretical predictions were further verified in independent experiments.
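A toy version of such a discrete stochastic model (an assumed form for illustration, not the authors' model): cells hop on a lattice with a motility parameter, and each occupied neighboring site reduces the hop probability, standing in for cell-cell adhesion.

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_migration(steps, hop_prob, adhesion, n_cells=200, width=400):
    """1D lattice walk: each cell hops left/right with probability hop_prob,
    reduced by `adhesion` for every occupied neighboring site."""
    pos = np.zeros(n_cells, dtype=int)          # all cells start at the wound edge
    for _ in range(steps):
        occ = np.bincount(pos + width, minlength=2 * width + 1)
        for i in range(n_cells):
            neighbors = occ[pos[i] + width - 1] + occ[pos[i] + width + 1]
            p = hop_prob * max(0.0, 1.0 - adhesion * neighbors)
            if rng.random() < p:
                pos[i] += rng.choice((-1, 1))
    return np.abs(pos).mean()                   # mean migration distance

# Hypoxia modeled as lower motility and weaker adhesion (illustrative numbers)
print("normoxia:", mean_migration(300, hop_prob=0.8, adhesion=0.02))
print("hypoxia :", mean_migration(300, hop_prob=0.4, adhesion=0.005))
```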
EURODELTA-Trends, a multi-model experiment of air quality hindcast in Europe over 1990-2010
NASA Astrophysics Data System (ADS)
Colette, Augustin; Andersson, Camilla; Manders, Astrid; Mar, Kathleen; Mircea, Mihaela; Pay, Maria-Teresa; Raffort, Valentin; Tsyro, Svetlana; Cuvelier, Cornelius; Adani, Mario; Bessagnet, Bertrand; Bergström, Robert; Briganti, Gino; Butler, Tim; Cappelletti, Andrea; Couvidat, Florian; D'Isidoro, Massimo; Doumbia, Thierno; Fagerli, Hilde; Granier, Claire; Heyes, Chris; Klimont, Zig; Ojha, Narendra; Otero, Noelia; Schaap, Martijn; Sindelarova, Katarina; Stegehuis, Annemiek I.; Roustan, Yelva; Vautard, Robert; van Meijgaard, Erik; Garcia Vivanco, Marta; Wind, Peter
2017-09-01
The EURODELTA-Trends multi-model chemistry-transport experiment has been designed to facilitate a better understanding of the evolution of air pollution and its drivers for the period 1990-2010 in Europe. The main objective of the experiment is to assess the efficiency of air pollutant emissions mitigation measures in improving regional-scale air quality. The present paper formulates the main scientific questions and policy issues being addressed by the EURODELTA-Trends modelling experiment with an emphasis on how the design and technical features of the modelling experiment answer these questions. The experiment is designed in three tiers, with increasing degrees of computational demand in order to facilitate the participation of as many modelling teams as possible. The basic experiment consists of simulations for the years 1990, 2000, and 2010. Sensitivity analysis for the same three years using various combinations of (i) anthropogenic emissions, (ii) chemical boundary conditions, and (iii) meteorology complements it. The most demanding tier consists of two complete time series from 1990 to 2010, simulated using either time-varying emissions for corresponding years or constant emissions. Eight chemistry-transport models have contributed with calculation results to at least one experiment tier, and five models have - to date - completed the full set of simulations (and 21-year trend calculations have been performed by four models). The modelling results are publicly available for further use by the scientific community. The main expected outcomes are (i) an evaluation of the models' performances for the three reference years, (ii) an evaluation of the skill of the models in capturing observed air pollution trends for the 1990-2010 time period, (iii) attribution analyses of the respective role of driving factors (e.g. emissions, boundary conditions, meteorology), (iv) a dataset based on a multi-model approach, to provide more robust model results for use in impact studies related to human health, ecosystem, and radiative forcing.
NASA Astrophysics Data System (ADS)
Blanchard, J. P.; Tesche, F. M.; McConnell, B. W.
1987-09-01
An experiment to determine the interaction of an intense electromagnetic pulse (EMP), such as that produced by a nuclear detonation above the Earth's atmosphere, with a conducting line was performed in March 1986 at Kirtland Air Force Base near Albuquerque, New Mexico. The results of that experiment have been published without analysis. Following an introduction to the corona phenomenon, the reason for interest in it, and a review of the experiment, this paper discusses five different analytic corona models that may describe corona formation on a conducting line subjected to EMP. The results predicted by these models are compared with measured data acquired during the experiment to determine the strengths and weaknesses of each model.
Predicting the Consequences of Workload Management Strategies with Human Performance Modeling
NASA Technical Reports Server (NTRS)
Mitchell, Diane Kuhl; Samma, Charneta
2011-01-01
Human performance modelers at the US Army Research Laboratory have developed an approach for establishing Soldier high workload that can be used for analyses of proposed system designs. Their technique includes three key components. To implement the approach in an experiment, the researcher would create two experimental conditions: a baseline and a design alternative. Next, they would identify a scenario in which the test participants perform all their representative concurrent interactions with the system. This scenario should include any events that would trigger a different set of goals for the human operators. They would collect workload values during both the control and alternative design conditions to see if the alternative increased workload and decreased performance. They have successfully implemented this approach for military vehicle designs using the human performance modeling tool IMPRINT. Although ARL researchers use IMPRINT to implement their approach, it can be applied to any workload analysis. Researchers using other modeling and simulation tools or conducting experiments or field tests can use the same approach.
Spatial frequency dependence of target signature for infrared performance modeling
NASA Astrophysics Data System (ADS)
Du Bosq, Todd; Olson, Jeffrey
2011-05-01
The standard model used to describe the performance of infrared imagers is the U.S. Army imaging system target acquisition model, based on the targeting task performance metric. The model is characterized by the resolution and sensitivity of the sensor as well as the contrast and task difficulty of the target set. The contrast of the target is defined as a spatial average contrast. The model treats the contrast of the target set as spatially white, or constant, over the bandlimit of the sensor. Previous experiments have shown that this assumption is valid under normal conditions and typical target sets. However, outside of these conditions, the treatment of target signature can become the limiting factor affecting model performance accuracy. This paper examines target signature more carefully. The spatial frequency dependence of the standard U.S. Army RDECOM CERDEC Night Vision 12 and 8 tracked vehicle target sets is described. The results of human perception experiments are modeled and evaluated using both frequency dependent and independent target signature definitions. Finally the function of task difficulty and its relationship to a target set is discussed.
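As an illustration of a spatially averaged target contrast of the kind the model uses (a sketch under assumed definitions; the exact NVESD formulation may differ in detail), one can combine the mean target-to-background difference with internal target variation in root-sum-square fashion:

```python
import numpy as np

def rss_contrast(target, background):
    """Spatially averaged (root-sum-square) target contrast: combines the
    mean target-background difference with internal target variation."""
    mu_t, mu_b = target.mean(), background.mean()
    return np.sqrt((mu_t - mu_b) ** 2 + target.var()) / (2.0 * mu_b)

rng = np.random.default_rng(3)
target = rng.normal(110.0, 8.0, (64, 64))      # synthetic target patch (arbitrary counts)
background = rng.normal(100.0, 3.0, (64, 64))  # synthetic background patch
print(f"RSS contrast ~ {rss_contrast(target, background):.3f}")
```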
Advances in Experiment Design for High Performance Aircraft
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1998-01-01
A general overview and summary of recent advances in experiment design for high performance aircraft is presented, along with results from flight tests. General theoretical background is included, with some discussion of various approaches to maneuver design. Flight test examples from the F-18 High Alpha Research Vehicle (HARV) are used to illustrate applications of the theory. Input forms are compared using Cramer-Rao bounds for the standard errors of estimated model parameters. Directions for future research in experiment design for high performance aircraft are identified.
A compendium of millimeter wave propagation studies performed by NASA
NASA Technical Reports Server (NTRS)
Kaul, R.; Rogers, D.; Bremer, J.
1977-01-01
Key millimeter wave propagation experiments and analytical results are summarized. The experiments were performed with the ATS-5, ATS-6 and Comstar satellites, radars, radiometers and rain gage networks. Analytic models were developed for extrapolation of experimental results to other frequencies, locations, and communications systems.
Exercise Performance and Corticospinal Excitability during Action Observation
Wrightson, James G.; Twomey, Rosie; Smeeton, Nicholas J.
2016-01-01
Purpose: Observation of a model performing fast exercise improves simultaneous exercise performance; however, the precise mechanism underpinning this effect is unknown. The aim of the present study was to investigate whether the speed of the observed exercise influenced both upper body exercise performance and the activation of a cortical action observation network (AON). Method: In Experiment 1, 10 participants completed a 5 km time trial on an arm-crank ergometer whilst observing a blank screen (no-video) and a model performing exercise at both a typical (i.e., individual mean cadence during baseline time trial) and 15% faster than typical speed. In Experiment 2, 11 participants performed arm crank exercise whilst observing exercise at typical speed, 15% slower and 15% faster than typical speed. In Experiment 3, 11 participants observed the typical, slow and fast exercise, and a no-video, whilst corticospinal excitability was assessed using transcranial magnetic stimulation. Results: In Experiment 1, performance time decreased and mean power increased, during observation of the fast exercise compared to the no-video condition. In Experiment 2, cadence and power increased during observation of the fast exercise compared to the typical speed exercise but there was no effect of observation of slow exercise on exercise behavior. In Experiment 3, observation of exercise increased corticospinal excitability; however, there was no difference between the exercise speeds. Conclusion: Observation of fast exercise improves simultaneous upper-body exercise performance. However, because there was no effect of exercise speed on corticospinal excitability, these results suggest that these improvements are not solely due to changes in the activity of the AON. PMID:27014037
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.
2012-07-01
Results of critical experiments performed at five ASTRA facility configurations modeling high-temperature helium-cooled graphite-moderated reactors are presented. Results of experiments on the spatial distribution of the 235U fission reaction rate, performed at four of these five configurations, are presented in more detail. Analysis of the available information showed that all criticality experiments at these five configurations are acceptable for use as critical benchmark experiments. All experiments on the spatial distribution of the 235U fission reaction rate are acceptable for use as physical benchmark experiments. (authors)
A Didactic Experiment and Model of a Flat-Plate Solar Collector
ERIC Educational Resources Information Center
Gallitto, Aurelio Agliolo; Fiordilino, Emilio
2011-01-01
We report on an experiment performed with a home-made flat-plate solar collector, carried out together with high-school students. To explain the experimental results, we propose a model that describes the heating process of the solar collector. The model accounts quantitatively for the experimental data. We suggest that solar-energy topics should…
Beyond Performance: A Motivational Experiences Model of Stereotype Threat
Thoman, Dustin B.; Smith, Jessi L.; Brown, Elizabeth R.; Chase, Justin; Lee, Joo Young K.
2013-01-01
The contributing role of stereotype threat (ST) to learning and performance decrements for stigmatized students in highly evaluative situations has been extensively documented and is now widely known by educators and policy makers. However, recent research illustrates that underrepresented and stigmatized students’ academic and career motivations are influenced by ST more broadly, particularly through influences on achievement orientations, sense of belonging, and intrinsic motivation. Such a focus moves conceptualizations of ST effects in education beyond the influence on a student’s performance, skill level, and feelings of self-efficacy per se to experiencing greater belonging uncertainty and lower interest in stereotyped tasks and domains. These negative experiences are associated with important outcomes such as decreased persistence and domain identification, even among students who are high in achievement motivation. In this vein, we present and review support for the Motivational Experiences Model of ST, a self-regulatory model framework for integrating research on ST, achievement goals, sense of belonging, and intrinsic motivation to make predictions for how stigmatized students’ motivational experiences are maintained or disrupted, particularly over long periods of time. PMID:23894223
Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment
NASA Technical Reports Server (NTRS)
Frische, F.; Osterloh, J.-P.; Luedtke, A.
2011-01-01
This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model during interaction with a new 4D Flight Management System (FMS). The aim of these experiments was to gather human pilot behavior data in order to calibrate the behavior of the model. Human behavior is mainly triggered by visual perception. Thus, the main aspect was to set up a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye tracker data and then compared our results to common results of similar analyses in standard cockpit environments. The comparison showed a significant influence of the new system on the visual performance of human pilots. Furthermore, analyses of the pilot model's visual performance were performed. A comparison to human pilots' visual performance revealed important improvement potentials.
some enzymes to trichomonas. The following enzymes were used for the experiment: pepsin, trypsin, diastase, urease and lysozyme. Tests were performed...obtained in the experiments with urease. Trichomonas growth under addition of lysozyme was within the range of the control cultures. (Modified author abstract)
Performance-based comparison of neonatal intubation training outcomes: simulator and live animal.
Andreatta, Pamela B; Klotz, Jessica J; Dooley-Hash, Suzanne L; Hauptman, Joe G; Biddinger, Bea; House, Joseph B
2015-02-01
The purpose of this article was to establish psychometric validity evidence for competency assessment instruments and to evaluate the impact of 2 forms of training on the abilities of clinicians to perform neonatal intubation. To inform the development of assessment instruments, we conducted comprehensive task analyses including each performance domain associated with neonatal intubation. Expert review confirmed content validity. Construct validity was established using the instruments to differentiate between the intubation performance abilities of practitioners (N = 294) with variable experience (novice through expert). Training outcomes were evaluated using a quasi-experimental design to evaluate performance differences between 294 subjects randomly assigned to 1 of 2 training groups. The training intervention followed American Heart Association Pediatric Advanced Life Support and Neonatal Resuscitation Program protocols with hands-on practice using either (1) live feline or (2) simulated feline models. Performance assessment data were captured before and directly following the training. All data were analyzed using analysis of variance with repeated measures and statistical significance set at P < .05. Content validity, reliability, and consistency evidence were established for each assessment instrument. Construct validity for each assessment instrument was supported by significantly higher scores for subjects with greater levels of experience, as compared with those with less experience (P = .000). Overall, subjects performed significantly better in each assessment domain, following the training intervention (P = .000). After controlling for experience level, there were no significant differences among the cognitive, performance, and self-efficacy outcomes between clinicians trained with live animal model or simulator model. Analysis of retention scores showed that simulator trained subjects had significantly higher performance scores after 18 weeks (P = .01) and 52 weeks (P = .001) and cognitive scores after 52 weeks (P = .001). The results of this study demonstrate the feasibility of using valid, reliable assessment instruments to assess clinician competency and self-efficacy in the performance of neonatal intubation. We demonstrated the relative equivalency of live animal and simulation-based models as tools to support acquisition of neonatal intubation skills. Retention of performance abilities was greater for subjects trained using the simulator, likely because it afforded greater opportunity for repeated practice. Outcomes in each assessment area were influenced by the previous intubation experience of participants. This suggests that neonatal intubation training programs could be tailored to the level of provider experience to make efficient use of time and educational resources. Future research focusing on the uses of assessment in the applied clinical environment, as well as identification of optimal training cycles for performance retention, is merited.
The effect of data structures on INGRES performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creighton, J.R.
1987-01-01
Computer experiments were conducted to determine the effect of using Heap, ISAM, Hash and B-tree data structures for INGRES relations. Average times for retrieve, append and update were determined for searches by unique key and non-key data. The experiments were conducted on relations of approximately 1000 tuples of 332 byte width. Multiple operations were performed, where appropriate, to obtain average times. Simple models of the data structures are presented and shown to be consistent with experimental results. The models can be used to predict performance, and to select the appropriate data structure for various applications.
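A back-of-the-envelope cost model consistent with such measurements (illustrative assumptions: page-oriented storage, uniform keys, a fixed index fanout) might look like:

```python
import math

def expected_page_reads(structure, n_pages, fanout=100, overflow=0.1):
    """Rough expected page reads for a retrieve by unique key.

    heap  : scan half the relation on average
    isam  : static index levels plus an expected overflow-chain read
    hash  : one bucket plus an expected overflow read
    btree : one page per tree level plus the leaf
    """
    if structure == "heap":
        return n_pages / 2
    if structure == "isam":
        return math.ceil(math.log(n_pages, fanout)) + overflow
    if structure == "hash":
        return 1 + overflow
    if structure == "btree":
        return math.ceil(math.log(n_pages, fanout)) + 1
    raise ValueError(structure)

n_pages = 50  # ~1000 tuples x 332 bytes on 2 KB pages, roughly
for s in ("heap", "isam", "hash", "btree"):
    print(f"{s:5s}: ~{expected_page_reads(s, n_pages):.1f} page reads per keyed retrieve")
```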
Modeling criterion shifts and target checking in prospective memory monitoring.
Horn, Sebastian S; Bayen, Ute J
2015-01-01
Event-based prospective memory (PM) involves remembering to perform intended actions after a delay. An important theoretical issue is whether and how people monitor the environment to execute an intended action when a target event occurs. Performing a PM task often increases the latencies in ongoing tasks. However, little is known about the reasons for this cost effect. This study uses diffusion model analysis to decompose monitoring processes in the PM paradigm. Across 4 experiments, performing a PM task increased latencies in an ongoing lexical decision task. A large portion of this effect was explained by consistent increases in boundary separation; additional increases in nondecision time emerged in a nonfocal PM task and explained variance in PM performance (Experiment 1), likely reflecting a target-checking strategy before and after the ongoing decision (Experiment 2). However, we found that possible target-checking strategies may depend on task characteristics. That is, instructional emphasis on the importance of ongoing decisions (Experiment 3) or the use of focal targets (Experiment 4) eliminated the contribution of nondecision time to the cost of PM, but left participants in a mode of increased cautiousness. The modeling thus sheds new light on the cost effect seen in many PM studies and suggests that people approach ongoing activities more cautiously when they need to remember an intended action. PsycINFO Database Record (c) 2015 APA, all rights reserved.
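The decomposition the authors report can be illustrated with a toy diffusion-model simulation; every parameter value below is hypothetical, chosen only to show how widening the boundary separation and lengthening nondecision time each inflate ongoing-task latencies:

```python
import random

def simulate_rt(v=0.25, a=1.0, ter=0.35, dt=0.001, s=1.0):
    """One diffusion-model trial (hypothetical parameters): evidence starts
    midway between boundaries 0 and a; RT = decision time + nondecision time."""
    x, t = a / 2.0, 0.0
    while 0.0 < x < a:
        x += v * dt + s * random.gauss(0.0, dt ** 0.5)
        t += dt
    return ter + t

def mean_rt(n=2000, **kw):
    return sum(simulate_rt(**kw) for _ in range(n)) / n

random.seed(1)
base = mean_rt()                  # ongoing task alone
wide = mean_rt(a=1.4)             # PM block: wider boundary separation (caution)
check = mean_rt(a=1.4, ter=0.45)  # plus target checking added to nondecision time
print(f"baseline {base:.3f}s, +caution {wide:.3f}s, +checking {check:.3f}s")
```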
NASA Astrophysics Data System (ADS)
Cao, Xianzhong; Wang, Feng; Zheng, Zhongmei
The paper reports an educational experiment on an e-Learning instructional design model based on Cognitive Flexibility Theory; the experiment was conducted to explore the feasibility and effectiveness of the model in promoting learning quality in ill-structured domains. The study performed the experiment on two groups of students: one group learned through a system designed with the model and the other learned by the traditional method. The results of the experiment indicate that e-Learning designed with the model helps promote students' intrinsic motivation, learning quality in ill-structured domains, ability to solve ill-structured problems, and creative thinking.
NASA Astrophysics Data System (ADS)
Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi
2016-08-01
The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce well the vortexes triggered by urban buildings, and the flow patterns in urban street canyons and building clusters can also be represented. Given the complex shapes of buildings and their distributions, the simulated deviations from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields for a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min on a personal computer.
Capsule modeling of high foot implosion experiments on the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, D. S.; Kritcher, A. L.; Milovich, J. L.
2017-03-21
This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects—both nominal and off-nominal—such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.
Text Summarization Model based on Maximum Coverage Problem and its Variant
NASA Astrophysics Data System (ADS)
Takamura, Hiroya; Okumura, Manabu
We discuss text summarization in terms of the maximum coverage problem and a variant of it. To solve the optimization problem, we applied several decoding algorithms, including some never before used in this summarization formulation, such as a greedy algorithm with a performance guarantee, a randomized algorithm, and a branch-and-bound method. We conducted comparative experiments, and on the basis of the experimental results we also augmented the summarization model so that it takes into account relevance to the document cluster. Through experiments, we showed that the augmented model is at least comparable to the best-performing method of DUC'04.
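As an illustration of the formulation (not the authors' implementation), here is a greedy selector for budgeted maximum word coverage; greedy algorithms of this family carry the classic (1 - 1/e)-style approximation guarantee the abstract alludes to:

```python
def greedy_summary(sentences, budget):
    """Greedy maximum-coverage summarizer (illustrative): repeatedly pick the
    sentence covering the most not-yet-covered words per unit length, while
    the total summary length stays within a word budget."""
    covered, chosen = set(), []
    remaining = list(range(len(sentences)))
    while remaining:
        def gain(i):
            words = set(sentences[i].split())
            return len(words - covered) / max(len(sentences[i].split()), 1)
        best = max(remaining, key=gain)
        cost = len(sentences[best].split())
        used = sum(len(sentences[j].split()) for j in chosen)
        if gain(best) == 0 or used + cost > budget:
            remaining.remove(best)   # sentence adds nothing or breaks the budget
            continue
        chosen.append(best)
        covered |= set(sentences[best].split())
        remaining.remove(best)
    return [sentences[i] for i in sorted(chosen)]

docs = ["the cat sat on the mat", "the dog barked", "a cat and a dog played"]
print(greedy_summary(docs, budget=12))
```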
Experimentally Modeling Black and White Hole Event Horizons via Fluid Flow
NASA Astrophysics Data System (ADS)
Manheim, Marc E.; Lindner, John F.; Manz, Niklas
We will present a scaled-down experiment that hydrodynamically models the interaction between electromagnetic waves and black/white holes. It has been mathematically proven that gravity waves in water can behave analogously to electromagnetic waves traveling through spacetime. In this experiment, gravity waves are generated in a water tank and propagate in a direction opposed to a flow of varying rate. We observe a noticeable change in the wave's spreading behavior as it travels through the simulated horizon, with wave speeds decreasing down to standing waves, depending on the opposing flow rate. Such an experiment has already been performed in a 97.2 cubic meter tank. We reduced the size significantly to be able to perform the experiment under normal lab conditions.
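The analogy rests on the standard shallow-water dispersion result (our notation, not stated in the abstract): long gravity waves travel at a speed set by the water depth, so the simulated horizon sits where the opposing flow speed matches the wave speed:

```latex
c = \sqrt{g h}, \qquad |U(x_h)| = c \quad \text{(analog horizon)}
```

Here h is the water depth, g the gravitational acceleration, and U the counterflow speed; upstream-travelling waves are blocked wherever |U| exceeds c.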
NASA Astrophysics Data System (ADS)
Kumkar, Yogesh V.; Sen, P. N.; Chaudhari, Hemankumar S.; Oh, Jai-Ho
2018-02-01
In this paper, an attempt has been made to conduct a numerical experiment with the high-resolution global model GME to predict the tropical storms in the North Indian Ocean during the year 2007. Numerical integrations using the icosahedral-hexagonal grid point global model GME were performed to study the evolution of the tropical cyclones Akash, Gonu, Yemyin and Sidr over the North Indian Ocean during 2007. The GME forecasts underestimate cyclone intensity, but the model can capture the evolution of intensity, especially the weakening during landfall, which is primarily due to the cutoff of the water vapor supply in the boundary layer as cyclones approach the coastal region. A series of numerical simulations of tropical cyclones was performed with GME to examine the model's capability in predicting the intensity and track of the cyclones. The model performance is evaluated by calculating root-mean-square cyclone track errors.
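A sketch of the track-error computation described in the last sentence, assuming great-circle (haversine) distances between forecast and observed cyclone centers; the positions below are made up for illustration:

```python
import math

def track_error_km(lat_f, lon_f, lat_o, lon_o, r=6371.0):
    """Great-circle (haversine) distance between forecast and observed centers."""
    p1, p2 = math.radians(lat_f), math.radians(lat_o)
    dp, dl = p2 - p1, math.radians(lon_o - lon_f)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# RMSE over a set of verification times (example positions are invented)
pairs = [((15.0, 88.0), (15.3, 87.6)), ((16.2, 87.1), (16.8, 86.5))]
errs = [track_error_km(a, b, c, d) for (a, b), (c, d) in pairs]
rmse = (sum(e ** 2 for e in errs) / len(errs)) ** 0.5
print(f"track RMSE = {rmse:.0f} km")
```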
Jönsson, A; Arvebo, E; Schantz, B
1988-01-01
Experiments with an anthropomorphic dummy for blast research demonstrated that pressures recorded in the lung model of the dummy could be correlated to primary air blast effects on the lungs of experimental animals. The results presented here were obtained with a dummy of the type mentioned above, but with the lung model modified to improve geometric similarity to man. Blast experiments were performed in a shock tube, and impact experiments in a special impact machine. Experiments with nonpenetrating missiles were performed with small-caliber firearms and the dummy protected by body armor. Severity indices derived from the blast experiments were related to established criteria for primary lung injury in man. Impacts delivered in the impact machine and by nonpenetrating missiles are compared. Relationships between severity of impact based on experiments with animals and primary lung injury in man are discussed.
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1983-01-01
New performance measures, detailed examples, modeling of error detection process, performance evaluation of rollback recovery methods, experiments on FTMP, and optimal size of an NMR cluster are discussed.
Nebiker, Christian Andreas; Mechera, Robert; Rosenthal, Rachel; Thommen, Sarah; Marti, Walter Richard; von Holzen, Urs; Oertli, Daniel; Vogelbach, Peter
2015-07-01
Laparoscopy has become the gold standard for many abdominal procedures. Among young surgeons, experience in laparoscopic surgery increasingly outweighs experience in open surgery. This study was conducted to compare residents' performance in laparoscopic versus open bench-model tasks. In an international surgical skills course, we compared trainees' performance in open versus laparoscopic cholecystectomy in a cadaveric animal bench model. Both exercises were evaluated by board-certified surgeons using an 8-item checklist and by the trainees themselves. 238 trainees with a median surgical experience of 24 months (interquartile range 14-48) took part. Twenty-two percent of the trainees had no previous laparoscopic and 62% no previous open cholecystectomy experience. Significant differences were found in the overall score (median difference of 1 (95% CI: 1, 1), p < 0.001), gallbladder perforation rate (73% vs. 29%, p < 0.001), safe dissection of Calot's triangle (98% vs. 90%, p = 0.001) and duration of surgery (42 (13) minutes vs. 26 (10) minutes; mean difference 17.22 (95% CI: 15.37, 19.07), p < 0.001), all favouring open surgery. The perforation rate in open and laparoscopic cholecystectomies did not consistently decrease with increasing years of experience or number of previously performed procedures. Self-assessment scores were lower than the assessments by board-certified surgeons. Despite less experience in open compared to laparoscopic cholecystectomy, better performance was observed in the open task. This may be explained by the wider access with easier preparation. Open cholecystectomy is the rescue manoeuvre, and it is therefore important to also provide enough training opportunities in open surgery. Copyright © 2015 IJS Publishing Group Limited. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.
2012-04-01
One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. We started the first earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. The experiments have been completed for 92 rounds of the 1-day class, 6 rounds of the 3-month class, and 3 rounds of the 1-year class. For the 1-day testing class, all models passed all of the CSEP evaluation tests in more than 90% of the rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed good performance for magnitude forecasting. On the other hand, the observed spatial distribution is hardly consistent with most models when many earthquakes occur at one spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing center is improving the evaluation system for the 1-day class experiment so that forecasting and testing can be completed within one day. The first part of a special issue titled "Earthquake Forecast Testing Experiment in Japan" was published in Earth, Planets and Space, Vol. 63, No. 3, in March 2011. The second part of this issue, which is now online, will be published soon. An outline of the experiment and the activities of the Japanese Testing Center are published on our web site: http://wwweic.eri.u-tokyo.ac.jp/ZISINyosoku/wiki.en/wiki.cgi
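For illustration, one of the simplest consistency checks in the CSEP style is a number (N) test comparing the observed event count with a forecast's Poisson expectation; this sketch is generic, not the testing center's code:

```python
import math

def n_test(forecast_rate, observed_n):
    """Two one-sided Poisson tail probabilities used in CSEP-style N-tests:
    delta1 = P(N >= observed), delta2 = P(N <= observed)."""
    def cdf(k, lam):
        return sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k + 1))
    delta1 = 1.0 - cdf(observed_n - 1, forecast_rate)
    delta2 = cdf(observed_n, forecast_rate)
    return delta1, delta2

d1, d2 = n_test(forecast_rate=12.4, observed_n=18)   # made-up numbers
print(f"delta1={d1:.3f} delta2={d2:.3f}  (reject if either tail is very small)")
```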
Three atmospheric dispersion experiments involving oil fog plumes measured by lidar
NASA Technical Reports Server (NTRS)
Eberhard, W. L.; Mcnice, G. T.; Troxel, S. W.
1986-01-01
The Wave Propagation Laboratory participated with the U.S. Environmental Protection Agency in a series of experiments with the goal of developing and validating dispersion models that perform substantially better than models currently available. The lidar systems deployed and the data processing procedures used in these experiments are briefly described. Highlights are presented of conclusions drawn thus far from the lidar data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forrest M; Randerson, Jim; Thornton, Peter E
2009-01-01
The need to capture important climate feedbacks in general circulation models (GCMs) has resulted in new efforts to include atmospheric chemistry and land and ocean biogeochemistry into the next generation of production climate models, now often referred to as Earth System Models (ESMs). While many terrestrial and ocean carbon models have been coupled to GCMs, recent work has shown that such models can yield a wide range of results, suggesting that a more rigorous set of offline and partially coupled experiments, along with detailed analyses of processes and comparisons with measurements, is warranted. The Carbon-Land Model Intercomparison Project (C-LAMP) provides a simulation protocol and model performance metrics based upon comparisons against best-available satellite- and ground-based measurements (Hoffman et al., 2007). C-LAMP provides feedback to the modeling community regarding model improvements and to the measurement community by suggesting new observational campaigns. C-LAMP Experiment 1 consists of a set of uncoupled simulations of terrestrial carbon models specifically designed to examine the ability of the models to reproduce surface carbon and energy fluxes at multiple sites and to exhibit the influence of climate variability, prescribed atmospheric carbon dioxide (CO2), nitrogen (N) deposition, and land cover change on projections of terrestrial carbon fluxes during the 20th century. Experiment 2 consists of partially coupled simulations of the terrestrial carbon model with an active atmosphere model exchanging energy and moisture fluxes. In all experiments, atmospheric CO2 follows the prescribed historical trajectory from C4MIP. In Experiment 2, the atmosphere model is forced with prescribed sea surface temperatures (SSTs) and corresponding sea ice concentrations from the Hadley Centre; prescribed CO2 is radiatively active; and land, fossil fuel, and ocean CO2 fluxes are advected by the model. Both sets of experiments have been performed using two different terrestrial biogeochemistry modules coupled to the Community Land Model version 3 (CLM3) in the Community Climate System Model version 3 (CCSM3): the CASA model of Fung et al. and the carbon-nitrogen (CN) model of Thornton. Comparisons against AmeriFlux site measurements, MODIS satellite observations, NOAA flask records, TRANSCOM inversions, Free Air CO2 Enrichment (FACE) site measurements, and other datasets have been performed and are described in Randerson et al. (2009). The C-LAMP diagnostics package was used to validate improvements to CASA and CN for use in the next generation model, CLM4. It is hoped that this effort will serve as a prototype for an international carbon-cycle model benchmarking activity for models being used for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report. More information about C-LAMP, the experimental protocol, performance metrics, output standards, and model-data comparisons from the CLM3-CASA and CLM3-CN models is available at http://www.climatemodeling.org/c-lamp.
Three-dimensional modeling of flow through fractured tuff at Fran Ridge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eaton, R.R.; Ho, C.K.; Glass, RJ.
1996-09-01
Numerical studies have been made of an infiltration experiment at Fran Ridge using the TOUGH2 code to aid in the selection of computational models for performance assessment. The exercise investigates the capabilities of TOUGH2 to model transient flows through highly fractured tuff and provides a possible means of calibration. Two distinctly different conceptual models were used in the TOUGH2 code, the dual permeability model and the equivalent continuum model. The infiltration test modeled involved the infiltration of dyed ponded water for 36 minutes. The 205 gallon infiltration of water observed in the experiment was subsequently modeled using measured Fran Ridge fracture frequencies and a specified fracture aperture of 285 µm. The dual permeability formulation predicted considerable infiltration along the fracture network, which was in agreement with the experimental observations. As expected, minimal fracture penetration of the infiltrating water was calculated using the equivalent continuum model, thus demonstrating that this model is not appropriate for modeling the highly transient experiment. It is therefore recommended that the dual permeability model be given priority when computing high-flux infiltration for use in performance assessment studies.
Psychophysical experiments on the PicHunter image retrieval system
NASA Astrophysics Data System (ADS)
Papathomas, Thomas V.; Cox, Ingemar J.; Yianilos, Peter N.; Miller, Matt L.; Minka, Thomas P.; Conway, Tiffany E.; Ghosn, Joumana
2001-01-01
Psychophysical experiments were conducted on PicHunter, a content-based image retrieval (CBIR) experimental prototype with the following properties: (1) Based on a model of how users respond, it uses Bayes's rule to predict what target users want, given their actions. (2) It possesses an extremely simple user interface. (3) It employs an entropy-based scheme to improve convergence. (4) It introduces a paradigm for assessing the performance of CBIR systems. Experiments 1-3 studied human judgment of image similarity to obtain data for the model. Experiment 4 studied the importance of using: (a) semantic information, (b) memory of earlier input, and (c) relative and absolute judgments of similarity. Experiment 5 tested an approach that we propose for comparing performances of CBIR systems objectively. Finally, experiment 6 evaluated the most informative display-updating scheme that is based on entropy minimization, and confirmed earlier simulation results. These experiments represent one of the first attempts to quantify CBIR performance based on psychophysical studies, and they provide valuable data for improving CBIR algorithms. Even though they were designed with PicHunter in mind, their results can be applied to any CBIR system and, more generally, to any system that involves judgment of image similarity by humans.
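Property (1) can be sketched as a Bayesian relevance-feedback update: each candidate target's probability is reweighted by the modeled chance that a user seeking it would have made the observed selection. The softmax response model and all numbers below are illustrative assumptions, not PicHunter's actual user model:

```python
import math

def update_posterior(posterior, similarities, chosen, temperature=1.0):
    """One Bayesian relevance-feedback step (illustrative): for each candidate
    target T, weight the prior by the modeled probability that a user seeking
    T would have selected image `chosen` from the current display."""
    new = {}
    for target, prior in posterior.items():
        sims = similarities[target]              # similarity of each displayed image to T
        z = sum(math.exp(s / temperature) for s in sims)
        likelihood = math.exp(sims[chosen] / temperature) / z
        new[target] = prior * likelihood
    total = sum(new.values())
    return {t: p / total for t, p in new.items()}

# toy example: 2 candidate targets, a display of 3 images, user picked image 0
posterior = {"A": 0.5, "B": 0.5}
similarities = {"A": [0.9, 0.2, 0.1], "B": [0.3, 0.8, 0.4]}
print(update_posterior(posterior, similarities, chosen=0))
```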
Tan, Jie; Doing, Georgia; Lewis, Kimberley A; Price, Courtney E; Chen, Kathleen M; Cady, Kyle C; Perchuk, Barret; Laub, Michael T; Hogan, Deborah A; Greene, Casey S
2017-07-26
Cross-experiment comparisons in public data compendia are challenged by unmatched conditions and technical noise. The ADAGE method, which performs unsupervised integration with denoising autoencoder neural networks, can identify biological patterns, but because ADAGE models, like many neural networks, are over-parameterized, different ADAGE models perform equally well. To enhance model robustness and better build signatures consistent with biological pathways, we developed an ensemble ADAGE (eADAGE) that integrated stable signatures across models. We applied eADAGE to a compendium of Pseudomonas aeruginosa gene expression profiling experiments performed in 78 media. eADAGE revealed a phosphate starvation response controlled by PhoB in media with moderate phosphate and predicted that a second stimulus provided by the sensor kinase, KinB, is required for this PhoB activation. We validated this relationship using both targeted and unbiased genetic approaches. eADAGE, which captures stable biological patterns, enables cross-experiment comparisons that can highlight measured but undiscovered relationships. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
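A simplified sketch of the ensemble idea (not the authors' code): pool the learned gene-weight vectors from several independently trained models, cluster them by correlation, and keep cluster means as consensus signatures; the greedy clustering and threshold here are stand-ins for the paper's procedure:

```python
import numpy as np

def consensus_signatures(weight_vectors, r_threshold=0.8):
    """Greedy correlation clustering of gene-weight vectors pooled from many
    models; each multi-member cluster's mean becomes one ensemble signature."""
    vectors = [v / np.linalg.norm(v) for v in weight_vectors]
    clusters = []
    for v in vectors:
        for c in clusters:
            if np.corrcoef(v, np.mean(c, axis=0))[0, 1] >= r_threshold:
                c.append(v)      # joins the first sufficiently correlated cluster
                break
        else:
            clusters.append([v])
    return [np.mean(c, axis=0) for c in clusters if len(c) > 1]

rng = np.random.default_rng(0)
base = rng.normal(size=50)
models = [base + rng.normal(scale=0.1, size=50) for _ in range(5)]  # stable signature
models += [rng.normal(size=50) for _ in range(5)]                   # unstable ones
print(len(consensus_signatures(models)), "stable signature(s) found")
```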
Exploratory Decision-Making as a Function of Lifelong Experience, Not Cognitive Decline
2016-01-01
Older adults perform worse than younger adults in some complex decision-making scenarios, which is commonly attributed to age-related declines in striatal and frontostriatal processing. Recently, this popular account has been challenged by work that considered how older adults’ performance may differ as a function of greater knowledge and experience, and by work showing that, in some cases, older adults outperform younger adults in complex decision-making tasks. In light of this controversy, we examined the performance of older and younger adults in an exploratory choice task that is amenable to model-based analyses and ostensibly not reliant on prior knowledge. Exploration is a critical aspect of decision-making poorly understood across the life span. Across 2 experiments, we addressed (a) how older and younger adults differ in exploratory choice and (b) to what extent observed differences reflect processing capacity declines. Model-based analyses suggested that the strategies used by the 2 groups were qualitatively different, resulting in relatively worse performance for older adults in 1 decision-making environment but equal performance in another. Little evidence was found that differences in processing capacity drove performance differences. Rather the results suggested that older adults’ performance might result from applying a strategy that may have been shaped by their wealth of real-word decision-making experience. While this strategy is likely to be effective in the real world, it is ill suited to some decision environments. These results underscore the importance of taking into account effects of experience in aging studies, even for tasks that do not obviously tap past experiences. PMID:26726916
Ming, Y; Peiwen, Q
2001-03-01
The understanding of ultrasonic motor performance as a function of input parameters, such as the voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to control of the ultrasonic motor. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With a distributed spring-rigid body contact model between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. The performances of steady rotation speed and stall torque are then deduced. With the MATLAB computational language and an iteration algorithm, we estimate the rotation speed and stall torque versus the input parameters. The same experiments are completed with the optoelectronic tachometer and stand weight. Both estimation and experiment results reveal the pattern of performance variation as a function of the input parameters.
Characterizing Detonating LX-17 Charges Crossing a Transverse Air Gap with Experiments and Modeling
NASA Astrophysics Data System (ADS)
Lauderbach, Lisa M.; Souers, P. Clark; Garcia, Frank; Vitello, Peter; Vandersall, Kevin S.
2009-12-01
Experiments were performed using detonating LX-17 (92.5% TATB, 7.5% Kel-F by weight) charges with transverse air gaps of various widths, with manganin piezoresistive in-situ gauges present. The experiments, performed with 25 mm diameter by 25 mm long LX-17 pellets with the transverse air gap in between, showed that transverse gaps up to about 3 mm could be present without causing the detonation wave to fail to continue as a detonation. The Tarantula/JWL++ code was utilized to model the results and compare with the in-situ gauge records, with some agreement to the experimental data; additional work is needed for a better match. This work will present the experimental details as well as comparisons with the model results.
Ion exchange of several radionuclides on the hydrous crystalline silicotitanate, UOP IONSIV IE-911
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huckman, M.E.; Latheef, I.M.; Anthony, R.G.
1999-04-01
The crystalline silicotitanate, UOP IONSIV IE-911, is a proven material for removing radionuclides from a wide variety of waste streams. It is superior for removing several radionuclides from the highly alkaline solutions typical of DOE wastes. This laboratory previously developed an equilibrium model applicable to complex solutions for IE-910 (the powder form of the granular IE-911), and more recently, the authors have developed several single-component ion-exchange kinetic models for predicting column breakthrough curves and batch reactor concentration histories. In this paper, the authors model ion-exchange column performance using effective diffusivities determined from batch kinetic experiments. This technique is preferable because the batch experiments are easier, faster, and cheaper to perform than column experiments. They also extend these ideas to multicomponent systems. Finally, they evaluate the ability of the equilibrium model to predict data for IE-911.
Experimenting from a Distance in the Case of Rutherford Scattering
ERIC Educational Resources Information Center
Grober, S.; Vetter, M.; Eckert, B.; Jodl, H. -J.
2010-01-01
The Rutherford scattering experiment plays a central role in working out atomic models in physics and chemistry. Nevertheless, the experiment is rarely performed at school or in introductory physics courses at university. Therefore, we realized this experiment as a remotely controlled laboratory (RCL), i.e. the experiment is set up in reality and…
Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank
2017-01-31
We present an overview of the coordinated global numerical modelling experiments performed during 2012-2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue.
Advanced ISDN satellite designs and experiments
NASA Technical Reports Server (NTRS)
Pepin, Gerard R.
1992-01-01
The research performed by GTE Government Systems and the University of Colorado in support of the NASA Satellite Communications Applications Research (SCAR) Program is summarized. Two levels of research were undertaken. The first dealt with providing interim-service ISDN (Integrated Services Digital Network) satellite (ISIS) capabilities that accented basic-rate ISDN with a ground control similar to that of the Advanced Communications Technology Satellite (ACTS). The ISIS Network Model development represents satellite systems like the ACTS orbiting switch. The ultimate aim is to move these ACTS ground control functions on board the next generation of ISDN communications satellite to provide full-service ISDN satellite (FSIS) capabilities. The technical and operational parameters for the advanced ISDN communications satellite design are obtainable from the simulation of ISIS and FSIS engineering software models of the major subsystems of the ISDN communications satellite architecture. Discrete event simulation experiments would generate data for analysis against NASA SCAR performance measures and the data obtained from the ISDN satellite terminal adapter (ISTA) hardware experiments, also developed in the program. The Basic and Option 1 phases of the program are also described and include the following: literature search, traffic model, network model, scenario specifications, performance measure definitions, hardware experiment design, hardware experiment development, simulator design, and simulator development.
Development of a multicomponent force and moment balance for water tunnel applications, volume 2
NASA Technical Reports Server (NTRS)
Suarez, Carlos J.; Malcolm, Gerald N.; Kramer, Brian R.; Smith, Brooke C.; Ayers, Bert F.
1994-01-01
The principal objective of this research effort was to develop a multicomponent strain gauge balance to measure forces and moments on models tested in flow visualization water tunnels. Static experiments (which are discussed in Volume 1 of this report) were conducted, and the results showed good agreement with wind tunnel data on similar configurations. Dynamic experiments, which are the main topic of this Volume, were also performed using the balance. Delta wing models and two F/A-18 models were utilized in a variety of dynamic tests. This investigation showed that, as expected, the values of the inertial tares are very small due to the low rotating rates required in a low-speed water tunnel and can, therefore, be ignored. Oscillations in pitch, yaw and roll showed hysteresis loops that compared favorably to data from dynamic wind tunnel experiments. Pitch-up and hold maneuvers revealed the long persistence, or time-lags, of some of the force components in response to the motion. Rotary-balance experiments were also successfully performed. The good results obtained in these dynamic experiments bring a whole new dimension to water tunnel testing and emphasize the importance of having the capability to perform simultaneous flow visualization and force/moment measurements during dynamic situations.
Ion thruster performance model
NASA Technical Reports Server (NTRS)
Brophy, J. R.
1984-01-01
A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.
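The two quantities named in the opening sentences combine into a single relation in models of this type; the notation below is ours, a sketch rather than the paper's exact formula:

```latex
\varepsilon_B \;=\; \frac{\varepsilon_P}{f_B}
```

where ε_P is the average energy expended per ion produced in the discharge-chamber plasma, f_B is the fraction of those ions extracted into the beam, and ε_B is the resulting beam ion energy cost, so extracting a larger fraction of the ions directly lowers the cost per beam ion.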
Analysis of NIF experiments with the minimal energy implosion model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, B., E-mail: bcheng@lanl.gov; Kwan, T. J. T.; Wang, Y. M.
2015-08-15
We apply a recently developed analytical model of implosion and thermonuclear burn to fusion capsule experiments performed at the National Ignition Facility that used low-foot and high-foot laser pulse formats. Our theoretical predictions are consistent with the experimental data. Our studies, together with neutron image analysis, reveal that the adiabats of the cold fuel in both low-foot and high-foot experiments are similar. That is, the cold deuterium-tritium shells in those experiments are all in a high-adiabat state at the time of peak implosion velocity. The major difference between low-foot and high-foot capsule experiments is the growth of the shock-induced instabilities developed at the material interfaces, which leads to fuel mixing with ablator material. Furthermore, we have compared the performance of the NIF capsules with the ignition criteria and analyzed the alpha particle heating in the NIF experiments. Our analysis shows that alpha heating was appreciable only in the high-foot experiments.
Collective behavior of brain tumor cells: The role of hypoxia
NASA Astrophysics Data System (ADS)
Khain, Evgeniy; Katakowski, Mark; Hopkins, Scott; Szalad, Alexandra; Zheng, Xuguang; Jiang, Feng; Chopp, Michael
2011-03-01
We consider emergent collective behavior of a multicellular biological system. Specifically, we investigate the role of hypoxia (lack of oxygen) in migration of brain tumor cells. We performed two series of cell migration experiments. In the first set of experiments, cell migration away from a tumor spheroid was investigated. The second set of experiments was performed in a typical wound-healing geometry: Cells were placed on a substrate, a scratch was made, and cell migration into the gap was investigated. Experiments show a surprising result: Cells under normal and hypoxic conditions have migrated the same distance in the “spheroid” experiment, while in the “scratch” experiment cells under normal conditions migrated much faster than under hypoxic conditions. To explain this paradox, we formulate a discrete stochastic model for cell dynamics. The theoretical model explains our experimental observations and suggests that hypoxia decreases both the motility of cells and the strength of cell-cell adhesion. The theoretical predictions were further verified in independent experiments.
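A toy version of a discrete stochastic model of this kind (the hop rule, lattice, and parameter values are our assumptions, not the paper's): cells on a lattice move with a probability reduced by adhesion to occupied neighbor sites, so lowering either motility or adhesion changes how the front propagates:

```python
import random

def step(cells, width, motility=0.5, adhesion=0.3):
    """One update of a 1-D lattice cell-migration model (illustrative):
    a cell moves with probability motility * (1 - adhesion)**n, where n is
    the number of occupied neighbor sites, into a random empty neighbor."""
    for x in random.sample(sorted(cells), len(cells)):
        nbrs = [x - 1, x + 1]
        n = sum(1 for y in nbrs if y in cells)
        if random.random() < motility * (1 - adhesion) ** n:
            free = [y for y in nbrs if 0 <= y < width and y not in cells]
            if free:
                cells.remove(x)
                cells.add(random.choice(free))
    return cells

random.seed(0)
cells = set(range(20))          # confluent strip, as in a scratch assay
for _ in range(200):
    step(cells, width=100)
print("front position:", max(cells))
```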
Development and Integration of Control System Models
NASA Technical Reports Server (NTRS)
Kim, Young K.
1998-01-01
The computer simulation tool TREETOPS has been upgraded and used at NASA/MSFC to model various complicated mechanical systems and to perform their dynamics and control analysis with pointing control systems. A TREETOPS model of the Advanced X-ray Astrophysics Facility - Imaging (AXAF-I) dynamics and control system was developed to evaluate the AXAF-I pointing performance for Normal Pointing Mode. An optical model of the Shooting Star Experiment (SSE) was also developed, and its optical performance analysis was done using the MACOS software.
Thermal Design and Analysis for the Cryogenic MIDAS Experiment
NASA Technical Reports Server (NTRS)
Amundsen, Ruth McElroy
1997-01-01
The Materials In Devices As Superconductors (MIDAS) spaceflight experiment is a NASA payload which launched in September 1996 on the Shuttle, and was transferred to the Mir Space Station for several months of operation. MIDAS was developed and built at NASA Langley Research Center (LaRC). The primary objective of the experiment was to determine the effects of microgravity and spaceflight on the electrical properties of high-temperature superconductive (HTS) materials. The thermal challenge on MIDAS was to maintain the superconductive specimens at or below 80 K for the entire operation of the experiment, including all ground testing and 90 days of spaceflight operation. Cooling was provided by a small tactical cryocooler. The superconductive specimens and the coldfinger of the cryocooler were mounted in a vacuum chamber, with vacuum levels maintained by an ion pump. The entire experiment was mounted for operation in a stowage locker inside Mir, with the only heat dissipation capability provided by a cooling fan exhausting to the habitable compartment. The thermal environment on Mir can potentially vary over the range 5 to 40 C; this was the range used in testing, and this wide range adds to the difficulty in managing the power dissipated from the experiment's active components. Many issues in the thermal design are discussed, including: thermal isolation methods for the cryogenic samples; design for cooling to cryogenic temperatures; cryogenic epoxy bonds; management of ambient temperature components self-heating; and fan cooling of the enclosed locker. Results of the design are also considered, including the thermal gradients across the HTS samples and cryogenic thermal strap, electronics and thermal sensor cryogenic performance, and differences between ground and flight performance. Modeling was performed in both SINDA-85 and MSC/PATRAN (with direct geometry import from the CAD design tool Pro/Engineer). Advantages of both types of models are discussed. Correlation of several models to ground testing and flight data (where available) is presented. Both SINDA and PATRAN models predicted the actual thermal performance of the experiment well, even without post-flight correlation adjustments of the models.
Controlled laboratory experiments and modeling of vegetative filter strips with shallow water tables
NASA Astrophysics Data System (ADS)
Fox, Garey A.; Muñoz-Carpena, Rafael; Purvis, Rebecca A.
2018-01-01
Natural or planted vegetation at the edge of fields or adjacent to streams, also known as vegetative filter strips (VFS), is commonly used as an environmental mitigation practice for runoff pollution and agrochemical spray drift. The VFS position in lowlands near water bodies often implies the presence of a seasonal shallow water table (WT). In spite of its potential importance, there is limited experimental work that systematically studies the effect of shallow WTs on VFS efficacy. Previous research coupled a new physically based algorithm describing infiltration into soils bounded by a water table into the VFS numerical overland flow and transport model, VFSMOD, to simulate VFS dynamics under shallow WT conditions. In this study, we tested the performance of the model against laboratory mesoscale data under controlled conditions. A laboratory soil box (1.0 m wide, 2.0 m long, and 0.7 m deep) was used to simulate a VFS and quantify the influence of shallow WTs on runoff. Experiments included planted Bermuda grass on repacked silt loam and sandy loam soils. A series of experiments was performed, including a free-drainage case (no WT) and a static shallow water table (0.3-0.4 m below the ground surface). For each soil type, this research first calibrated VFSMOD to the observed outflow hydrograph for the free-drainage experiments to parameterize the soil hydraulic and vegetation parameters, and then evaluated the model against outflow hydrographs for the shallow WT experiments. This research used several statistical metrics and a new approach based on hypothesis testing of the Nash-Sutcliffe model efficiency coefficient (NSE) to evaluate model performance. The new VFSMOD routines successfully simulated the outflow hydrographs under both free-drainage and shallow WT conditions, and the statistical metrics indicated valid model performance with greater than 99.5% probability across all scenarios. This research also simulated the shallow water table experiments with both free drainage and various water table depths to quantify the effect of assuming the former boundary condition. For these two soil types, shallow WTs within 1.0-1.2 m below the soil surface influenced infiltration. Existing models will suggest a more protective vegetative filter strip than what actually exists if shallow water table conditions are not considered.
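For reference, the Nash-Sutcliffe efficiency used to judge the hydrographs has the standard definition sketched below (the hydrograph values are made up):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual variance to
    the variance of the observations (1 = perfect; <0 = worse than the mean)."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [0.0, 0.4, 1.2, 0.9, 0.3]     # made-up outflow hydrograph (L/s)
sim = [0.1, 0.5, 1.0, 0.8, 0.35]
print(f"NSE = {nse(obs, sim):.2f}")
```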
NASA Astrophysics Data System (ADS)
Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.
2012-12-01
Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models for Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions covering all of Japan including the surrounding sea, the Japanese mainland, and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been implemented. These results provide new knowledge concerning statistical forecasting models. We have started a study toward constructing a 3-dimensional earthquake forecasting model for the Kanto district in Japan based on CSEP experiments, under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because seismicity in the area extends from shallow depths down to 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop forecasting models based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with test classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0, as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HIST-ETAS models (Ogata, 2011) to determine whether those models perform as well as in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation). We use the CSEP-Japan experiments as a starting model, with a single column undivided in depth. In the presentation, we will discuss the performance of the models, comparing results for the Kanto district with those obtained over all of Japan by CSEP-Japan, and also discuss the results of the 3-month experiments after the 2011 Tohoku earthquake to understand the learning ability of the models with respect to recent seismicity of the area.
NASA Astrophysics Data System (ADS)
Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter
2016-10-01
This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
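For reference, a standard form of the two governing equations named above, in our notation (the model's exact formulation, boundary terms, and parameterizations may differ):

```latex
\frac{\partial \theta(\psi)}{\partial t} = \nabla \cdot \big[ K(\psi)\, \nabla(\psi + z) \big] + q,
\qquad
\frac{\partial (\theta c)}{\partial t} = \nabla \cdot \big( \theta \mathbf{D}\, \nabla c \big) - \nabla \cdot (\mathbf{q}_w\, c) + s
```

with pressure head ψ, water content θ, hydraulic conductivity K, solute concentration c, dispersion tensor D, Darcy flux q_w, and source terms q and s.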
Efficacy of Multimedia Instruction and an Introduction to Digital Multimedia Technology
1992-07-01
performed by Bandura, Ross and Ross (1961). They found that children exposed to an adult displaying aggression toward a Bobo doll later also performed...and enjoy successful task performance. Modeling: Bandura (1969) describes modeling as the ability of individuals to learn a behavior or attitude... Bandura argued that all learning involving direct reinforcement could also result from observation. A classic study of modeling is an experiment
Panzer, Stefan; Kennedy, Deanna; Wang, Chaoyi; Shea, Charles H
2018-02-01
An experiment was conducted to determine if the performance and learning of a multi-frequency (1:2) coordination pattern between the limbs are enhanced when a model is provided prior to each acquisition trial. Research has indicated very effective performance of a wide variety of bimanual coordination tasks when Lissajous plots with goal templates are provided, but this research has also found that participants become dependent on this information and perform quite poorly when it is withdrawn. The present experiment was designed to test three forms of modeling (Lissajous with template, Lissajous without template, and limb model), but in each situation the model was presented prior to practice and was not available during performance of the task. This was done to decrease dependency on the model and increase the development of an internal reference of correctness that could be applied on test trials. A control condition was also collected, where a metronome was used to guide the movement. Following less than 7 min of practice, participants in the three modeling conditions performed the first test block very effectively; however, performance in the control condition was quite poor. Note that Test 1 was performed under the same conditions as used during acquisition. Test 2 was conducted with no augmented information provided prior to or during performance of the task. Only participants in the limb model condition were able to maintain performance on Test 2. The findings suggest that a very simple intuitive display can provide the necessary information to form an effective internal representation of the coordination pattern, which can be used to guide performance when the augmented display is withdrawn.
Characterization and Low-Dimensional Modeling of Urban Fluid Flow
2014-10-06
2 Wind Tunnel, Apparatus and Data Processing; 2.1 Modelling of the Atmospheric Boundary Layer ...was demonstrated. Most notably, wind tunnel experiments were performed at a number of different angles of incidence, providing for the first time a... Coceal and Belcher [2004] developed an urban canopy model for mean winds in urban areas that compares well with data from wind tunnel experiments
Probing eukaryotic cell mechanics via mesoscopic simulations
NASA Astrophysics Data System (ADS)
Pivkin, Igor V.; Lykov, Kirill; Nematbakhsh, Yasaman; Shang, Menglin; Lim, Chwee Teck
2017-11-01
We developed a new mesoscopic particle-based eukaryotic cell model which takes into account the cell membrane, cytoskeleton and nucleus. Breast epithelial cells were used in our studies. To estimate the viscoelastic properties of cells and to calibrate the computational model, we performed micropipette aspiration experiments. The model was then validated using data from microfluidic experiments. Using the validated model, we probed contributions of sub-cellular components to whole-cell mechanics in micropipette aspiration and microfluidics experiments. We believe that the new model will allow us to study in silico numerous problems in the context of cell biomechanics in flows in complex domains, such as capillary networks and microfluidic devices.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
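As a minimal sketch of a sequential procedure of this kind: posterior model probabilities are updated after each observation and sampling stops once one model's posterior exceeds a threshold. The model forms, noise level, and 0.95 threshold below are invented for illustration, and the design point is drawn at random as a stand-in for the expected Kullback-Leibler criterion.

```python
import numpy as np
from scipy import stats

def update_posteriors(priors, models, x, y, sigma):
    """One Bayesian update: p(M|y) is proportional to p(y|M) p(M), Gaussian errors."""
    likes = np.array([stats.norm.pdf(y, loc=m(x), scale=sigma) for m in models])
    post = priors * likes
    return post / post.sum()

models = [lambda x: 2.0 * x,                 # candidate model 1: linear
          lambda x: 1.5 * x + 0.5 * x**2]    # candidate model 2: quadratic
post = np.array([0.5, 0.5])                  # equal prior model probabilities
sigma = 0.5                                  # assumed known error std

rng = np.random.default_rng(0)
for _ in range(50):
    x = rng.uniform(0, 2)                    # stand-in for the KL-optimal design point
    y = 2.0 * x + rng.normal(0, sigma)       # data actually generated by model 1
    post = update_posteriors(post, models, x, y, sigma)
    if post.max() > 0.95:                    # termination rule
        break
print("chosen model:", post.argmax() + 1, "posteriors:", post.round(3))
```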
Attention in a multi-task environment
NASA Technical Reports Server (NTRS)
Andre, Anthony D.; Heers, Susan T.
1993-01-01
Two experiments used a low-fidelity multi-task simulation to investigate the effects of cue specificity on task preparation and performance. Subjects performed a continuous compensatory tracking task and were periodically prompted to perform one of several concurrent secondary tasks. The results provide strong evidence that subjects enacted a strategy to actively divert resources toward secondary task preparation only when they had specific information about an upcoming task to be performed. However, this strategy was not strongly affected by the type of task cued (Experiment 1) or its difficulty level (Experiment 2). Overall, subjects seemed aware of both the costs (degraded primary task tracking) and benefits (improved secondary task performance) of cue information. Implications of the present results for computational human performance/workload models are discussed.
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increasing model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
Tos, M; Charabi, S; Thomsen, J
1998-01-01
The Danish model for vestibular schwannoma (VS) surgery has been influenced by some historical otological events, taking its origin in the fact that the first attempt to remove CPA tumors was performed by an otologist in 1916. For approximately 50 years, VS surgery was performed by neurosurgeons in a decentralized model. Highly specialized neuro- and otosurgeons have been included in our team since the beginning of the centralized Danish model of VS surgery in 1976. Our surgical practice has always been based on known and proven knowledge, but we have spared no effort to search for innovative procedures. The present paper reflects the experience we have gained in two decades of VS surgery. Our studies on the incidence, symptomatology, diagnosis, expectancy and surgical results are presented.
Modeling take-over performance in level 3 conditionally automated vehicles.
Gold, Christian; Happee, Riender; Bengler, Klaus
2018-07-01
Taking over vehicle control from a Level 3 conditionally automated vehicle can be a demanding task for a driver. The take-over determines the controllability of automated vehicle functions and thereby also traffic safety. This paper presents models predicting the main take-over performance variables: take-over time, minimum time-to-collision, brake application and crash probability. These variables are considered in relation to the situational and driver-related factors time-budget, traffic density, non-driving-related task, repetition, the current lane and driver's age. Regression models were developed using 753 take-over situations recorded in a series of driving simulator experiments. The models were validated with data from five other driving simulator experiments, mostly by unrelated authors, with another 729 take-over situations. The models accurately captured take-over time, time-to-collision and crash probability, and moderately predicted brake application. The time-budget, traffic density and repetition in particular strongly influenced take-over performance, while the non-driving-related task, the lane and drivers' age explained a minor portion of the variance in take-over performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
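To illustrate the general modeling approach, the sketch below fits a linear regression of take-over time on three situational factors. The data are synthetic and the specification is an assumption for illustration; the paper's actual model forms and coefficients are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 753
df = pd.DataFrame({
    "time_budget": rng.choice([5.0, 7.0], n),      # seconds until collision (invented)
    "traffic": rng.choice([0, 10, 20], n),         # vehicles per km (invented)
    "repetition": rng.integers(1, 4, n),           # nth take-over for this driver
})
# Synthetic ground truth: longer budgets and more repetitions -> faster take-over
df["takeover_time"] = (3.5 - 0.15 * df.time_budget + 0.03 * df.traffic
                       - 0.2 * df.repetition + rng.normal(0, 0.4, n))

fit = smf.ols("takeover_time ~ time_budget + traffic + repetition", df).fit()
print(fit.params.round(3))
```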
Matching voice and face identity from static images.
Mavica, Lauren W; Barenholtz, Elan
2013-04-01
Previous research has suggested that people are unable to correctly choose which unfamiliar voice and static image of a face belong to the same person. Here, we present evidence that people can perform this task with greater than chance accuracy. In Experiment 1, participants saw photographs of two same-gender models while simultaneously listening to a voice recording of one of the models pictured in the photographs, and chose which of the two faces they thought belonged to the same model as the recorded voice. We included three conditions: (a) the visual stimuli were frontal headshots (including the neck and shoulders) and the auditory stimuli were recordings of spoken sentences; (b) the visual stimuli contained only cropped faces and the auditory stimuli were full sentences; (c) we used the same pictures as Condition 1 but the auditory stimuli were recordings of a single word. In Experiment 2, participants performed the same task as in Condition 1 of Experiment 1 but with the stimuli presented in sequence. Participants also rated the models' faces and voices along multiple "physical" dimensions (e.g., weight) or "personality" dimensions (e.g., extroversion); the degree of agreement between the ratings for each model's face and voice was compared to performance for that model in the matching task. In all three conditions, we found that participants chose, at better than chance levels, which faces and voices belonged to the same person. Performance in the matching task was not correlated with the degree of agreement on any of the rated dimensions.
The role of man in flight experiment payload missions. Volume 2: Appendices
NASA Technical Reports Server (NTRS)
Malone, T. B.
1973-01-01
In the study to determine the role of man in Sortie Lab operations, a functional model of a generalized experiment system was developed. The results of a requirements analysis, conducted to identify the performance requirements, information requirements, and interface requirements associated with each function in the model, are presented.
Virtual geotechnical laboratory experiments using a simulator
NASA Astrophysics Data System (ADS)
Penumadu, Dayakar; Zhao, Rongda; Frost, David
2000-04-01
The details of a test simulator that provides a realistic environment for performing virtual laboratory experiments in soil mechanics are presented. A computer program, Geo-Sim, that can be used to perform virtual experiments and allow for real-time observations of material response is described. The results of experiments, for a given set of input parameters, are obtained with the test simulator using well-trained artificial-neural-network-based soil models for different soil types and stress paths. Multimedia capabilities are integrated in Geo-Sim, using software that links and controls a laser disc player with a real-time parallel processing ability. During the simulation of a virtual experiment, relevant portions of the video image of a previously recorded test on an actual soil specimen are displayed along with the graphical presentation of the response from the feedforward ANN model predictions. The pilot simulator developed to date includes all aspects related to performing a triaxial test on cohesionless soil under undrained and drained conditions. The benefits of the test simulator are also presented.
CHARACTERIZING DETONATING LX-17 CHARGES CROSSING A TRANSVERSE AIR GAP WITH EXPERIMENTS AND MODELING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lauderbach, L M; Souers, P C; Garcia, F
2009-06-26
Experiments were performed using detonating LX-17 (92.5% TATB, 7.5% Kel-F by weight) charges with transverse air gaps of various widths, with manganin piezoresistive in-situ gauges present. The experiments, performed with 25 mm diameter by 25 mm long LX-17 pellets with the transverse air gap in between, showed that transverse gaps up to about 3 mm could be present without causing the detonation wave to fail to continue as a detonation. The Tarantula/JWL++ code was utilized to model the results and compare with the in-situ gauge records, with some agreement with the experimental data; additional work is needed for a better match to the data. This work presents the experimental details as well as comparisons to the model results.
Li, Dongrui; Cheng, Zhigang; Chen, Gang; Liu, Fangyi; Wu, Wenbo; Yu, Jie; Gu, Ying; Liu, Fengyong; Ren, Chao; Liang, Ping
2018-04-03
To test the accuracy and efficacy of a multimodality imaging-compatible insertion robot with a respiratory motion calibration module designed for ablation of liver tumors in phantom and animal models, and to evaluate and compare the influence of intervention experience on robot-assisted and ultrasound-guided ablation procedures. Accuracy tests on a rigid body/phantom model with a respiratory movement simulation device, and microwave ablation tests on porcine liver tumor/rabbit liver cancer, were performed either with the robot we designed or with traditional ultrasound guidance, by physicians with or without intervention experience. In the accuracy tests performed by the physicians without intervention experience, the insertion accuracy and efficiency of the robot-assisted group were higher than those of the ultrasound-guided group, with statistically significant differences. In the microwave ablation tests performed by the physicians without intervention experience, a better complete ablation rate was achieved when applying the robot. In the microwave ablation tests performed by the physicians with intervention experience, there was no statistically significant difference in the insertion number and total ablation time between the robot-assisted group and the ultrasound-guided group. The evaluation by the NASA-TLX suggested that the robot-assisted insertion and microwave ablation process performed by physicians with or without experience was more comfortable. The multimodality imaging-compatible insertion robot with a respiratory motion calibration module designed for ablation of liver tumors could increase insertion accuracy and ablation efficacy, and minimize the influence of the physicians' experience. The ablation procedure could be performed more comfortably and with less stress with the application of the robot.
von Stosch, Moritz; Hamelink, Jan-Martijn; Oliveira, Rui
2016-09-01
In this study, step variations in temperature, pH, and carbon substrate feeding rate were performed within five high-cell-density Escherichia coli fermentations to assess whether intraexperiment step changes can, in principle, be used to explore the process operation space in a design-of-experiments manner. A dynamic process modeling approach was adopted to determine parameter interactions. A bioreactor model was integrated with an artificial neural network that describes biomass and product formation rates as functions of the varied fed-batch fermentation conditions for heterologous protein production. A model reliability measure was introduced to assess in which process region the model can be expected to predict process states accurately. It was found that the model could accurately predict process states of multiple fermentations performed at fixed conditions within the determined validity domain. The results suggest that intraexperimental variations of process conditions could be used to reduce the number of experiments by a factor which, in the limit, would be equivalent to the number of intraexperimental variations per experiment. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1343-1352, 2016. © 2016 American Institute of Chemical Engineers.
Human judgment vs. quantitative models for the management of ecological resources.
Holden, Matthew H; Ellner, Stephen P
2016-07-01
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66% of the time. © 2016 by the Ecological Society of America.
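A toy version of the kind of model-based harvest optimization compared against human judgment above: a grid search for the constant harvest fraction that maximizes cumulative yield under logistic growth. All parameter values are invented, and the game's actual age-structured dynamics and cost structure are not modeled here.

```python
import numpy as np

def total_yield(h, r=0.8, K=1.0, n0=0.5, T=50):
    """Total harvest over T steps with a fixed harvest fraction h."""
    n, total = n0, 0.0
    for _ in range(T):
        catch = h * n
        total += catch
        n = n - catch                    # escapement after harvest
        n = n + r * n * (1.0 - n / K)    # logistic growth on the escapement
    return total

grid = np.linspace(0.0, 0.9, 91)
best = max(grid, key=total_yield)
print(f"best harvest fraction ~ {best:.2f}, total yield = {total_yield(best):.2f}")
```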
Leakage flow simulation in a specific pump model
NASA Astrophysics Data System (ADS)
Dupont, P.; Bayeul-Lainé, A. C.; Dazin, A.; Bois, G.; Roussette, O.; Si, Q.
2014-03-01
This paper deals with the influence of the leakage flow existing in the SHF pump model on the analysis of the internal flow behaviour inside the vane diffuser and on the pump model performance, using both experiments and calculations. PIV measurements have been performed at different hub-to-shroud planes inside one diffuser channel passage for a given speed of rotation and various flow rates. For each operating condition, the PIV measurements have been triggered at different angular impeller positions. The performance and the static pressure rise of the diffuser were also measured using a three-hole probe. The numerical simulations were carried out with the Star CCM+ 8.06 code (RANS frozen-rotor and unsteady calculations). Comparisons between numerical and experimental results are presented and discussed for three flow rates. The diffuser performance obtained from the numerical simulations is compared to the performance obtained from the three-hole probe indications. The comparisons show little influence of the fluid leakage on global performance but a real improvement in the prediction of the impeller efficiency, the pump efficiency, and the velocity distributions. These results show that leakage is an important parameter that has to be taken into account in order to make improved comparisons between numerical approaches and experiments in such a specific model set-up.
Markov Jump-Linear Performance Models for Recoverable Flight Control Computers
NASA Technical Reports Server (NTRS)
Zhang, Hong; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
Single event upsets in digital flight control hardware induced by atmospheric neutrons can reduce system performance and possibly introduce a safety hazard. One method currently under investigation to help mitigate the effects of these upsets is NASA Langley's Recoverable Computer System. In this paper, a Markov jump-linear model is developed for a recoverable flight control system, which will be validated using data from future experiments with simulated and real neutron environments. The method of tracking error analysis and the plan for the experiments are also described.
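For concreteness, a minimal sketch of a Markov jump-linear model of this flavor, with invented dynamics and upset/recovery rates: the closed-loop system matrix jumps between a nominal mode and a degraded recovery mode according to a Markov chain, and the mean-square tracking error is estimated by simulation.

```python
import numpy as np

A = {0: np.array([[0.95, 0.10], [0.0, 0.90]]),   # mode 0: nominal closed loop
     1: np.array([[0.99, 0.10], [0.0, 0.99]])}   # mode 1: degraded during recovery
P = np.array([[0.999, 0.001],                    # P[i, j] = Prob(next mode j | mode i)
              [0.200, 0.800]])                   # recovery completes w.p. 0.2 per step

rng = np.random.default_rng(2)
x, mode, err = np.array([1.0, 0.0]), 0, []
for k in range(20000):
    x = A[mode] @ x + rng.normal(0, 0.01, 2)     # jump-linear dynamics + process noise
    err.append(x[0] ** 2)                        # tracking error component
    mode = rng.choice(2, p=P[mode])              # Markov mode transition
print("estimated mean-square tracking error:", np.mean(err))
```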
Organizational and Market Influences on Physician Performance on Patient Experience Measures
Rodriguez, Hector P; von Glahn, Ted; Rogers, William H; Safran, Dana Gelb
2009-01-01
Objective: To examine the extent to which medical group and market factors are related to individual primary care physician (PCP) performance on patient experience measures. Data Sources: This study employs Clinician and Group CAHPS survey data (n=105,663) from 2,099 adult PCPs belonging to 34 diverse medical groups across California. Medical group directors were interviewed to assess the magnitude and nature of financial incentives directed at individual physicians and the adoption of patient experience improvement strategies. Primary care services area (PCSA) data were used to characterize the market environment of physician practices. Study Design: We used multilevel models to estimate the relationship between medical group and market factors and physician performance on each Clinician and Group CAHPS measure. Models statistically controlled for respondent characteristics and accounted for the clustering of respondents within physicians, physicians within medical groups, and medical groups within PCSAs using random effects. Principal Findings: Compared with physicians belonging to independent practice associations, physicians belonging to integrated medical groups had better performance on the communication (p=.007) and care coordination (p=.03) measures. Physicians belonging to medical groups with greater numbers of PCPs had better performance on all measures. The use of patient experience improvement strategies was not associated with performance. Greater emphasis on productivity and efficiency criteria in individual physician financial incentive formulae was associated with worse access to care (p=.04). Physicians located in PCSAs with higher area-level deprivation had worse performance on the access to care (p=.04) and care coordination (p<.001) measures. Conclusions: Physicians from integrated medical groups and groups with greater numbers of PCPs performed better on several patient experience measures, suggesting that organized care processes adopted by these groups may enhance patients' experiences. Physicians practicing in markets with high concentrations of vulnerable populations may be disadvantaged by constraints that affect performance. Future studies should clarify the extent to which performance deficits associated with area-level deprivation are modifiable. PMID:19674429
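A hedged two-level sketch of the multilevel modeling strategy named in the Study Design: synthetic patient ratings nested within physicians, a physician-level random intercept, and one group-level covariate. Variable names, data, and effect sizes are invented; the actual analysis has additional levels (medical groups, PCSAs) and many more covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_phys, n_pat = 100, 30
phys = np.repeat(np.arange(n_phys), n_pat)                       # physician IDs
integrated = np.repeat(rng.choice([0, 1], n_phys), n_pat)        # group-level covariate
phys_effect = np.repeat(rng.normal(0, 0.5, n_phys), n_pat)       # random intercepts
rating = 7.0 + 0.3 * integrated + phys_effect + rng.normal(0, 1.0, len(phys))
df = pd.DataFrame({"rating": rating, "integrated": integrated, "phys": phys})

# Mixed-effects model: fixed effect of group type, random intercept per physician
fit = smf.mixedlm("rating ~ integrated", df, groups=df["phys"]).fit()
print(fit.summary())
```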
Livingstone Model-Based Diagnosis of Earth Observing One Infusion Experiment
NASA Technical Reports Server (NTRS)
Hayden, Sandra C.; Sweet, Adam J.; Christa, Scott E.
2004-01-01
The Earth Observing One satellite, launched in November 2000, is an active earth science observation platform. This paper reports on the progress of an infusion experiment in which the Livingstone 2 Model-Based Diagnostic engine is deployed on Earth Observing One, demonstrating the capability to monitor the nominal operation of the spacecraft under command of an on-board planner, and demonstrating on-board diagnosis of spacecraft failures. Design and development of the experiment, specification and validation of diagnostic scenarios, characterization of performance results and benefits of the model- based approach are presented.
In-Flight Thermal Performance of the Lidar In-Space Technology Experiment
NASA Technical Reports Server (NTRS)
Roettker, William
1995-01-01
The Lidar In-Space Technology Experiment (LITE) was developed at NASA's Langley Research Center to explore the applications of lidar operated from an orbital platform. As a technology demonstration experiment, LITE was developed to gain experience designing and building future operational orbiting lidar systems. Since LITE was the first lidar system to be flown in space, an important objective was to validate instrument design principles in such areas as thermal control, laser performance, instrument alignment and control, and autonomous operations. Thermal and structural analysis models of the instrument were developed during the design process to predict the behavior of the instrument during its mission. In order to validate those mathematical models, extensive engineering data were recorded during all phases of LITE's mission. This in-flight engineering data was compared with preflight predictions and, when required, adjustments to the thermal and structural models were made to more accurately match the instrument's actual behavior. The results of this process for the thermal analysis and design of LITE are presented in this paper.
Loft, Shayne; Bolland, Scott; Humphreys, Michael S; Neal, Andrew
2009-06-01
A performance theory for conflict detection in air traffic control is presented that specifies how controllers adapt decisions to compensate for environmental constraints. This theory is then used as a framework for a model that can fit controller intervention decisions. The performance theory proposes that controllers apply safety margins to ensure separation between aircraft. These safety margins are formed through experience and reflect the biasing of decisions to favor safety over accuracy, as well as expectations regarding uncertainty in aircraft trajectory. In 2 experiments, controllers indicated whether they would intervene to ensure separation between pairs of aircraft. The model closely predicted the probability of controller intervention across the geometry of problems and as a function of controller experience. When controller safety margins were manipulated via task instructions, the parameters of the model changed in the predicted direction. The strength of the model over existing and alternative models is that it better captures the uncertainty and decision biases involved in the process of conflict detection. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
A Preliminary Assessment of the SURF Reactive Burn Model Implementation in FLAG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Carl Edward; McCombe, Ryan Patrick; Carver, Kyle
Properly validated and calibrated reactive burn models (RBM) can be useful engineering tools for assessing high explosive performance and safety. Experiments with high explosives are expensive, so inexpensive RBM calculations are increasingly relied on for predictive analysis of performance and safety. This report discusses the validation of Menikoff and Shaw's SURF reactive burn model, which has recently been implemented in the FLAG code. The LANL Gapstick experiment is discussed, as is its utility in reactive burn model validation. Data obtained from pRad for the LT-63 series are also presented, along with FLAG simulations using SURF for both PBX 9501 and PBX 9502. Calibration parameters for both explosives are presented.
Advances in petascale kinetic plasma simulation with VPIC and Roadrunner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Kevin J; Albright, Brian J; Yin, Lin
2009-01-01
VPIC, a first-principles 3D electromagnetic charge-conserving relativistic kinetic particle-in-cell (PIC) code, was recently adapted to run on Los Alamos's Roadrunner, the first supercomputer to break a petaflop (10^15 floating point operations per second) in the TOP500 supercomputer performance rankings. The authors give a brief overview of the modeling capabilities and optimization techniques used in VPIC and the computational characteristics of petascale supercomputers like Roadrunner. They then discuss three applications enabled by VPIC's unprecedented performance on Roadrunner: modeling laser plasma interaction in upcoming inertial confinement fusion experiments at the National Ignition Facility (NIF), modeling short pulse laser GeV ion acceleration, and modeling reconnection in magnetic confinement fusion experiments.
Gillen, Sonja; Gröne, Jörn; Knödgen, Fritz; Wolf, Petra; Meyer, Michael; Friess, Helmut; Buhr, Heinz-Johannes; Ritz, Jörg-Peter; Feussner, Hubertus; Lehmann, Kai S
2012-08-01
Natural orifice translumenal endoscopic surgery (NOTES) is a new surgical concept that requires training before it is introduced into clinical practice. The endoscopic–laparoscopic interdisciplinary training entity (ELITE) is a training model for NOTES interventions. The latest research has concentrated on new materials for organs with realistic optical and haptic characteristics and the possibility of high-frequency dissection. This study aimed to assess both the ELITE model in a surgical training course and the construct validity of a newly developed NOTES appendectomy scenario. The 70 attendees of the 2010 Practical Course for Visceral Surgery (Warnemuende, Germany) took part in the study and performed a NOTES appendectomy via a transsigmoidal access. The primary end point was the total time required for the appendectomy, including retrieval of the appendix. Subjective evaluation of the model was performed using a questionnaire. Subgroups were analyzed according to laparoscopic and endoscopic experience. The participants with endoscopic or laparoscopic experience completed the task significantly faster than the inexperienced participants (p = 0.009 and 0.019, respectively). Endoscopic experience was the strongest influencing factor, whereas laparoscopic experience had limited impact on the participants with previous endoscopic experience. As shown by the findings, 87.3% of the participants stated that the ELITE model was suitable for the NOTES training scenario, and 88.7% found the newly developed model anatomically realistic. This study was able to establish face and construct validity for the ELITE model with a large group of surgeons. The ELITE model seems to be well suited for the training of NOTES as a new surgical technique in an established gastrointestinal surgery skills course.
Systematic Analysis of Hollow Fiber Model of Tuberculosis Experiments.
Pasipanodya, Jotam G; Nuermberger, Eric; Romero, Klaus; Hanna, Debra; Gumbo, Tawanda
2015-08-15
The in vitro hollow fiber system model of tuberculosis (HFS-TB), in tandem with Monte Carlo experiments, was introduced more than a decade ago. Since then, it has been used to perform a large number of tuberculosis pharmacokinetics/pharmacodynamics (PK/PD) studies that have not been subjected to systematic analysis. We performed a literature search to identify all HFS-TB experiments published between 1 January 2000 and 31 December 2012. There was no exclusion of articles by language. Bias minimization was according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Steps for reporting systematic reviews were followed. There were 22 HFS-TB studies published, of which 12 were combination therapy studies and 10 were monotherapy studies. There were 4 stand-alone Monte Carlo experiments that utilized quantitative output from the HFS-TB. All experiments reported drug pharmacokinetics, which recapitulated those encountered in humans. HFS-TB studies included log-phase growth studies under ambient air, semidormant bacteria at pH 5.8, and nonreplicating persisters at low oxygen tension of ≤ 10 parts per billion. The studies identified antibiotic exposures associated with optimal kill of Mycobacterium tuberculosis and suppression of acquired drug resistance (ADR) and informed predictions about optimal clinical doses, expected performance of standard doses and regimens in patients, and expected rates of ADR, as well as a proposal of new susceptibility breakpoints. The HFS-TB model offers the ability to perform PK/PD studies including humanlike drug exposures, to identify bactericidal and sterilizing effect rates, and to identify exposures associated with suppression of drug resistance. Because of the ability to perform repetitive sampling from the same unit over time, the HFS-TB vastly improves statistical power and facilitates the execution of time-to-event analyses and repeated event analyses, as well as dynamic system pharmacology mathematical models. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
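An illustrative Monte Carlo experiment of the kind used in tandem with the HFS-TB: draw between-patient pharmacokinetic variability, compute the AUC/MIC exposure index, and report the probability of attaining an exposure target of the sort identified in hollow fiber studies. Every number below (dose, clearance distribution, MIC distribution, target) is an invented placeholder, not a value from the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
dose = 600.0                                             # mg/day, hypothetical drug
cl = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=n)  # clearance (L/h), invented
auc = dose / cl                                          # AUC0-24 (mg*h/L) = dose / CL
mic = rng.choice([0.25, 0.5, 1.0], size=n, p=[0.3, 0.5, 0.2])  # assumed MIC distribution
target = 100.0                                           # AUC/MIC target (assumed)
pta = np.mean(auc / mic >= target)                       # probability of target attainment
print(f"probability of target attainment: {pta:.1%}")
```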
Luria-Delbrück Revisited: The Classic Experiment Doesn't Rule out Lamarckian Evolution
NASA Astrophysics Data System (ADS)
Holmes, Caroline; Ghafari, Mahan; Abbas, Anzar; Saravanan, Varun; Nemenman, Ilya
We re-examine data from the classic 1943 Luria-Delbrück fluctuation experiment. This experiment is often credited with establishing that phage resistance in bacteria is acquired through a Darwinian mechanism (natural selection on standing variation) rather than through a Lamarckian mechanism (environmentally induced mutations). We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms. Analysis of the combined model was not performed in the 1943 paper, nor was analysis of the possibility that neither model fits the experiment. Using Bayesian model selection, we find that: 1) all datasets from the paper favor Darwinian over purely Lamarckian evolution, 2) some of the datasets are unable to distinguish between the purely Darwinian and the combined models, and 3) the other datasets cannot be explained by any of the models considered. In summary, the classic experiment cannot rule out Lamarckian contributions to the evolutionary dynamics. This work was supported by National Science Foundation Grant 1410978, NIH training Grant 5R90DA033462, and James S. McDonnell Foundation Grant 220020321.
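A toy version of the simplest part of this comparison: the purely Lamarckian (induced-mutation) model predicts Poisson-distributed resistant counts across cultures, while the purely Darwinian model predicts the heavy-tailed Luria-Delbrück distribution (Lea-Coulson recursion). The sketch compares fixed-parameter log-likelihoods on invented fluctuation-test counts; the actual analysis integrates over parameters (Bayesian model selection) and includes a combined model, both omitted here.

```python
import numpy as np
from scipy import stats

def luria_delbruck_pmf(n_max, m):
    """Lea-Coulson recursion: p0 = exp(-m); pn = (m/n) * sum_k pk / (n - k + 1)."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        p[n] = (m / n) * sum(p[k] / (n - k + 1.0) for k in range(n))
    return p

# Invented resistant-colony counts with "jackpot" cultures, for illustration only
counts = np.array([0, 0, 1, 0, 2, 0, 0, 4, 0, 1, 25, 0, 3, 0, 0, 88, 1, 0, 0, 2])

m = 2.0                                              # expected mutation events (assumed)
ld = luria_delbruck_pmf(counts.max(), m)
loglik_darwin = np.log(ld[counts]).sum()             # Darwinian: Luria-Delbrueck pmf
loglik_lamarck = stats.poisson.logpmf(counts, counts.mean()).sum()  # Lamarckian: Poisson
print("log-likelihood difference (Darwin - Lamarck):",
      round(loglik_darwin - loglik_lamarck, 1))
```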
High subsonic flow tests of a parallel pipe followed by a large area ratio diffuser
NASA Technical Reports Server (NTRS)
Barna, P. S.
1975-01-01
Experiments were performed on a pilot model duct system in order to explore its aerodynamic characteristics. The model was scaled from a design projected for the high speed operation mode of the Aircraft Noise Reduction Laboratory. The test results show that the model performed satisfactorily and therefore the projected design will most likely meet the specifications.
NASA Astrophysics Data System (ADS)
Hübler, M.; Gurka, M.; Schmeer, S.; Breuer, U. P.
2013-09-01
In this contribution we present a comprehensive theoretical and experimental description of an active shape memory alloy (SMA) fiber-reinforced composite (FRP) hybrid structure. The major influences on actuation performance arising from variations in the design and manufacturing process are discussed, utilizing a new phenomenological model to describe the actuating SMA material. The different material properties of the activated and unactivated SMA, as well as the influence of different loading conditions and pre-treatment of the material, are taken into account in this model. To validate our material model we performed new actuation experiments with an exemplary SMA-FRP structure, which we compared to finite element (FE) simulation results. Our FE model is based on a material model for the actuating SMA elements derived from experiments and on data on the actual microscopic geometry of the hybrid composite. It is therefore able to predict very precisely the actuation behavior of a typical FRP structure for industrial use cases: a thin-walled CFRP sheet with SMA wires attached to the top, performing a bending motion with a maximum deflection of approximately 25% of its length.
Papantoniou, Panagiotis
2018-04-03
The present research has 2 main objectives. The first is to investigate whether latent model analysis through a structural equation model can be implemented on driving simulator data in order to define an unobserved driving performance variable. The second is to investigate and quantify the effect of several risk factors, including distraction sources, driver characteristics, and road and traffic environment, on overall driving performance rather than on independent driving performance measures. For the scope of the present research, 95 participants from all age groups were asked to drive under different types of distraction (conversation with passenger, cell phone use) in urban and rural road environments with low and high traffic volume in a driving simulator experiment. In the framework of the statistical analysis, a correlation table investigating statistical relationships between driving simulator measures is presented, and a structural equation model is developed in which overall driving performance is estimated as a latent variable based on several individual driving simulator measures (a sketch of this approach follows below). Results confirm the suitability of the structural equation model and indicate that the selection of the specific performance measures that define overall performance should be guided by a rule of representativeness between the selected variables. Moreover, results indicate that conversation with a passenger did not have a statistically significant effect, suggesting that drivers do not change their performance while conversing with a passenger compared to undistracted driving. On the other hand, results support the hypothesis that cell phone use has a negative effect on driving performance. Furthermore, regarding driver characteristics, age, gender, and experience all have a significant effect on driving performance, indicating that driver-related characteristics play the most crucial role in overall driving performance. The findings of this study allow a new approach to the investigation of driving behavior in driving simulator experiments and in general. By the successful implementation of the structural equation model, driving behavior can be assessed in terms of overall performance and not through individual performance measures, which allows an important scientific step forward from piecemeal analyses to a sound combined analysis of the interrelationship between several risk factors and overall driving performance.
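The referenced sketch of the latent-variable approach, assuming the third-party semopy package and its lavaan-style model syntax: overall performance is a latent variable measured by several simulator indicators and regressed on driver and task factors. All variable names and the synthetic data are invented placeholders, not the paper's measures.

```python
import numpy as np
import pandas as pd
import semopy  # assumed available: pip install semopy

rng = np.random.default_rng(5)
n = 95
age = rng.uniform(20, 70, n)
cell = rng.choice([0, 1], n)                             # cell phone use indicator
latent = 0.02 * age + 0.5 * cell + rng.normal(0, 1, n)   # higher = worse performance
df = pd.DataFrame({
    "age": age, "cellphone": cell,
    "speed_var": latent + rng.normal(0, 0.5, n),         # indicator 1
    "lateral_dev": 0.8 * latent + rng.normal(0, 0.5, n), # indicator 2
    "reaction_time": 0.6 * latent + rng.normal(0, 0.5, n),  # indicator 3
})

desc = """
performance =~ speed_var + lateral_dev + reaction_time
performance ~ age + cellphone
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())   # loadings, path coefficients, p-values
```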
Goyal, Saumitra; Radi, Mohamed Abdel; Ramadan, Islam Karam-allah; Said, Hatem Galal
2016-01-01
Purpose: Arthroscopic skills training outside the operating room may decrease risks and errors by trainee surgeons. There is a need for a simple objective method of evaluating the proficiency and skill of arthroscopy trainees using a simple bench-model arthroscopic simulator. The aim of this study is to correlate motor task performance with level of prior arthroscopic experience and establish benchmarks for training modules. Methods: Twenty orthopaedic surgeons performed a set of tasks to assess (a) arthroscopic triangulation, (b) navigation, (c) object handling and (d) meniscus trimming using the SAWBONES "FAST" arthroscopy skills workstation. Time to completion and errors were recorded. The subjects were divided into four levels, "Novice", "Beginner", "Intermediate" and "Advanced", based on previous arthroscopy experience, for analyses of performance. Results: Task performance under the transparent dome was not related to the experience of the surgeon, unlike the opaque dome, highlighting the importance of the hand-eye coordination required in arthroscopy. Median time to completion for each task improved as the level of experience increased, and this was found to be statistically significant (p < .05), e.g., time for maze navigation (Novice, 166 s; Beginner, 135.5 s; Intermediate, 100 s; Advanced, 97.5 s), with similar results for all tasks. The majority (>85%) of subjects across all levels reported improvement in performance with sequential tasks. Conclusion: Use of the arthroscope requires visuo-spatial coordination, which is a skill that develops with practice. This simple box model can reliably differentiate arthroscopic skills based on experience and can be used to monitor the progression of trainees' skills in institutions. PMID:27801643
Posttest analysis of LOFT LOCE L2-3 using the ESA RELAP4 blowdown model. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perryman, J.L.; Samuels, T.K.; Cooper, C.H.
A posttest analysis of the blowdown portion of Loss-of-Coolant Experiment (LOCE) L2-3, which was conducted in the Loss-of-Fluid Test (LOFT) facility, was performed using the experiment safety analysis (ESA) RELAP4/MOD5 computer model. Measured experimental parameters were compared with the calculations in order to assess the conservatisms in the ESA RELAP4/MOD5 model.
Analytical Model For Fluid Dynamics In A Microgravity Environment
NASA Technical Reports Server (NTRS)
Naumann, Robert J.
1995-01-01
Report presents analytical approximation methodology for providing coupled fluid-flow, heat, and mass-transfer equations in microgravity environment. Experimental engineering estimates accurate to within factor of 2 made quickly and easily, eliminating need for time-consuming and costly numerical modeling. Any proposed experiment reviewed to see how it would perform in microgravity environment. Model applied in commercial setting for preliminary design of low-Grashoff/Rayleigh-number experiments.
NASA Astrophysics Data System (ADS)
Rodríguez-Escales, Paula; Folch, Albert; van Breukelen, Boris M.; Vidal-Gavilan, Georgina; Sanchez-Vila, Xavier
2016-07-01
Enhanced In situ Biodenitrification (EIB) is a viable technology for nitrate removal in subsurface water resources. Optimizing the performance of EIB implies devising an appropriate feeding strategy involving two design parameters: the carbon injection frequency and the C:N ratio of the organic substrate-nitrate mixture. Here we model data on the spatial and temporal evolution of nitrate (up to 1.2 mM), organic carbon (ethanol), and biomass measured during a 342-day-long laboratory column experiment (published in Vidal-Gavilan et al., 2014). Effective porosity was 3% lower and dispersivity showed a sevenfold increase at the end of the experiment compared to the beginning. These changes in transport parameters were attributed to the development of a biofilm. A reactive transport model explored the EIB performance in response to daily and weekly feeding strategies. The latter resulted in significant temporal variation of nitrate and ethanol concentrations at the outlet of the column. On the contrary, a daily feeding strategy resulted in quite stable and low concentrations at the outlet and complete denitrification. At intermediate times (six months into the experiment), it was possible to reduce the carbon load, and consequently the C:N ratio (from 2.5 to 1), partly because biomass decay acted as an endogenous carbon source for respiration, sustaining the denitrification rates, and partly due to the induced dispersivity caused by the well-developed biofilm, which enhanced mixing between the ethanol and nitrate and correspondingly improved the denitrification rates. The inclusion of a dual-domain model improved the fit for the last days of the experiment as well as for the tracer test performed at day 342, demonstrating a potential transition to anomalous transport that may be caused by the development of biofilm. This modeling work is a step toward devising optimal injection conditions and substrate rates to enhance EIB performance by minimizing the overall supply of electron donor, and thus the cost of the remediation strategy.
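A minimal dual-Monod sketch of the denitrification kinetics that typically sit inside such a reactive-transport model: biomass grows on the electron donor (ethanol) while reducing nitrate, and decaying biomass recycles carbon back to respiration. All rate constants, yields, and initial conditions are invented for illustration, and transport is omitted entirely.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Y, b = 0.2, 0.4, 0.01   # max growth rate (1/h), yield, decay (1/h); assumed
K_no3, K_dc = 0.1, 0.2          # half-saturation constants (mM); assumed
f_n = 0.98                       # mM nitrate reduced per mM donor oxidized; assumed

def rhs(t, y):
    no3, dc, X = y               # nitrate, dissolved organic carbon, biomass
    mu = mu_max * (no3 / (K_no3 + no3)) * (dc / (K_dc + dc))   # dual-Monod rate
    growth, decay = mu * X, b * X
    return [-f_n * growth / Y,           # nitrate consumption
            -growth / Y + decay,         # donor consumption + endogenous C from decay
            growth - decay]              # net biomass change

sol = solve_ivp(rhs, [0, 72], [1.2, 3.0, 0.05])   # 72 h, initial mM and biomass
print("nitrate after 72 h:", round(sol.y[0, -1], 3), "mM")
```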
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheehey, P.T.; Faehl, R.J.; Kirkpatrick, R.C.
1997-12-31
Magnetized Target Fusion (MTF) experiments, in which a preheated and magnetized target plasma is hydrodynamically compressed to fusion conditions, present some challenging computational modeling problems. Recently, joint experiments relevant to MTF (Russian acronym MAGO, for Magnitnoye Obzhatiye, or magnetic compression) have been performed by Los Alamos National Laboratory and the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF). Modeling of target plasmas must accurately predict plasma densities, temperatures, fields, and lifetime; dense plasma interactions with wall materials must be characterized. Modeling of magnetically driven imploding solid liners, for compression of target plasmas, must address issues such as Rayleigh-Taylor instability growth in the presence of material strength, and glide plane-liner interactions. Proposed experiments involving liner-on-plasma compressions to fusion conditions will require integrated target plasma and liner calculations. Detailed comparison of the modeling results with experiment will be presented.
Theory and performance of plated thermocouples.
NASA Technical Reports Server (NTRS)
Pesko, R. N.; Ash, R. L.; Cupschalk, S. G.; Germain, E. F.
1972-01-01
A theory has been developed to describe the performance of thermocouples which have been formed by electroplating portions of one thermoelectric material with another. The electroplated leg of the thermocouple was modeled as a collection of infinitesimally small homogeneous thermocouples connected in series. Experiments were performed using several combinations of Constantan wire sizes and copper plating thicknesses. A transient method was used to develop the thermoelectric calibrations, and the theory was found to be in quite good agreement with the experiments. In addition, data gathered in a Soviet experiment were also found to be in close agreement with the theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soukhanovskii, V. A.
2017-09-13
Successful high-performance plasma operation with a radiative divertor has been demonstrated on many tokamak devices; however, significant uncertainty remains in accurately modeling detachment thresholds and in how detachment depends on divertor geometry. Whereas it was originally planned to perform dedicated divertor experiments on the National Spherical Torus Experiment Upgrade to address critical detachment and divertor geometry questions for this milestone, the experiments were deferred due to technical difficulties. Instead, existing NSTX divertor data were summarized and re-analyzed where applicable, and additional simulations were performed.
Skylab M518 multipurpose furnace convection analysis
NASA Technical Reports Server (NTRS)
Bourgeois, S. V.; Spradley, L. W.
1975-01-01
An analysis was performed of the convection which existed in ground tests and during Skylab processing of two experiments: vapor growth of IV-VI compounds, and growth of spherical crystals. A parallel analysis was also performed on the Skylab indium antimonide crystals experiment, because indium antimonide (InSb) was used and a free surface existed in the tellurium-doped Skylab III sample. In addition, brief analyses were also performed of the microsegregation in germanium experiment, because the Skylab crystals indicated turbulent convection effects. Simple dimensional analysis calculations and a more accurate, but complex, convection computer model were used in the analysis.
NASA Astrophysics Data System (ADS)
Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team
2017-12-01
The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands that reliable profile control routines be established in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulations and TCV L-mode discharges, where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and in the presence of disturbances. The controller exploits knowledge of the time-varying actuator limits in the actuator input calculation itself, such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is based solely on RAPTOR model predictions, due to the absence of internal current density measurements in TCV. These results encourage further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.
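A toy receding-horizon sketch of the model predictive control idea: at each step a constrained quadratic program is solved over a short horizon using a linear model, and only the first input is applied. The 2-state model, actuator limits, and weights below are invented; the actual controller uses the RAPTOR transport model with gyrotron power and plasma current as actuators.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.95, 0.05], [0.0, 0.90]])   # invented linear profile dynamics
B = np.array([[0.1, 0.0], [0.0, 0.2]])
x, target, N = np.array([0.0, 0.0]), np.array([1.0, 0.5]), 10

for step in range(30):
    X = cp.Variable((2, N + 1))
    U = cp.Variable((2, N))
    cons = [X[:, 0] == x]
    cost = 0
    for k in range(N):
        cons += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k],
                 U[:, k] >= 0, U[:, k] <= 1.0]              # actuator limits
        cost += cp.sum_squares(X[:, k + 1] - target) + 0.01 * cp.sum_squares(U[:, k])
    cp.Problem(cp.Minimize(cost), cons).solve()
    x = A @ x + B @ U.value[:, 0]                           # apply first input only
print("final state:", x.round(3), "target:", target)
```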
Unpacking buyer-seller differences in valuation from experience: A cognitive modeling approach.
Pachur, Thorsten; Scheibehenne, Benjamin
2017-12-01
People often indicate a higher price for an object when they own it (i.e., as sellers) than when they do not (i.e., as buyers), a phenomenon known as the endowment effect. We develop a cognitive modeling approach to formalize, disentangle, and compare alternative psychological accounts (e.g., loss aversion, loss attention, strategic misrepresentation) of such buyer-seller differences in pricing decisions for monetary lotteries. To also be able to test possible buyer-seller differences in memory and learning, we study pricing decisions from experience, obtained with the sampling paradigm, where people learn about a lottery's payoff distribution from sequential sampling. We first formalize the different accounts as models within three computational frameworks (reinforcement learning, instance-based learning theory, and cumulative prospect theory), and then fit the models to empirical selling and buying prices. In Study 1 (a reanalysis of published data with hypothetical decisions), models assuming buyer-seller differences in response bias (implementing a strategic-misrepresentation account) performed best; models assuming buyer-seller differences in choice sensitivity or memory (implementing a loss-attention account) generally fared worst. In a new experiment involving incentivized decisions (Study 2), models assuming buyer-seller differences in both outcome sensitivity (as proposed by a loss-aversion account) and response bias performed best. In both studies, the models implemented in cumulative prospect theory performed best. Model recovery studies validated our cognitive modeling approach, showing that the models can be distinguished rather well. In summary, our analysis supports a loss-aversion account of the endowment effect, but also reveals a substantial contribution of simple response bias.
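One simplified formalization of buyer-seller asymmetry under cumulative prospect theory, in the spirit of the framework described above: the buyer codes the price paid as a loss (scaled by the loss-aversion parameter lambda), while the seller prices the lottery in the gain domain. The parameter values (alpha = 0.88, lambda = 2.25) are conventional illustrative choices, not estimates from the paper, and probability weighting is omitted.

```python
import numpy as np

alpha, lam = 0.88, 2.25              # illustrative CPT parameters
outcomes = np.array([32.0, 0.0])     # hypothetical lottery: 50% chance of 32
probs = np.array([0.5, 0.5])

gain_util = probs @ outcomes ** alpha        # subjective value of the lottery

# Seller indifference: v(p_sell) = gain_util, with v(p) = p^alpha for gains
p_sell = gain_util ** (1 / alpha)

# Buyer indifference: lam * p_buy^alpha = gain_util (price paid is coded as a loss)
p_buy = (gain_util / lam) ** (1 / alpha)

print(f"selling price {p_sell:.2f} > buying price {p_buy:.2f} (endowment gap)")
```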
Kõrge, Kristina; Berndt, Nadine; Hohmann, Juergen; Romano, Florence; Hiligsmann, Mickael
2017-01-01
The health technology assessment (HTA) Core Model® is a tool for defining and standardizing the elements of HTA analyses within several domains for producing structured reports. This study explored the parallels between the Core Model and a national HTA report. Experiences from various European HTA agencies were also investigated to determine the Core Model's adaptability to national reports. A comparison between a national report on Genetic Counseling, produced by the Cellule d'expertise médicale Luxembourg, and the Core Model was performed to identify parallels in terms of relevant and comparable assessment elements (AEs). Semi-structured interviews with five representatives from European HTA agencies were performed to assess their user experiences with the Core Model. The comparative study revealed that 50 percent of the total number (n = 144) of AEs in the Core Model were relevant for the national report. Of these 144 AEs from the Core Model, 34 (24 percent) were covered in the national report. Some AEs were covered only partly. The interviewees emphasized flexibility in using the Core Model and stated that the most important aspects to be evaluated include characteristics of the disease and technology, clinical effectiveness, economic aspects, and safety. In the present study, the national report covered an acceptable number of AEs of the Core Model. These results need to be interpreted with caution because only one comparison was performed. The Core Model can be used in a flexible manner, applying only those elements that are relevant from the perspective of the technology assessment and specific country context.
Alternate methodologies to experimentally investigate shock initiation properties of explosives
NASA Astrophysics Data System (ADS)
Svingala, Forrest R.; Lee, Richard J.; Sutherland, Gerrit T.; Benjamin, Richard; Boyle, Vincent; Sickels, William; Thompson, Ronnie; Samuels, Phillip J.; Wrobel, Erik; Cornell, Rodger
2017-01-01
Reactive flow models are desired for new explosive formulations early in the development stage. Traditionally, these models are parameterized using carefully controlled 1-D shock experiments, including gas-gun testing with embedded gauges and wedge testing with explosive plane wave lenses (PWL). These experiments are easy to interpret due to their 1-D nature, but are expensive to perform and cannot be performed at all explosive test facilities. This work investigates alternative methods to probe the shock-initiation behavior of new explosives using widely available pentolite gap-test donors and simple time-of-arrival diagnostics. These experiments can be performed at low cost at most explosives testing facilities, allowing the experimental data needed to parameterize reactive flow models to be collected much earlier in the development of an explosive formulation. However, the fundamentally 2-D nature of these tests may increase the modeling burden in parameterizing the models and reduce their general applicability. Several variations of the so-called modified gap test were investigated and evaluated for suitability as alternatives to established 1-D gas gun and PWL techniques. At least partial agreement with 1-D test methods was observed for the explosives tested, and future work is planned to scope the applicability and limitations of these experimental techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corradini, D.; Rovere, M.; Gallo, P., E-mail: gallop@fis.uniroma3.it
2015-09-21
In a previous study [Gallo et al., Nat. Commun. 5, 5806 (2014)], we have shown an important connection between thermodynamic and dynamical properties of water in the supercritical region. In particular, by analyzing the experimental viscosity and the diffusion coefficient obtained in simulations performed using the TIP4P/2005 model, we found that the line of response function maxima in the one-phase region, the Widom line, is connected to a crossover from a liquid-like to a gas-like behavior of the transport coefficients. This is in agreement with recent experiments concerning the dynamics of supercritical simple fluids. We here show how different popular water models (TIP4P/2005, TIP4P, SPC/E, TIP5P, and TIP3P) perform in reproducing thermodynamic and dynamic experimental properties in the supercritical region. In particular, the comparison with experiments shows that all the analyzed models are able to qualitatively predict the dynamical crossover from a liquid-like to a gas-like behavior upon crossing the Widom line. Some of the models perform better in reproducing the pressure-temperature slope of the Widom line of supercritical water once a rigid shift of the phase diagram is applied to bring the critical points to coincide with the experimental one.
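As a sketch of the standard route to the diffusion coefficient used in such comparisons: compute the mean-squared displacement (MSD) over time lags and take the long-time slope via the Einstein relation, MSD(t) = 6 D t in three dimensions. The trajectory below is a random-walk placeholder with invented units, not actual molecular dynamics output.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n_steps, n_atoms = 0.002, 5000, 100            # ps per step, steps, molecules (toy)
steps = rng.normal(0, 0.05, (n_steps, n_atoms, 3))
pos = np.cumsum(steps, axis=0)                     # toy trajectory, nm

lags = np.arange(100, 1000, 100)
# Per-component mean over atoms and time origins, times 3 for the full 3D MSD
msd = np.array([np.mean((pos[lag:] - pos[:-lag]) ** 2) * 3 for lag in lags])

slope = np.polyfit(lags * dt, msd, 1)[0]           # Einstein relation: slope = 6 D
print(f"D ~ {slope / 6:.3f} nm^2/ps")
```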
ERIC Educational Resources Information Center
Riegle, Aaron M.; Gerrity, Kevin W.
2011-01-01
The purpose of this study was to determine the pitch-matching ability of high school choral students. Years of piano experience, middle school performance experience, and model were considered as variables that might affect pitch-matching ability. Gender of participants was also considered when identifying the effectiveness of each model.…
ERIC Educational Resources Information Center
Anderson, G. Ernest, Jr.
The mission of the simulation team of the Model Elementary Teacher Education Project, 1968-71, was to develop simulation tools and conduct appropriate studies of the anticipated operation of that project. The team focused on the experiences of individual students and on the resources necessary for these experiences to be reasonable. This report…
Kehres, Jan; Pedersen, Thomas; Masini, Federico; Andreasen, Jens Wenzel; Nielsen, Martin Meedom; Diaz, Ana; Nielsen, Jane Hvolbæk; Hansen, Ole
2016-01-01
The design, fabrication and performance of a novel and highly sensitive micro-reactor device for performing in situ grazing-incidence X-ray scattering experiments on model catalyst systems is presented. The design of the reaction chamber, etched in silicon-on-insulator (SOI), permits grazing-incidence small-angle X-ray scattering (GISAXS) in transmission through 10 µm-thick entrance and exit windows by using micro-focused beams. An additional thinning of the Pyrex glass reactor lid allows simultaneous acquisition of grazing-incidence wide-angle X-ray scattering (GIWAXS) data. In situ experiments at synchrotron facilities are performed utilizing the micro-reactor and a purpose-designed transportable gas feed and analysis system. The feasibility of simultaneous in situ GISAXS/GIWAXS experiments in the novel micro-reactor flow cell was confirmed with CO oxidation over mass-selected Ru nanoparticles. PMID:26917133
Some advances in experimentation supporting development of viscoplastic constitutive models
NASA Technical Reports Server (NTRS)
Ellis, J. R.; Robinson, D. N.
1985-01-01
The development of a biaxial extensometer capable of measuring axial, torsional, and diametral strains to near-microstrain resolution at elevated temperatures is discussed. An instrument with this capability was needed to provide experimental support to the development of viscoplastic constitutive models. The advantages gained when torsional loading is used to investigate inelastic material response at elevated temperatures are highlighted. The development of the biaxial extensometer was conducted in two stages. The first stage involved a series of bench calibration experiments performed at room temperature. The second stage involved a series of in-place calibration experiments performed at room temperature. A review of the calibration data indicated that all performance requirements regarding resolution, range, stability, and crosstalk had been met by the subject instrument over the temperature range of interest, 21 C to 651 C. The scope of the in-place calibration experiments was expanded to investigate the feasibility of generating stress relaxation data under torsional loading.
Espinosa, G; Rodríguez, R; Gil, J M; Suzuki-Vidal, F; Lebedev, S V; Ciardi, A; Rubiano, J G; Martel, P
2017-03-01
Numerical simulations of laboratory astrophysics experiments on plasma flows require plasma microscopic properties that are obtained by means of an atomic kinetic model. This fact implies a careful choice of the most suitable model for the experiment under analysis; otherwise, the calculations could lead to inaccurate results and inappropriate conclusions. First, a study of the validity of local thermodynamic equilibrium in the calculation of the average ionization, mean radiative properties, and cooling times of argon plasmas is performed in this work, for a range of plasma conditions of interest in laboratory astrophysics experiments on radiative shocks. In the second part, we analyze the influence of the atomic kinetic model used to calculate the plasma microscopic properties for experiments carried out on MAGPIE on radiative bow shocks propagating in argon. The models considered were developed assuming both local and nonlocal thermodynamic equilibrium, and, for the latter situation, we have considered in the kinetic model different effects such as an external radiation field and plasma mixtures. The microscopic properties studied were the average ionization, the charge state distributions, the monochromatic opacities and emissivities, the Planck mean opacity, and the radiative power loss. The microscopic study was made as a post-process of a radiative-hydrodynamic simulation of the experiment. We have also performed a theoretical analysis of the influence of these atomic kinetic models on the criteria for the possible onset of thermal instabilities due to radiative cooling in those experiments in which small structures, potentially due to this kind of instability, were experimentally observed in the bow shock.
McCauley, Peter; Kalachev, Leonid V; Mollicone, Daniel J; Banks, Siobhan; Dinges, David F; Van Dongen, Hans P A
2013-12-01
Recent experimental observations and theoretical advances have indicated that the homeostatic equilibrium for sleep/wake regulation--and thereby sensitivity to neurobehavioral impairment from sleep loss--is modulated by prior sleep/wake history. This phenomenon was predicted by a biomathematical model developed to explain changes in neurobehavioral performance across days in laboratory studies of total sleep deprivation and sustained sleep restriction. The present paper focuses on the dynamics of neurobehavioral performance within days in this biomathematical model of fatigue. Without increasing the number of model parameters, the model was updated by incorporating time-dependence in the amplitude of the circadian modulation of performance. The updated model was calibrated using a large dataset from three laboratory experiments on psychomotor vigilance test (PVT) performance, under conditions of sleep loss and circadian misalignment; and validated using another large dataset from three different laboratory experiments. The time-dependence of circadian amplitude resulted in improved goodness-of-fit in night shift schedules, nap sleep scenarios, and recovery from prior sleep loss. The updated model predicts that the homeostatic equilibrium for sleep/wake regulation--and thus sensitivity to sleep loss--depends not only on the duration but also on the circadian timing of prior sleep. This novel theoretical insight has important implications for predicting operator alertness during work schedules involving circadian misalignment such as night shift work.
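Schematically, the class of model described here combines a homeostatic process with a circadian process whose amplitude is now allowed to vary in time. The following is a hedged rendering, not the paper's exact equations; the five-harmonic circadian form is a conventional choice in this literature:

    \[
    P(t) = u(t) + a(t)\,C(t), \qquad
    C(t) = \sum_{i=1}^{5} c_i \sin\!\left(\frac{2\pi i\,(t - \phi)}{\tau}\right),
    \]

where \(P\) is predicted performance, \(u(t)\) the homeostatic state, \(\tau \approx 24\,\mathrm{h}\), \(\phi\) the circadian phase, and the update described above replaces a constant circadian amplitude with the time-dependent \(a(t)\) without adding new free parameters.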
Modeling of detachment experiments at DIII-D
Canik, John M.; Briesemeister, Alexis R.; Lasnier, C. J.; ...
2014-11-26
Edge fluid–plasma/kinetic–neutral modeling of well-diagnosed DIII-D experiments is performed in order to document in detail how well certain aspects of the experimental measurements are reproduced within the model as the transition to detachment is approached. Results indicate that, at high densities near detachment onset, the poloidal temperature profile produced in the simulations agrees well with that measured in experiment. However, matching the heat flux in the model requires a significant increase in the radiated power compared to what is predicted using standard chemical sputtering rates. Lastly, these results suggest that the model is adequate to predict the divertor temperature, provided that the discrepancy in radiated power level can be resolved.
Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen‐You; Schneidman‐Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez‐Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan‐Xiang; Ritchie, David W.; Maigret, Bernard; Devignes, Marie‐Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A.G.; Bates, Paul A.; Ben‐Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S.; Weng, Zhiping; Vreven, Thom; Pierce, Brian G.; Borrman, Tyler M.; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P.G.L.M.; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S.J.; Visscher, Koen; Kastritis, Panagiotis L.; Bonvin, Alexandre M.J.J.; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X.; Kim, Hyung‐Rae; Roy, Amit; Han, Xusi; Esquivel‐Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J.; Fuller, Jonathan C.; Wade, Rebecca C.; Anishchenko, Ivan; Kundrotas, Petras J.; Vakser, Ilya A.; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero‐Durana, Miguel; Jiménez‐García, Brian; Moal, Iain H.; Férnandez‐Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R.; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A.; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S.; Gray, Jeffrey J.; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey
2016-01-01
We present the results for CAPRI Round 30, the first joint CASP‐CAPRI experiment, which brought together experts from the protein structure prediction and protein–protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact‐sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology‐built subunit models and the smaller pair‐wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. Proteins 2016; 84(Suppl 1):323–348. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:27122118
NASA Astrophysics Data System (ADS)
An, Soyoung; Choi, Woochul; Paik, Se-Bum
2015-11-01
Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment with a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.
Identification of human operator performance models utilizing time series analysis
NASA Technical Reports Server (NTRS)
Holden, F. M.; Shinners, S. M.
1973-01-01
The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.
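As a sketch of the underlying idea in modern notation (not the 1973 report's original implementation): a discrete transfer function for the operator can be identified from sampled input/output records of the tracking task by fitting an ARX model, y[k] = sum_i a_i y[k-i] + sum_j b_j u[k-j] + e[k], with ordinary least squares.

    import numpy as np

    def fit_arx(u, y, na=2, nb=2):
        """Least-squares ARX fit from 1-D numpy arrays of tracking-task
        input u and operator output y; na/nb are model orders."""
        n = max(na, nb)
        rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
                for k in range(n, len(y))]
        theta, *_ = np.linalg.lstsq(np.asarray(rows), y[n:], rcond=None)
        return theta[:na], theta[na:]  # output and input coefficients

Repeating the fit on segments recorded under different stress levels gives the variation of the identified transfer function with stress, in the spirit of the study.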
Physician groups' use of data from patient experience surveys.
Friedberg, Mark W; SteelFisher, Gillian K; Karp, Melinda; Schneider, Eric C
2011-05-01
In Massachusetts, physician groups' performance on validated surveys of patient experience has been publicly reported since 2006. Groups also receive detailed reports of their own performance, but little is known about how physician groups have responded to these reports. The objective of this study was to examine whether and how physician groups are using patient experience data to improve patient care. During 2008, we conducted semi-structured interviews with the leaders of 72 participating physician groups (out of 117 groups receiving patient experience reports). Based on leaders' responses, we identified three levels of engagement with patient experience reporting: no efforts to improve (level 1), efforts to improve only the performance of low-scoring physicians or practice sites (level 2), and efforts to improve group-wide performance (level 3). The main measures were groups' level of engagement and specific efforts to improve patient care. Forty-four group leaders (61%) reported group-wide improvement efforts (level 3), 16 (22%) reported efforts to improve only the performance of low-scoring physicians or practice sites (level 2), and 12 (17%) reported no performance improvement efforts (level 1). Level 3 groups were more likely than others to have an integrated medical group organizational model (84% vs. 31% at level 2 and 33% at level 1; P < 0.005) and to employ the majority of their physicians (69% vs. 25% and 20%; P < 0.05). Among level 3 groups, the most common targets for improvement were access, communication with patients, and customer service. The most commonly reported improvement initiatives were changing office workflow, providing additional training for nonclinical staff, and adopting or enhancing an electronic health record. Despite statewide public reporting, physician groups' use of patient experience data varied widely. Integrated organizational models were associated with greater engagement, and efforts to enhance clinicians' interpersonal skills were uncommon, with groups predominantly focusing on office workflow and support staff.
Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A
2018-06-22
Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularization propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
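For illustration, an L1-regularized propensity model of the kind favored by this study can be fit in a few lines. This is a hedged sketch using scikit-learn, not the authors' large-scale implementation:

    from sklearn.linear_model import LogisticRegressionCV

    def propensity_scores(X, treated):
        """L1-penalized logistic regression propensity model with
        cross-validated regularization strength.
        X: covariate matrix; treated: 0/1 treatment indicator."""
        model = LogisticRegressionCV(
            Cs=10, cv=5, penalty="l1", solver="liblinear",
            scoring="neg_log_loss",
        )
        model.fit(X, treated)
        return model.predict_proba(X)[:, 1]  # P(treatment | covariates)

In contrast with the hdPS univariate screen, the penalty considers all covariates simultaneously, which is the property the study credits for the superior balance and bias reduction.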
NASA Technical Reports Server (NTRS)
Bulfin, R. L.; Perdue, C. A.
1994-01-01
The Mission Planning Division of the Mission Operations Laboratory at NASA's Marshall Space Flight Center is responsible for scheduling experiment activities for space missions controlled at MSFC. In order to draw statistically relevant conclusions, all experiments must be scheduled at least once and may have repeated performances during the mission. An experiment consists of a series of steps which, when performed, provide results pertinent to the experiment's functional objective. Since these experiments require a set of resources such as crew and power, the task of creating a timeline of experiment activities for the mission is one of resource-constrained scheduling. For each experiment, a computer model with detailed information on the steps involved in running the experiment, including crew requirements, processing times, and resource requirements, is created. These models are then loaded into the Experiment Scheduling Program (ESP), which attempts to create a schedule that satisfies all resource constraints. ESP uses a depth-first search technique to place each experiment into a time interval, and a scoring function to evaluate the schedule. The mission planners generate several schedules and choose one with a high value of the scoring function to send through the approval process. The process of approving a mission timeline can take several months. Each timeline must meet the requirements of the scientists, the crew, and various engineering departments, as well as enforce all resource restrictions. No single objective is considered in creating a timeline. The experiment scheduling problem is: given a set of experiments, place each experiment along the mission timeline so that all resource requirements and temporal constraints are met and the timeline is acceptable to all who must approve it. Much work has been done on multicriteria decision making (MCDM). When there are two criteria, schedules that perform well with respect to one criterion will often perform poorly with respect to the other. One schedule dominates another if it performs strictly better on one criterion and no worse on the other. Clearly, dominated schedules are undesirable. A nondominated schedule can be generated by solving an appropriate optimization problem. Generally there are two approaches: the first is a hierarchical approach, while the second requires optimizing a weighting or scoring function.
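The dominance relation described here is easy to state in code. Below is a minimal, hedged sketch (illustrative only, not part of ESP) of filtering candidate schedules, each scored on two criteria where higher is better, down to the nondominated set:

    def dominates(s1, s2):
        """True if schedule score tuple s1 Pareto-dominates s2
        (strictly better on at least one criterion, no worse on any)."""
        return (all(a >= b for a, b in zip(s1, s2))
                and any(a > b for a, b in zip(s1, s2)))

    def nondominated(schedules):
        """Filter score tuples down to the nondominated (Pareto) set."""
        return [s for s in schedules
                if not any(dominates(t, s) for t in schedules if t is not s)]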
Interrelationship of Knowledge, Interest, and Recall: Assessing a Model of Domain Learning.
ERIC Educational Resources Information Center
Alexander, Patricia A.; And Others
1995-01-01
Two experiments involving 125 college and graduate students examined the interrelationship of subject-matter knowledge, interest, and recall in the field of human immunology and biology and assessed cross-domain performance in physics. Patterns of knowledge, interest, and performance fit well with the premises of the Model of Domain Learning. (SLD)
ERIC Educational Resources Information Center
Aldaco, Adrienne L. Gratten
2016-01-01
Chronically low performing schools in the United States have required targeted support and interventions to increase student achievement. In recent years, the school turnaround model has emerged as a swift, dramatic, comprehensive approach to implementing interventions in the lowest performing schools (Calkins, Guenther, Belfiore, & Lash,…
A comprehensive combustion model for biodiesel-fueled engine simulations
NASA Astrophysics Data System (ADS)
Brakora, Jessica L.
Engine models for alternative fuels are available, but few are comprehensive, well-validated models that include accurate physical property data as well as a detailed description of the fuel chemistry. In this work, a comprehensive biodiesel combustion model was created for use in multi-dimensional engine simulations, specifically the KIVA3v R2 code. The model incorporates realistic physical properties in a vaporization model developed for multi-component fuel sprays and applies an improved mechanism for biodiesel combustion chemistry. A reduced mechanism was generated from the methyl decanoate (MD) and methyl-9-decenoate (MD9D) mechanism developed at Lawrence Livermore National Laboratory. It was combined with a multi-component mechanism to include n-heptane in the fuel chemistry. The biodiesel chemistry was represented using a combination of MD, MD9D, and n-heptane, which varied for a given fuel source. The reduced mechanism, which contained 63 species, accurately predicted ignition delay times of the detailed mechanism over a range of engine-specific operating conditions. Physical property data for the five methyl ester components of biodiesel were added to the KIVA library. Spray simulations were performed to ensure that the models adequately reproduce the liquid penetration observed in biodiesel spray experiments. Fuel composition impacted liquid length as expected, with saturated species vaporizing more and penetrating less. Distillation curves were created to ensure the fuel vaporization process was comparable to available data. Engine validation was performed against low-speed, high-load, conventional combustion experiments, and the model was able to predict the performance and NOx formation seen in the experiments. High-speed, low-load, low-temperature combustion conditions were also modeled, and the emissions (HC, CO, NOx) and fuel consumption were well-predicted for a sweep of injection timings. Finally, comparisons were made between the results for biodiesel composition (palm vs. soy) and fuel blends (neat vs. B20). The model effectively reproduced the trends observed in the experiments.
Du, Dongping; Yang, Hui; Ednie, Andrew R; Bennett, Eric S
2016-09-01
Glycan structures account for up to 35% of the mass of cardiac sodium (Nav) channels. To question whether and how reduced sialylation affects Nav activity and cardiac electrical signaling, we conducted a series of in vitro experiments on ventricular apex myocytes under two different glycosylation conditions, reduced protein sialylation (ST3Gal4(-/-)) and full glycosylation (control). Although aberrant electrical signaling is observed with reduced sialylation, realizing a better understanding of the mechanistic details of pathological variations in INa and AP is difficult without performing in silico studies. However, computer models of Nav channels and cardiac myocytes involve greater levels of complexity, e.g., a high-dimensional parameter space and nonlinear, nonconvex equations. Traditional linear and nonlinear optimization methods have encountered many difficulties in model calibration. This paper presents a new statistical metamodeling approach for efficient computer experiments and optimization of Nav models. First, we utilize a fractional factorial design to identify control variables from the large set of model parameters, thereby reducing the dimensionality of the parametric space. Further, we develop a Gaussian process model as a surrogate of the expensive and time-consuming computer models and then identify the next best design point, namely the one that yields the maximal probability of improvement. This process iterates until convergence, and the performance is evaluated and validated with real-world experimental data. Experimental results show the proposed algorithm achieves superior performance in modeling the kinetics of Nav channels under a variety of glycosylation conditions. As a result, in silico models provide a better understanding of glyco-altered mechanistic details in the state transitions and distributions of Nav channels. Notably, ST3Gal4(-/-) myocytes are shown to have higher probabilities accumulated in intermediate inactivation during repolarization and to yield a shorter refractory period than wild types (WTs). The proposed statistical design of computer experiments is generally extensible to many other disciplines that involve large-scale and computationally expensive models.
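A minimal sketch of the surrogate loop described here, assuming a scikit-learn Gaussian process and a probability-of-improvement acquisition over a finite candidate set (illustrative, not the paper's code):

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def next_design_point(X_obs, y_obs, X_cand):
        """Fit a GP surrogate to observed (design, objective) pairs and pick
        the candidate maximizing the probability of improvement; lower
        objective values (e.g. model-vs-data misfit) are assumed better."""
        gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
        gp.fit(X_obs, y_obs)
        mu, sigma = gp.predict(X_cand, return_std=True)
        best = np.min(y_obs)
        prob_improve = norm.cdf((best - mu) / np.maximum(sigma, 1e-12))
        return X_cand[np.argmax(prob_improve)]

In the iterate-until-convergence loop of the paper, the chosen design point would be evaluated with the expensive Nav model and appended to (X_obs, y_obs) before refitting.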
Telerobotic system performance measurement - Motivation and methods
NASA Technical Reports Server (NTRS)
Kondraske, George V.; Khoury, George J.
1992-01-01
A systems performance-based strategy for modeling and conducting experiments relevant to the design and performance characterization of telerobotic systems is described, along with a developmental testbed consisting of a distributed telerobotics network and initial efforts to implement the strategy. Consideration is given to general systems performance theory (GSPT), developed to tackle human performance problems, as a basis for: measurement of overall telerobotic system (TRS) performance; task decomposition; development of a generic TRS model; and the characterization of performance of the subsystems comprising the generic model. GSPT employs a resource construct to model performance and resource economic principles to govern the interface of systems to tasks. It provides a comprehensive modeling/measurement strategy applicable to complex systems including both human and artificial components. Application is presented within the framework of a distributed telerobotics network serving as a testbed. Insight into the design of test protocols that elicit application-independent data is described.
Predictions of Cockpit Simulator Experimental Outcome Using System Models
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1984-01-01
This study involved predicting the outcome of a cockpit simulator experiment in which pilots used cockpit displays of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. The experiments were run on the NASA Ames Research Center multicab cockpit simulator facility. Prior to the experiments, a mathematical model of the pilot/aircraft/CDTI flight system was developed which included relative in-trail and vertical dynamics between aircraft in the approach string. This model was used to construct a digital simulation of the string dynamics, including the response to initial position errors. The model was then used to predict the outcome of the in-trail following cockpit simulator experiments. Outcomes included performance and sensitivity to different separation criteria. The experimental results were then used to evaluate the model and its prediction accuracy. Lessons learned in this modeling and prediction study are noted.
Assimilative modeling of low latitude ionosphere
NASA Technical Reports Server (NTRS)
Pi, Xiaoqing; Wang, Chunining; Hajj, George A.; Rosen, I. Gary; Wilson, Brian D.; Mannucci, Anthony J.
2004-01-01
In this paper we present an observation system simulation experiment for modeling the low-latitude ionosphere using a 3-dimensional (3-D) global assimilative ionospheric model (GAIM). The experiment is conducted to test the effectiveness of GAIM with a 4-D variational approach (4DVAR) in estimating the ExB drift and thermospheric wind in the magnetic meridional planes simultaneously for all longitude or local time sectors. The operational Global Positioning System (GPS) satellites and the ground-based global GPS receiver network of the International GPS Service are used in the experiment as the data assimilation source. The optimization of the ionospheric state (electron density) modeling is performed through a nonlinear least-squares minimization process that adjusts the dynamical forces to reduce the difference between the modeled and observed slant total electron content in the entire modeled region. The present experiment with multiple force estimations reinforces our previous assessment made through single-driver estimations conducted for the ExB drift only.
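In schematic form, the nonlinear least-squares minimization of a 4DVAR scheme of this kind minimizes a cost function such as the following (a standard rendering; GAIM's exact implementation may differ):

    \[
    J(\mathbf{p}) = \sum_{k} \big[\mathbf{y}_k - H_k(\mathbf{x}_k(\mathbf{p}))\big]^{\mathsf T} \mathbf{R}_k^{-1} \big[\mathbf{y}_k - H_k(\mathbf{x}_k(\mathbf{p}))\big]
    + (\mathbf{p} - \mathbf{p}_b)^{\mathsf T} \mathbf{B}^{-1} (\mathbf{p} - \mathbf{p}_b),
    \]

where \(\mathbf{p}\) collects the drivers being estimated (ExB drift, thermospheric wind), \(\mathbf{x}_k(\mathbf{p})\) is the modeled electron density at time \(k\), \(H_k\) maps the state to slant total electron content, \(\mathbf{y}_k\) are the GPS observations, and \(\mathbf{R}_k\), \(\mathbf{B}\) are observation and background error covariances.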
Defraeye, Thijs; Blocken, Bert; Koninckx, Erwin; Hespel, Peter; Carmeliet, Jan
2010-08-26
This study aims at assessing the accuracy of computational fluid dynamics (CFD) for applications in sports aerodynamics, for example for drag predictions of swimmers, cyclists or skiers, by evaluating the applied numerical modelling techniques by means of detailed validation experiments. In this study, a wind-tunnel experiment on a scale model of a cyclist (scale 1:2) is presented. Apart from three-component forces and moments, also high-resolution surface pressure measurements on the scale model's surface, i.e. at 115 locations, are performed to provide detailed information on the flow field. These data are used to compare the performance of different turbulence-modelling techniques, such as steady Reynolds-averaged Navier-Stokes (RANS), with several k-epsilon and k-omega turbulence models, and unsteady large-eddy simulation (LES), and also boundary-layer modelling techniques, namely wall functions and low-Reynolds number modelling (LRNM). The commercial CFD code Fluent 6.3 is used for the simulations. The RANS shear-stress transport (SST) k-omega model shows the best overall performance, followed by the more computationally expensive LES. Furthermore, LRNM is clearly preferred over wall functions to model the boundary layer. This study showed that there are more accurate alternatives for evaluating flow around bluff bodies with CFD than the standard k-epsilon model combined with wall functions, which is often used in CFD studies in sports.
Xie, Weizhen; Zhang, Weiwei
2017-11-01
The present study dissociated the number (i.e., quantity) and precision (i.e., quality) of visual short-term memory (STM) representations in change detection using receiver operating characteristic (ROC) analysis and experimental manipulations. Across three experiments, participants performed both recognition and recall tests of visual STM using the change-detection task and the continuous color-wheel recall task, respectively. Experiment 1 demonstrated that the estimates of the number and precision of visual STM representations based on the ROC model of change-detection performance were robustly correlated with the corresponding estimates based on the mixture model of continuous-recall performance. Experiments 2 and 3 showed that the experimental manipulation of mnemonic precision using white-noise masking and the experimental manipulation of the number of encoded STM representations using consolidation masking produced selective effects on the corresponding measures of mnemonic precision and the number of encoded STM representations, respectively, in both change-detection and continuous-recall tasks. Altogether, using the individual-differences (Experiment 1) and experimental-dissociation (Experiments 2 and 3) approaches, the present study demonstrated the some-or-none nature of visual STM representations across recall and recognition.
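For reference, the mixture model of continuous-recall performance invoked here is commonly written as follows (a standard form in this literature, not reproduced from the paper itself):

    \[
    p(\hat{\theta} \mid \theta) = P_m\, \phi_{\kappa}(\hat{\theta} - \theta) + (1 - P_m)\,\frac{1}{2\pi},
    \]

where \(P_m\) is the probability that the probed item is in memory (the quantity estimate), \(\phi_{\kappa}\) is a circular (von Mises) density centered on the true feature value \(\theta\), and the concentration \(\kappa\) indexes mnemonic precision (the quality estimate).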
Numerical Modelling of Solitary Wave Experiments on Rubble Mound Breakwaters
NASA Astrophysics Data System (ADS)
Guler, H. G.; Arikawa, T.; Baykal, C.; Yalciner, A. C.
2016-12-01
Performance of a rubble mound breakwater protecting Haydarpasa Port, Turkey, has been tested under tsunami attack by physical model tests conducted at the Port and Airport Research Institute (Guler et al., 2015), with solitary wave tests conducted to understand the dynamic force of the tsunami (Arikawa, 2015). In this study, the main objective is to perform numerical modelling of the solitary wave tests in order to verify the accuracy of the CFD model IHFOAM, developed in the OpenFOAM environment (Higuera et al., 2013), by comparing the results of the numerical computations with the experimental results. IHFOAM is a numerical modelling tool based on the VARANS equations with a k-ω SST turbulence model, including realistic wave generation and active wave absorption. Experiments are performed using a Froude scale of 1/30, measuring surface elevation and flow velocity at several locations in the wave channel, and wave pressure around the crown wall of the breakwater. Solitary wave tests with wave heights of H=7.5 cm and H=10 cm are selected as representative of the experiments. The first test (H=7.5 cm) is the case that resulted in no damage, whereas the second case (H=10 cm) resulted in total damage due to the sliding of the crown wall. After comparison of the preliminary results of the numerical simulations with the experimental data for both cases, it is observed that the solitary wave experiments could be accurately modeled using IHFOAM with respect to water surface elevations, flow velocities, and wave pressures on the crown wall of the breakwater (figure: simulation result at t = 29.6 s). Acknowledgements: The authors acknowledge the developers of IHFOAM and further extend their acknowledgements for the partial support from the research projects MarDiM, ASTARTE, RAPSODI, and TUBITAK 213M534. References: Arikawa (2015) "Consideration of Characteristics of Pressure on Seawall by Solitary Waves Based on Hydraulic Experiments", Jour. of Japan. Soc. of Civ. Eng. Ser. B2 (Coast. Eng.), Vol. 71, pp. I889-I894; Guler, Arikawa, Oei, Yalciner (2015) "Performance of Rubble Mound Breakwaters under Tsunami Attack, A Case Study: Haydarpasa Port, Istanbul, Turkey", Coast. Eng. 104, 43-53; Higuera, Lara, Losada (2013) "Realistic Wave Generation and Active Wave Absorption for Navier-Stokes Models, Application to OpenFOAM", Coast. Eng. 71, 102-118.
A control method for bilateral teleoperating systems
NASA Astrophysics Data System (ADS)
Strassberg, Yesayahu
1992-01-01
The thesis focuses on control of bilateral master-slave teleoperators. The bilateral control issue of teleoperators is studied and a new scheme that overcomes basic unsolved problems is proposed. A performance measure, based on the multiport modeling method, is introduced in order to evaluate and understand the limitations of earlier published bilateral control laws. Based on the study evaluating the different methods, the objective of the thesis is stated. The proposed control law is then introduced, its ideal performance is demonstrated, and conditions for stability and robustness are derived. It is shown that stability, desired performance, and robustness can be obtained under the assumption that the deviation of the model from the actual system satisfies certain norm inequalities and the measurement uncertainties are bounded. The proposed scheme is validated by numerical simulation. The simulated system is based on the configuration of the RAL (Robotics and Automation Laboratory) telerobot. From the simulation results it is shown that good tracking performance can be obtained. In order to verify the performance of the proposed scheme when applied to a real hardware system, an experimental setup of a three degree of freedom master-slave teleoperator (i.e. three degree of freedom master and three degree of freedom slave robot) was built. Three basic experiments were conducted to verify the performance of the proposed control scheme. The first experiment verified the master control law and its contribution to the robustness and performance of the entire system. The second experiment demonstrated the actual performance of the system while performing a free motion teleoperating task. From the experimental results, it is shown that the control law has good performance and is robust to uncertainties in the models of the master and slave.
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated and as the combined effects of global warming and local land-use changes make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol global variance decomposition method. The analysis showed that parameters related to the road, the roof, and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in the model physics.
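A hedged sketch of such a Sobol screen using the SALib package (illustrative only; the parameter names, bounds, and the `teb_surrogate` stand-in for actual TEB/SURFEX runs are all hypothetical):

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Hypothetical subset of TEB inputs; names and bounds are illustrative.
    problem = {
        "num_vars": 3,
        "names": ["roof_albedo", "road_thermal_conductivity", "soil_moisture"],
        "bounds": [[0.05, 0.45], [0.5, 2.0], [0.05, 0.40]],
    }

    def teb_surrogate(x):
        # Hypothetical stand-in for a full TEB/SURFEX run returning a scalar
        # performance metric; replace with calls to the real model.
        return x[0] + 2.0 * x[1] * x[2]

    X = saltelli.sample(problem, 1024)            # Saltelli sample matrix
    Y = np.apply_along_axis(teb_surrogate, 1, X)  # one model output per sample
    Si = sobol.analyze(problem, Y)                # variance decomposition
    print(Si["S1"], Si["ST"])                     # first-order and total indices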
The synergy of modeling and novel experiments for melt crystal growth research
NASA Astrophysics Data System (ADS)
Derby, Jeffrey J.
2018-05-01
Computational modeling and novel experiments, when performed together, can enable the identification of new, fundamental mechanisms important for the growth of bulk crystals from the melt. In this paper, we present a compelling example of this synergy via the discovery of previously unascertained physical mechanisms that govern the engulfment of silicon carbide particles during the growth of crystalline silicon.
Cognitive Architecture with Evolutionary Dynamics Solves Insight Problem.
Fedor, Anna; Zachar, István; Szilágyi, András; Öllinger, Michael; de Vladar, Harold P; Szathmáry, Eörs
2017-01-01
In this paper, we show that a neurally implemented cognitive architecture with evolutionary dynamics can solve the four-tree problem. Our model, called Darwinian Neurodynamics, assumes that the unconscious mechanism of problem solving during insight tasks is a Darwinian process. It is based on the evolution of patterns that represent candidate solutions to a problem and are stored and reproduced by a population of attractor networks. In our first experiment, we used human data as a benchmark and showed that the model behaves comparably to humans: it shows an improvement in performance if it is pretrained and primed appropriately, just like the human participants in the experiment of Kershaw et al. (2013). In the second experiment, we further investigated the effects of pretraining and priming in a two-by-two design and found a beginner's-luck type of effect: the solution rate was highest in the condition that was primed, but not pretrained, with patterns relevant to the task. In the third experiment, we showed that deficits in computational capacity and learning abilities decreased the performance of the model, as expected. We conclude that Darwinian Neurodynamics is a promising model of human problem solving that deserves further investigation.
NASA Astrophysics Data System (ADS)
Menicucci, D. F.
The performance of a photovoltaic (PV) system is affected by the particular mounting configuration selected, but the optimal configuration for various potential designs is unknown because too few PV systems have been fielded. Sandia National Laboratories (SNLA) is currently conducting a controlled field experiment in which four of the most commonly used module mounting configurations are being compared. The data from the experiment are used to verify the accuracy of PVFORM, a new PV performance model. The model is then used to simulate the performance of PV modules mounted in different configurations at eight sites throughout the U.S. The module mounting configurations, the experimental methods used, the specialized statistical techniques used in the analysis, and the final results of the effort are described. The module mounting configurations are rank-ordered at each site according to their energy production performance, and each is briefly discussed in terms of its advantages or disadvantages in various applications.
Regret causes ego-depletion and finding benefits in the regrettable events alleviates ego-depletion.
Gao, Hongmei; Zhang, Yan; Wang, Fang; Xu, Yan; Hong, Ying-Yi; Jiang, Jiang
2014-01-01
This study tested the hypotheses that experiencing regret would result in ego-depletion, while finding benefits (i.e., "silver linings") in the regret-eliciting events counteracted the ego-depletion effect. Using a modified gambling paradigm (Experiments 1, 2, and 4) and a retrospective method (Experiments 3 and 5), five experiments were conducted to induce regret. Results revealed that experiencing regret undermined performance on subsequent tasks, including a paper-and-pencil calculation task (Experiment 1), a Stroop task (Experiment 2), and a mental arithmetic task (Experiment 3). Furthermore, finding benefits in the regret-eliciting events improved subsequent performance (Experiments 4 and 5), and this improvement was mediated by participants' perceived vitality (Experiment 4). This study extended the depletion model of self-regulation by considering emotions with self-conscious components (in our case, regret). Moreover, it provided a comprehensive understanding of how people felt and performed after experiencing regret and after finding benefits in the events that caused the regret.
Trempe, Maxime; Sabourin, Maxime; Rohbanfard, Hassan; Proteau, Luc
2011-03-01
Motor learning is a process that extends beyond training sessions. Specifically, physical practice triggers a series of physiological changes in the CNS that are regrouped under the term "consolidation" (Stickgold and Walker 2007). These changes can result in between-session improvement or performance stabilization (Walker 2005). In a series of three experiments, we tested whether consolidation also occurs following observation. In Experiment 1, participants observed an expert model perform a sequence of arm movements. Although we found evidence of observation learning, no significant difference was revealed between participants asked to reproduce the observed sequence either 5 min or 24 h later (no between-session improvement). In Experiment 2, two groups of participants observed an expert model perform two distinct movement sequences (A and B) either 5 min or 8 h apart; participants then physically performed both sequences after a 24-h break. Participants in the 8-h group performed Sequence B less accurately than participants in the 5-min group, suggesting that the memory representation of the first sequence had been stabilized and that it interfered with the learning of the second sequence. Finally, in Experiment 3, the initial observation phase was replaced by a physical practice phase. In contrast with the results of Experiment 2, participants in the 8-h group performed Sequence B significantly more accurately than participants in the 5-min group. Together, our results suggest that the memory representation of a skill learned through observation undergoes consolidation. However, consolidation of an observed motor skill leads to distinct behavioural outcomes in comparison with physical practice.
A three-dimensional finite element model of near-field scanning microwave microscopy
NASA Astrophysics Data System (ADS)
Balusek, Curtis; Friedman, Barry; Luna, Darwin; Oetiker, Brian; Babajanyan, Arsen; Lee, Kiejin
2012-10-01
A three-dimensional finite element model of an experimental near-field scanning microwave microscope (NSMM) has been developed and compared to experiment on non-conducting samples. The microwave reflection coefficient S11 is calculated as a function of frequency with no adjustable parameters. There is qualitative agreement with experiment in that the resonant frequency can show a sizable increase with sample dielectric constant, a result that is not obtained with a two-dimensional model. The most realistic model shows semi-quantitative agreement with experiment. The effect of different sample thicknesses and varying tip-sample distances is investigated numerically and shown to affect NSMM performance in a way consistent with experiment. Visualization of the electric field indicates that the field is primarily determined by the shape of the coupling hooks.
Fretting Fatigue of Single Crystal/Polycrystalline Nickel Subjected to Blade/Disk Contact Loading
NASA Astrophysics Data System (ADS)
Matlik, J. F.; Murthy, H.; Farris, T. N.
2002-01-01
Fretting fatigue describes the formation and growth of cracks at the edge-of-contact of nominally clamped components subjected to cyclic loading. Components that are known to be subject to fretting fatigue include riveted lap joints and blade/disk contacts in launch vehicle turbomachinery. Recent efforts have shown that conventional mechanics tools, both fatigue and fracture based, can be used to model fretting fatigue experiments, leading to successful life predictions. In particular, experiments involving contact load configurations similar to those that occur in the blade/disk connection of gas turbine engines have been performed extensively. Predictions of fretting fatigue life have been compared favorably to experimental observations [1]. Recent efforts are aimed at performing experiments at higher temperatures. The talk will describe the status of these experiments as well as model developments relevant to the single crystal material properties.
Guizzo, Francesca; Cadinu, Mara
2017-06-01
Although previous research has demonstrated that objectification impairs female cognitive performance, no research to date has investigated the mechanisms underlying such decrement. Therefore, we tested the role of flow experience as one mechanism leading to performance decrement under sexual objectification. Gaze gender was manipulated by having male versus female experimenters take body pictures of female participants (N = 107) who then performed a Sustained Attention to Response Task. As predicted, a moderated mediation model showed that under male versus female gaze, higher internalization of beauty ideals was associated with lower flow, which in turn decreased performance. The implications of these results are discussed in relation to objectification theory and strategies to prevent sexually objectifying experiences.
Uranium Hydride Nucleation and Growth Model FY'16 ESC Annual Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, Mary Ann; Richards, Andrew Walter; Holby, Edward F.
2016-12-20
Uranium hydride corrosion is of great interest to the nuclear industry. Uranium reacts with water and/or hydrogen to form uranium hydride, which adversely affects material performance. Hydride nucleation is influenced by thermal history, mechanical defects, oxide thickness, and chemical defects. Information has been gathered from past hydride experiments to formulate a uranium hydride model to be used in a Canned Subassembly (CSA) lifetime prediction model. This multi-scale computer modeling effort started in FY'13, and the fourth-generation model is now complete. Additional high-resolution experiments will be run to further test the model.
Analytic Modeling of Pressurization and Cryogenic Propellant Conditions for Lunar Landing Vehicle
NASA Technical Reports Server (NTRS)
Corpening, Jeremy
2010-01-01
This slide presentation reviews the development, validation, and application of the model to the Lunar Landing Vehicle. The model, named the Computational Propellant and Pressurization Program -- One Dimensional (CPPPO), is used here to model the cryogenic propellant conditions of the Altair lunar lander. The validation of CPPPO was accomplished via comparison to an existing analytic model (i.e., ROCETS), a flight experiment, and ground experiments. The model was used to perform a parametric analysis of pressurant conditions for the Lunar Landing Vehicle and to examine the results of unequal tank pressurization and draining for multiple tank designs.
Problems of the optical model for deuterons. I. Parameters of the optical potential (in Polish)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grotowski, K.
1963-01-01
Problems concerning the optical model are discussed. Some special properties of deuterons as projectiles influence the optical model describing their interaction with nuclei. Several experiments were performed to obtain parameters of the optical model potential. (auth)
LES Modeling of Supersonic Combustion at SCRAMJET Conditions
NASA Astrophysics Data System (ADS)
Vane, Zachary; Lacaze, Guilhem; Oefelein, Joseph
2016-11-01
Results from a series of large-eddy simulations (LES) of the Hypersonic International Flight Research Experiment (HIFiRE) are examined with emphasis placed on the coupled performance of the wall and combustion models. The test case of interest corresponds to the geometry and conditions found in the ground based experiments performed in the HIFiRE Direct Connect Rig (HDCR) in dual-mode operation. In these calculations, the turbulence and mixing characteristics of the high Reynolds number turbulent boundary layer with multi-species fuel injection are analyzed using a simplified chemical model and combustion closure to predict the heat release measured experimentally. These simulations are then used to identify different flame regimes in the combustor section. Concurrently, the performance of an equilibrium wall-model is evaluated in the vicinity of the fuel injectors and in the flame-holding cavity where regions of boundary layer and thermochemical non-equilibrium are present. Support for this research was provided by the Defense Advanced Research Projects Agency (DARPA).
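For context, an equilibrium wall model of the kind evaluated here typically solves a one-dimensional stress and energy balance between the wall and a matching point in the LES grid; the following is a standard schematic form with a mixing-length closure, not necessarily the exact formulation used in these simulations:

    \[
    \frac{d}{dy}\!\left[(\mu + \mu_t)\,\frac{du}{dy}\right] = 0, \qquad
    \frac{d}{dy}\!\left[(\lambda + \lambda_t)\,\frac{dT}{dy} + u\,(\mu + \mu_t)\,\frac{du}{dy}\right] = 0,
    \]

with an eddy viscosity such as \( \mu_t = \kappa \rho y u_\tau \left[ 1 - e^{-y^+/A^+} \right]^2 \); the system is iterated for the wall shear stress and heat flux, which are then fed back to the LES as boundary conditions. Regions near the injectors and cavity violate the equilibrium assumption, which is why the model's performance there is scrutinized.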
Limitations of contrast enhancement for infrared target identification
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Fanning, Jonathan D.
2009-05-01
Contrast enhancement and dynamic range compression are currently being used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content. Automatic contrast enhancement techniques do not always achieve this improvement; in some cases, the contrast can increase to the level of target saturation. This paper assesses the range-performance effects of contrast enhancement for target identification as a function of image saturation. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight-target set using an uncooled LWIR camera. The experiments compare the identification performance of observers viewing contrast-enhancement-processed images at various levels of saturation. Contrast enhancement is modeled in the U.S. Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of specific feature saturation or enhancement. The measured results follow the predicted performance based on the target task difficulty metric used in NVThermIP for the non-saturated cases. The saturated images reduce the information contained in the target, and performance suffers. The model treats the contrast of the target as uniform over spatial frequency, so any enhancement is assumed to raise contrast uniformly across spatial frequencies. After saturation, however, the spatial cues that differentiate one tank from another are located in a limited band of spatial frequencies. A frequency-dependent treatment of target contrast is needed to predict the performance of over-processed images.
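As a minimal illustration of the saturation effect discussed here (a generic percentile stretch, not NVESD's processing chain):

    import numpy as np

    def contrast_stretch(img, clip_pct=1.0):
        """Linear percentile stretch; clip_pct percent of pixels saturate at
        each end of the output range. Aggressive values flatten the
        within-target detail discussed above."""
        lo, hi = np.percentile(img, [clip_pct, 100.0 - clip_pct])
        out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-12)
        return np.clip(out, 0.0, 1.0)  # values outside [lo, hi] saturate

Small clip fractions raise target-to-background contrast; large ones push target pixels to the rails, destroying the band of spatial frequencies that carries the identification cues.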
Anderson, P. S. L.; Rayfield, E. J.
2012-01-01
Computational models such as finite-element analysis offer biologists a means of exploring the structural mechanics of biological systems that cannot be directly observed. Validated against experimental data, a model can be manipulated to perform virtual experiments, testing variables that are hard to control in physical experiments. The relationship between tooth form and the ability to break down prey is key to understanding the evolution of dentition. Recent experimental work has quantified how tooth shape promotes fracture in biological materials. We present a validated finite-element model derived from physical compression experiments. The model shows close agreement with strain patterns observed in photoelastic test materials and reaction forces measured during these experiments. We use the model to measure strain energy within the test material when different tooth shapes are used. Results show that notched blades deform materials for less strain energy cost than straight blades, giving insights into the energetic relationship between tooth form and prey materials. We identify a hypothetical ‘optimal’ blade angle that minimizes strain energy costs and test alternative prey materials via virtual experiments. Using experimental data and computational models offers an integrative approach to understand the mechanics of tooth morphology. PMID:22399789
Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hales, J. D.; Tonks, M. R.; Chockalingam, K.
2015-03-01
Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.
A neural network controller for automated composite manufacturing
NASA Technical Reports Server (NTRS)
Lichtenwalner, Peter F.
1994-01-01
At McDonnell Douglas Aerospace (MDA), an artificial neural network based control system has been developed and implemented to control laser heating for the fiber placement composite manufacturing process. This neurocontroller learns an approximate inverse model of the process on-line to provide performance that improves with experience and exceeds that of conventional feedback control techniques. When untrained, the control system behaves as a proportional plus integral (PI) controller. However, after learning from experience, the neural network feedforward control module provides control signals that greatly improve temperature tracking performance. Faster convergence to new temperature set points and reduced temperature deviation due to changing feed rate have been demonstrated on the machine. A Cerebellar Model Articulation Controller (CMAC) network is used for inverse modeling because of its rapid learning performance. This control system is implemented on an IBM-compatible 386 PC with an A/D board interface to the machine.
Focks, Andreas; Belgers, Dick; Boerwinkel, Marie-Claire; Buijse, Laura; Roessink, Ivo; Van den Brink, Paul J
2018-05-01
Exposure patterns in ecotoxicological experiments often do not match the exposure profiles for which a risk assessment needs to be performed. This limitation can be overcome by using toxicokinetic-toxicodynamic (TKTD) models for the prediction of effects under time-variable exposure. For the use of TKTD models in the environmental risk assessment of chemicals, it is required to calibrate and validate the model for specific compound-species combinations. In this study, the survival of macroinvertebrates after exposure to neonicotinoid insecticides was modelled using TKTD models from the General Unified Threshold models of Survival (GUTS) framework. The models were calibrated on existing survival data from acute or chronic tests under a static exposure regime. Validation experiments were performed for two sets of species-compound combinations: one set focussed on the sensitivity of multiple species to a single compound, imidacloprid, and the other on the effects of multiple compounds, the three neonicotinoids imidacloprid, thiacloprid and thiamethoxam, on the survival of a single species, the mayfly Cloeon dipterum. The calibrated models were used to predict survival over time, including uncertainty ranges, for the different time-variable exposure profiles used in the validation experiments. From the comparison between observed and predicted survival, it appeared that the accuracy of the model predictions was acceptable for four of the five tested species in the multiple-species data set. For compounds such as neonicotinoids, which are known to have the potential to show increased toxicity under prolonged exposure, the calibration and validation of TKTD models for survival should ideally be performed using calibration data from both acute and chronic tests.
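For orientation, a minimal sketch of one widely used GUTS variant, the reduced stochastic-death model (GUTS-RED-SD), under a time-variable exposure pulse; the forward-Euler integration and all parameter values are illustrative, not the calibrated models from this study:

```python
import numpy as np

def guts_red_sd(times, conc, kd, z, b, hb=0.0):
    """Minimal GUTS-RED-SD survival prediction under a time-variable
    exposure profile conc(t), integrated with forward Euler.

    kd: dominant rate constant; z: damage threshold;
    b: killing rate; hb: background hazard. All values illustrative.
    """
    D, H = 0.0, 0.0                      # scaled damage and cumulative hazard
    surv = np.empty(len(times))
    surv[0] = 1.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        D += kd * (conc[i - 1] - D) * dt
        H += (b * max(D - z, 0.0) + hb) * dt
        surv[i] = np.exp(-H)
    return surv

t = np.linspace(0.0, 10.0, 1001)                   # days
pulse = np.where((t > 1.0) & (t < 3.0), 5.0, 0.0)  # a 2-day exposure pulse
print(f"predicted survival at day 10: {guts_red_sd(t, pulse, kd=0.7, z=1.0, b=0.15)[-1]:.3f}")
```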
Controlled experiments for dense gas diffusion: Experimental design and execution, model comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Egami, R.; Bowen, J.; Coulombe, W.
1995-07-01
A baseline CO2 release experiment at the DOE Spill Test Facility on the Nevada Test Site in Southern Nevada is described. This experiment was unique in its use of CO2 as a surrogate gas representative of a variety of specific chemicals. Introductory discussion places the experiment in historical perspective. CO2 was selected as a surrogate gas to provide a database suitable for evaluation of model scenarios involving a variety of specific dense gases. The experiment design and setup are described, including design rationale and quality assurance methods employed. Resulting experimental data are summarized. Data usefulness is examined through a preliminary comparison of experimental results with simulations performed using the SLAB and DEGADIS dense gas models.
Perone, Sammy; Spencer, John P.
2013-01-01
What motivates children to radically transform themselves during early development? We addressed this question in the domain of infant visual exploration. Over the first year, infants' exploration shifts from familiarity to novelty seeking. This shift is delayed in preterm relative to term infants and is stable within individuals over the course of the first year. Laboratory tasks have shed light on the nature of this familiarity-to-novelty shift, but it is not clear what motivates the infant to change her exploratory style. We probed this by letting a Dynamic Neural Field (DNF) model of visual exploration develop itself via accumulating experience in a virtual world. We then situated it in a canonical laboratory task. Much like infants, the model exhibited a familiarity-to-novelty shift. When we manipulated the initial conditions of the model, the model's performance was developmentally delayed much like preterm infants. This delay was overcome by enhancing the model's experience during development. We also found that the model's performance was stable at the level of the individual. Our simulations indicate that novelty seeking emerges with no explicit motivational source via the accumulation of visual experience within a complex, dynamical exploratory system. PMID:24065948
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: detection and categorisation of auditory and kinematic motion cues; Experiment 2: performance evaluation in a target-tracking task; Experiment 3: transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (experiment 2). In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
Research on quality metrics of wireless adaptive video streaming
NASA Astrophysics Data System (ADS)
Li, Xuefei
2018-04-01
With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate the quality metrics of wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
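The three performance metrics are standard and easy to sketch; assuming paired vectors of subjective and predicted MOS, a minimal computation might look like this (the scores are toy values, not the paper's data):

```python
import numpy as np
from scipy import stats

def qoe_metrics(mos, pred):
    """SROCC (monotonicity), PLCC (linearity) and RMSE (accuracy)
    between subjective MOS and model-predicted MOS."""
    srocc, _ = stats.spearmanr(mos, pred)
    plcc, _ = stats.pearsonr(mos, pred)
    rmse = float(np.sqrt(np.mean((np.asarray(mos) - np.asarray(pred)) ** 2)))
    return srocc, plcc, rmse

mos  = [4.5, 3.2, 2.1, 3.8, 1.5]   # illustrative subjective scores
pred = [4.2, 3.0, 2.5, 3.6, 1.9]   # illustrative model outputs
print(qoe_metrics(mos, pred))
```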
Study on bamboo gluing performance numerical simulation
NASA Astrophysics Data System (ADS)
Zhao, Z. R.; Sun, W. H.; Sui, X. M.; Zhang, X. F.
2018-01-01
Bamboo gluing timber is a green building material that can be widely used for beams and columns in modern buildings. Existing bamboo gluing timber is usually produced from bamboo columns, or from bamboo bundles rolled from such columns. The performance of new bamboo gluing timber is determined by the adhesion characteristics of the bamboo. On this basis, a cohesive damage model of bamboo gluing is created, and experimental results are used to validate the model. The model proposed in this work agrees with the experimental results. The relation between bamboo bonding length and gluing performance is analysed. The model is helpful for the application of bamboo integrated timber.
NASA Astrophysics Data System (ADS)
Čufar, Aljaž; Batistoni, Paola; Conroy, Sean; Ghani, Zamir; Lengar, Igor; Milocco, Alberto; Packer, Lee; Pillon, Mario; Popovichev, Sergey; Snoj, Luka; JET Contributors
2017-03-01
At the Joint European Torus (JET) the ex-vessel fission chambers and in-vessel activation detectors are used as the neutron production rate and neutron yield monitors, respectively. In order to ensure that these detectors produce accurate measurements they need to be experimentally calibrated. A new calibration of neutron detectors to 14 MeV neutrons, resulting from deuterium-tritium (DT) plasmas, is planned at JET using a compact accelerator-based neutron generator (NG) in which a D/T beam impinges on a solid target containing T/D, producing neutrons by DT fusion reactions. This paper presents the analysis that was performed to model the neutron source characteristics in terms of energy spectrum, angle-energy distribution and the effect of the neutron generator geometry. Different codes capable of simulating accelerator-based DT neutron sources are compared and sensitivities to uncertainties in the generator's internal structure analysed. The analysis was performed to support preparation for the experimental measurements performed to characterize the NG as a calibration source. Further extensive neutronics analyses, performed with this model of the NG, will be needed to support the neutron calibration experiments and take into account various differences between the calibration experiment and experiments using the plasma as a source of neutrons.
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods that were developed in the context of deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model, but several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26] that different constituencies have different objectives for the validation process and therefore their acceptance criteria differ also.
Benchmarking study of the MCNP code against cold critical experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sitaraman, S.
1991-01-01
The purpose of this study was to benchmark the widely used Monte Carlo code MCNP against a set of cold critical experiments with a view to using the code as a means of independently verifying the performance of faster but less accurate Monte Carlo and deterministic codes. The experiments simulated consisted of both fast and thermal criticals as well as fuel in a variety of chemical forms. A standard set of benchmark cold critical experiments was modeled. These included the two fast experiments, GODIVA and JEZEBEL, the TRX metallic uranium thermal experiments, the Babcock and Wilcox oxide and mixed oxide experiments, and the Oak Ridge National Laboratory (ORNL) and Pacific Northwest Laboratory (PNL) nitrate solution experiments. The principal case studied was a small critical experiment that was performed with boiling water reactor bundles.
Gas Gun Studies of Interface Wear Effects
NASA Astrophysics Data System (ADS)
Jackson, Tyler; Kennedy, Greg; Thadhani, Naresh
2011-06-01
The characteristics of interface wear were studied by performing gas gun experiments at velocities up to 1 km/s. The approach involved developing coefficients of constitutive strength models for Al 6061 and OFHC-Cu, then using those to design die geometry for interface wear gas gun experiments. Taylor rod-on-anvil impact experiments were performed to obtain coefficients of the Johnson-Cook constitutive strength model by correlating experimentally obtained deformed states of impacted samples with those predicted using ANSYS AUTODYN hydrocode. Simulations were used with validated strength models to design geometry involving acceleration of Al rods through a copper concentric cylindrical angular extrusion die. Experiments were conducted using 7.62 mm and 80 mm diameter gas guns. Differences in the microstructure of the interface layer and microhardness values illustrate that stress-strain conditions produced during acceleration of Al through the hollow concentric copper die, at velocities less than 800 m/s, result in formation of a layer via solid state alloying due to severe plastic deformation, while higher velocities produce an interface layer consisting of melted and re-solidified aluminum.
Pickard, Dawn
2007-01-01
We have developed experiments and materials to model human genetics using rapid cycling Brassica rapa, also known as Fast Plants. Because of their self-incompatibility for pollination and the genetic diversity within strains, B. rapa can serve as a relevant model for human genetics in teaching laboratory experiments. The experiment presented here is a paternity exclusion project in which a child is born with a known mother but two possible alleged fathers. Students use DNA markers (microsatellites) to perform paternity exclusion on these subjects. Realistic DNA marker analysis can be challenging to implement within the limitations of an instructional lab, but we have optimized the experimental methods to work in a teaching lab environment and to maximize the “hands-on” experience for the students. The genetic individuality of each B. rapa plant, revealed by analysis of polymorphic microsatellite markers, means that each time students perform this project, they obtain unique results that foster independent thinking in the process of data interpretation. PMID:17548880
Kwon, Kyu-Sang; Kim, Song-Bae; Choi, Nag-Choul; Kim, Dong-Ju; Lee, Soonjae; Lee, Sang-Hyup; Choi, Jae-Woo
2013-01-01
In this study, the deposition and transport of Pseudomonas aeruginosa on sandy porous materials have been investigated under static and dynamic flow conditions. For the static experiments, both equilibrium and kinetic batch tests were performed at 1:3 and 3:1 soil:solution ratios. The batch data were analysed to quantify the deposition parameters under static conditions. Column tests were performed for dynamic flow experiments with KCl solution and bacteria suspended in (1) deionized water, (2) mineral salt medium (MSM) and (3) surfactant + MSM. The equilibrium distribution coefficient (K_d) was larger at a 1:3 (2.43 mL g⁻¹) than at a 3:1 (0.28 mL g⁻¹) soil:solution ratio. Kinetic batch experiments showed that the reversible deposition rate coefficient (k_att) and the release rate coefficient (k_det) at a soil:solution ratio of 3:1 were larger than those at a 1:3 ratio. Column experiments showed that an increase in ionic strength resulted in a decrease in the peak concentration of bacteria, mass recovery and tailing of the bacterial breakthrough curve (BTC), and that the presence of surfactant enhanced the movement of bacteria through quartz sand, giving increased mass recovery and tailing. Deposition parameters under dynamic conditions were determined by fitting BTCs to four different transport models: (1) kinetic reversible, (2) two-site, (3) kinetic irreversible and (4) kinetic reversible and irreversible models. Among these models, Model 4 was more suitable than the others since it includes the irreversible sorption term directly related to the mass loss of bacteria observed in the column experiment. The applicability of the parameters obtained from the batch experiments to simulate the column breakthrough data is evaluated.
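A sketch of the structure of "Model 4" (kinetic reversible plus irreversible deposition), reduced to a well-mixed system, i.e., omitting the advection and dispersion terms of the full column model; the rate constants are illustrative, not the fitted values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def model4(t, y, k_att, k_det, k_irr):
    """Kinetic reversible + irreversible deposition: aqueous concentration C
    exchanges reversibly with attached S and is lost irreversibly to I."""
    C, S, I = y
    dC = -(k_att + k_irr) * C + k_det * S
    dS = k_att * C - k_det * S
    dI = k_irr * C
    return [dC, dS, dI]

# illustrative rate constants (1/h) and a normalised initial aqueous phase
sol = solve_ivp(model4, (0.0, 24.0), [1.0, 0.0, 0.0], args=(0.5, 0.1, 0.05))
print(f"relative aqueous concentration after 24 h: {sol.y[0, -1]:.3f}")
```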
Parameter learning for performance adaptation
NASA Technical Reports Server (NTRS)
Peek, Mark D.; Antsaklis, Panos J.
1990-01-01
A parameter learning method is introduced and used to broaden the region of operability of the adaptive control system of a flexible space antenna. The learning system guides the selection of control parameters in a process leading to optimal system performance. A grid search procedure is used to estimate an initial set of parameter values. The optimization search procedure uses a variation of the Hooke and Jeeves multidimensional search algorithm. The method is applicable to any system where performance depends on a number of adjustable parameters. A mathematical model is not necessary, as the learning system can be used whenever the performance can be measured via simulation or experiment. The results of two experiments, the transient regulation and the command following experiment, are presented.
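The abstract names a variation of the Hooke and Jeeves algorithm; a minimal sketch of the classic pattern search is shown below, with a stand-in cost function in place of a measured performance index:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimal Hooke-Jeeves pattern search: exploratory moves along each
    axis, then a pattern move through the improved point."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        # exploratory move around the current base point
        xe, fe = x.copy(), fx
        for i in range(len(x)):
            for d in (step, -step):
                trial = xe.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fe:
                    xe, fe = trial, ft
                    break
        if fe < fx:
            # pattern move: extrapolate along the successful direction
            xp = xe + (xe - x)
            fp = f(xp)
            x, fx = (xp, fp) if fp < fe else (xe, fe)
        else:
            step *= shrink            # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

# usage: tune two control parameters against a measured cost (a stand-in here)
cost = lambda p: (p[0] - 1.0) ** 2 + 10.0 * (p[1] + 0.5) ** 2
print(hooke_jeeves(cost, [0.0, 0.0]))
```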
Study on the CO2 electric driven fixed swash plate type compressor for eco-friendly vehicles
NASA Astrophysics Data System (ADS)
Nam, Donglim; Kim, Kitae; Lee, Jehie; Kwon, Yunki; Lee, Geonho
2017-08-01
The purpose of this study is to test experimentally and analyse the performance of an electric-driven fixed swash plate compressor using the alternative refrigerant R744 (CO2). A comprehensive simulation model for an electric-driven compressor using CO2 for eco-friendly vehicles is presented. This model consists of a compression model and a dynamic model. The compression model includes valve dynamics, leakage, and heat transfer models. The dynamic model includes the frictional losses between piston ring and cylinder wall, between shoe and swash plate, and in the bearings, as well as the electric efficiency. In particular, because the efficiency of the electric parts (motor and inverter) in the compressor affects the losses of the compressor, a dynamo test was performed. We built the designed compressor and tested its performance under a variety of pressure conditions. We also compared the performance analysis results with the performance test results.
NASA Astrophysics Data System (ADS)
Mao, Chao; Chen, Shou
2017-01-01
Because the traditional entropy value method still has low accuracy when evaluating the performance of mining projects, a performance evaluation model for mineral projects founded on an improved entropy value method is proposed. First, a new weight assignment model is established, founded on the compatible matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: once the compatibility matrix analysis achieves the consistency requirements, any differences between the subjective and objective weights are moderately adjusted in proportion, and on this basis the fuzzy evaluation matrix is built for performance evaluation. The simulation experiments show that, compared with the traditional entropy and compatible matrix analysis methods, the proposed performance evaluation model of mining projects based on the improved entropy value method has higher assessment accuracy.
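A small sketch of the entropy value method's objective weighting, together with a hypothetical blending step standing in for the "moderate adjustment" between subjective (AHP) and objective weights; the indicator matrix and the alpha knob are toy values, not the paper's calibration:

```python
import numpy as np

def entropy_weights(X):
    """Objective weights from the entropy value method: columns are criteria,
    rows are alternatives; low-entropy (high-contrast) criteria get more weight."""
    P = X / X.sum(axis=0)                         # normalise each criterion
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(n)       # entropy per criterion
    d = 1.0 - e                                   # degree of diversification
    return d / d.sum()

def combine(w_subj, w_obj, alpha=0.5):
    """Blend AHP (subjective) and entropy (objective) weights; alpha is a
    hypothetical stand-in for the paper's 'moderate adjustment' of proportions."""
    w = alpha * np.asarray(w_subj) + (1 - alpha) * w_obj
    return w / w.sum()

X = np.array([[0.8, 120.0, 3.0],
              [0.6,  95.0, 4.0],
              [0.9, 150.0, 2.0]])                 # toy mining-project indicators
w_obj = entropy_weights(X)
print(combine([0.5, 0.3, 0.2], w_obj))
```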
Flooding Fragility Experiments and Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis L.; Tahhan, Antonio; Muchmore, Cody
2016-09-01
This report describes the work that has been performed on flooding fragility, both the experimental tests being carried out and the probabilistic fragility predictive models being produced in order to use the test results. Flooding experiments involving full-scale doors have commenced in the Portal Evaluation Tank. The goal of these experiments is to develop a full-scale component flooding experiment protocol and to acquire data that can be used to create Bayesian regression models representing the fragility of these components. This work is in support of the Risk-Informed Safety Margin Characterization (RISMC) Pathway external hazards evaluation research and development.
Experiments were performed on 7 Trichomonas vaginalis strains. The test cultures with serotonin, oestrone, testosterone and methyltestosterone grew at... The present study has been concerned with the influence of some hormones upon Trichomonas growth. The following substances were used for our... ml onward. Adrenalin and noradrenalin have a generally inhibiting action (from 0.80 mg/ml onward) upon Trichomonas growth. The antihormone 3,5-diiodotyrosine rarely influences the growth of Trichomonas. (Author)
FEDS - An experiment with a microprocessor-based orbit determination system using TDRS data
NASA Technical Reports Server (NTRS)
Shank, D.; Pajerski, R.
1986-01-01
An experiment in microprocessor-based onboard orbit determination has been conducted at NASA's Goddard Space Flight Center. The experiment collected forward-link observation data in real time from a prototype transponder and performed orbit estimation on a typical low-earth scientific satellite. This paper discusses the hardware and organizational configurations of the experiment, the structure of the onboard software, the mathematical models, and the experiment results.
ERIC Educational Resources Information Center
Smith, Rebekah E.; Bayen, Ute J.
2006-01-01
Event-based prospective memory involves remembering to perform an action in response to a particular future event. Normal younger and older adults performed event-based prospective memory tasks in 2 experiments. The authors applied a formal multinomial processing tree model of prospective memory (Smith & Bayen, 2004) to disentangle age differences…
Improving the performances of autofocus based on adaptive retina-like sampling model
NASA Astrophysics Data System (ADS)
Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce
2018-03-01
An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, the full width at half maximum (FWHM) and time consumption. Results show that the performance of our method is better than that of the traditional method. Meanwhile, typical autofocus functions, including sum-modified-Laplacian (SML), Laplacian (LAP), mid-frequency DCT (MDCT) and absolute Tenengrad (ATEN), are compared experimentally. The smallest FWHM is obtained by the use of LAP, which is therefore more suitable for evaluating accuracy than the other autofocus functions, while MDCT is the most suitable for evaluating real-time ability.
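Two of the autofocus functions named above are compact enough to sketch; assuming grayscale images as 2-D arrays, illustrative implementations of sum-modified-Laplacian and a plain Tenengrad measure follow (normalization and boundary choices are assumptions, and ATEN and MDCT are omitted):

```python
import numpy as np
from scipy.signal import convolve2d

def sml(img, step=1):
    """Sum-Modified-Laplacian: |d2f/dx2| + |d2f/dy2| summed over the image."""
    f = np.asarray(img, float)
    lx = np.abs(2 * f[:, step:-step] - f[:, :-2*step] - f[:, 2*step:]).sum()
    ly = np.abs(2 * f[step:-step, :] - f[:-2*step, :] - f[2*step:, :]).sum()
    return lx + ly

def tenengrad(img):
    """Tenengrad focus measure: mean squared Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    f = np.asarray(img, float)
    gx = convolve2d(f, kx, mode="same", boundary="symm")
    gy = convolve2d(f, kx.T, mode="same", boundary="symm")
    return float(np.mean(gx**2 + gy**2))

# a sharper image should score higher on both measures than a blurred copy
rng = np.random.default_rng(1)
sharp = rng.random((32, 32))
blurred = convolve2d(sharp, np.full((3, 3), 1/9), mode="same", boundary="symm")
print(sml(sharp) > sml(blurred), tenengrad(sharp) > tenengrad(blurred))
```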
Posttest RELAP4 analysis of LOFT experiment L1-4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grush, W.H.; Holmstrom, H.L.O.
Results of posttest analysis of LOFT loss-of-coolant experiment L1-4 with the RELAP4 code are presented. The results are compared with the pretest prediction and the test data. Differences between the RELAP4 model used for this analysis and that used for the pretest prediction are in the areas of initial conditions, nodalization, emergency core cooling system, broken loop hot leg, and steam generator secondary. In general, these changes made only minor improvement in the comparison of the analytical results to the data. Also presented are the results of a limited study of LOFT downcomer modeling which compared the performance of the conventional single downcomer model with that of the new split downcomer model. A RELAP4 sensitivity calculation with artificially elevated emergency core coolant temperature was performed to highlight the need for an ECC mixing model in RELAP4.
Dynamic Simulation of a Periodic 10 K Sorption Cryocooler
NASA Technical Reports Server (NTRS)
Bhandari, P.; Rodriguez, J.; Bard, S.; Wade, L.
1994-01-01
A transient thermal simulation model has been developed to simulate the dynamic performance of a multiple-stage 10 K sorption cryocooler for spacecraft sensor cooling applications that require periodic quick-cooldown (under 2 minutes), negligible vibration, low power consumption, and long life (5 to 10 years). The model was specifically designed to represent the Brilliant Eyes Ten-Kelvin Sorption Cryocooler Experiment (BETSCE), but it can be adapted to represent other sorption cryocooler systems as well. The model simulates the heat transfer, mass transfer, and thermodynamic processes in the cryostat and the sorbent beds for the entire refrigeration cycle, and includes the transient effects of variable hydrogen supply pressures due to expansion and overflow of hydrogen during the cooldown operation. The paper describes model limitations and simplifying assumptions, with estimates of errors induced by them, and presents comparisons of performance predictions with ground experiments. An important benefit of the model is its ability to predict performance sensitivities to variations of key design and operational parameters. The insights thus obtained are expected to lead to higher efficiencies and lower weights for future designs.
A Program for Simulated Thermodynamic Experiments.
ERIC Educational Resources Information Center
Olds, Dan W.
A time-sharing FORTRAN program is described. It was created to allow a student to design and perform classical thermodynamic experiments on three models of a working substance. One goal was to develop a simulation which gave the student maximum freedom and responsibility in the design of the experiment and provided only the primary experimental…
The visual discrimination of bending.
Norman, J Farley; Wiesemann, Elizabeth Y; Norman, Hideko F; Taylor, M Jett; Craft, Warren D
2007-01-01
The sensitivity of observers to nonrigid bending was evaluated in two experiments. In both experiments, observers were required to discriminate on any given trial which of two bending rods was more elastic. In experiment 1, both rods bent within the same oriented plane, either a frontoparallel plane or a plane oriented in depth. In experiment 2, the two rods within any given trial bent in different, randomly chosen orientations in depth. The results of both experiments revealed that human observers are sensitive to, and can reliably detect, relatively small differences in bending (the average Weber fraction across experiments 1 and 2 was 9.0%). The performance of the human observers was compared to that of models that based their elasticity judgments upon either static projected curvature or mean and maximal projected speed. Despite the fact that all of the observers reported compelling 3-D perceptions of bending in depth, their judgments were both qualitatively and quantitatively consistent with the performance of the models. This similarity suggests that relatively straightforward information about the elasticity of simple bending objects is available in projected retinal images.
Sleiman, Z.; Tanos, V.; Van Belle, Y.; Carvalho, J.L.; Campo, R.
2015-01-01
The efficiency of the suturing training and testing (SUTT) model by laparoscopy was evaluated, measuring the suturing skill acquisition of trainee gynecologists at the beginning and at the end of a teaching course. During a workshop organized by the European Academy of Gynecological Surgery (EAGS), 25 participants with three different experience levels in laparoscopy (minor, intermediate and major) performed the 4 exercises of the SUTT model (Ex 1: both hands stitching and continuous suturing, Ex 2: right hand stitching and intracorporeal knotting, Ex 3: left hand stitching and intracorporeal knotting, Ex 4: dominant hand stitching, tissue approximation and intracorporeal knotting). The time needed to perform the exercises was recorded for each trainee and group, and statistical analysis was used to note the differences. Overall, all trainees achieved significant improvement in suturing time (p < 0.005) as measured before and after completion of the training. Similar significantly improved suturing time differences (p < 0.005) were noted among the groups of trainees with different laparoscopic experience. In conclusion, a short well-guided training course using the SUTT model significantly improves a surgeon’s laparoscopic suturing ability, independently of the level of experience in laparoscopic surgery. Key words: Endoscopy, laparoscopic suturing, psychomotor skills, surgery, teaching, training suturing model. PMID:26977264
Is it better to be average? High and low performance as predictors of employee victimization.
Jensen, Jaclyn M; Patel, Pankaj C; Raver, Jana L
2014-03-01
Given increased interest in whether targets' behaviors at work are related to their victimization, we investigated employees' job performance level as a precipitating factor for being victimized by peers in one's work group. Drawing on rational choice theory and the victim precipitation model, we argue that perpetrators take into consideration the risks of aggressing against particular targets, such that high performers tend to experience covert forms of victimization from peers, whereas low performers tend to experience overt forms of victimization. We further contend that the motivation to punish performance deviants will be higher when performance differentials are salient, such that the effects of job performance on covert and overt victimization will be exacerbated by group performance polarization, yet mitigated when the target has high equity sensitivity (benevolence). Finally, we investigate whether victimization is associated with future performance impairments. Results from data collected at 3 time points from 576 individuals in 62 work groups largely support the proposed model. The findings suggest that job performance is a precipitating factor to covert victimization for high performers and overt victimization for low performers in the workplace with implications for subsequent performance.
Real-time remote scientific model validation
NASA Technical Reports Server (NTRS)
Frainier, Richard; Groleau, Nicolas
1994-01-01
This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team is able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with the investigator's model, seven were confirmed and five were rejected or compromised.
NASA Technical Reports Server (NTRS)
White, R. J.
1973-01-01
A detailed description of Guyton's model and modifications are provided. Also included are descriptions of several typical experiments which the model can simulate to illustrate the model's general utility. A discussion of the problems associated with interfacing the model to other models, such as respiratory and thermal regulation models, is also included; this is of prime importance since these stimuli are not present in the current model. A user's guide for the operation of the model on the Xerox Sigma 3 computer is provided and two programs are described. A verification plan and procedure for performing experiments is also presented.
NASA Astrophysics Data System (ADS)
Javernick, Luke; Redolfi, Marco; Bertoldi, Walter
2018-05-01
New data collection techniques offer numerical modelers the ability to gather and utilize high quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed bed model configuration and examines the model's shear stress calculations, which are the foundation for predicting the sediment fluxes necessary for morphological simulations. The evaluation was conducted for three flow rates, and the model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of critical shear stress exceedance area compared to maps of observed bed mobility in a flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load, as well as the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal resolution data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
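A sketch of the exceedance-area comparison described above: given a modelled shear-stress field and a mask of observed bed mobility, compute the share of the observed active area where the modelled stress exceeds the critical value. The fields and the critical stress below are illustrative stand-ins for Delft3D output and flume observations:

```python
import numpy as np

def exceedance_agreement(tau, tau_crit, observed_active):
    """Fraction of the observed active area where modelled bed shear
    stress exceeds the critical value."""
    predicted_active = tau > tau_crit
    hit = np.logical_and(predicted_active, observed_active)
    return hit.sum() / observed_active.sum()

# toy 2-D fields standing in for a shear-stress map and a mobility map
rng = np.random.default_rng(2)
tau = rng.gamma(shape=2.0, scale=0.5, size=(50, 80))   # N/m^2, illustrative
observed = rng.random((50, 80)) < 0.3                  # observed bed mobility mask
print(f"agreement: {exceedance_agreement(tau, 1.0, observed):.0%}")
```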
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.
2009-08-07
This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have:
- Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2].
- Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas.
- Updated LSP to support the use of Prism’s multi-frequency opacity tables.
- Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies.
- Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP.
- Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations.
- Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments.
- Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments.
- Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output.
- Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP.
- Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables).
- Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments.
A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
ERIC Educational Resources Information Center
Noordzij, Matthijs L.; Zuidhoek, Sander; Postma, Albert
2006-01-01
The purpose of the present study is twofold: the first objective is to evaluate the importance of visual experience for the ability to form a spatial representation (spatial mental model) of fairly elaborate spatial descriptions. Secondly, we examine whether blind people exhibit the same preferences (i.e. level of performance on spatial tasks) as…
On the identifiability of the Hill-1948 model with one uniaxial tensile test
NASA Astrophysics Data System (ADS)
Bertin, Morgan; Hild, François; Roux, Stéphane
2017-06-01
A uniaxial experiment is performed on an ultra-thin specimen made of 17-7 precipitation hardened stainless steel. An anti-wrinkling setup allows for the characterization of the mechanical behavior with Integrated Digital Image Correlation (IDIC). The result shows that a single uniaxial experiment investigated via IDIC possesses enough data (and even more) to characterize a complete anisotropic elastoplastic model.
Divergence of gastropod life history in contrasting thermal environments in a geothermal lake.
Johansson, M P; Ermold, F; Kristjánsson, B K; Laurila, A
2016-10-01
Experiments using natural populations have provided mixed support for thermal adaptation models, probably because the conditions are often confounded with additional environmental factors like seasonality. The contrasting geothermal environments within Lake Mývatn, northern Iceland, provide a unique opportunity to evaluate thermal adaptation models using closely located natural populations. We conducted laboratory common garden and field reciprocal transplant experiments to investigate how thermal origin influences the life history of Radix balthica snails originating from stable cold (6 °C) or stable warm (23 °C) thermal environments or from areas with seasonal temperature variation. Supporting thermal optimality models, warm-origin snails survived poorly at 6 °C in the common garden experiment and better than cold-origin and seasonal-origin snails in the warm habitat in the reciprocal transplant experiment. Contrary to thermal adaptation models, growth rate in both experiments was highest in the warm populations irrespective of temperature, indicating cogradient variation. The optimal temperatures for growth and reproduction were similar irrespective of origin, but cold-origin snails always had the lowest performance, and seasonal-origin snails often performed at an intermediate level compared to snails originating in either stable environment. Our results indicate that central life-history traits can differ in their mode of evolution, with survival following the predictions of thermal optimality models, whereas ecological constraints have shaped the evolution of growth rates in local populations. © 2016 European Society For Evolutionary Biology.
Performance testing of a vertical Bridgman furnace using experiments and numerical modeling
NASA Astrophysics Data System (ADS)
Rosch, W. R.; Fripp, A. L.; Debnam, W. J.; Pendergrass, T. K.
1997-04-01
This paper details a portion of the work performed in preparation for the growth of lead tin telluride crystals during a Space Shuttle flight. A coordinated effort of experimental measurements and numerical modeling was completed to determine the optimum growth parameters and the performance of the furnace. This work was done using NASA's Advanced Automated Directional Solidification Furnace, but the procedures used should be equally valid for other vertical Bridgman furnaces.
Representing ductile damage with the dual domain material point method
Long, C. C.; Zhang, D. Z.; Bronkhorst, C. A.; ...
2015-12-14
In this study, we incorporate a ductile damage material model into a computational framework based on the Dual Domain Material Point (DDMP) method. As an example, simulations of a flyer plate experiment involving ductile void growth and material failure are performed. The results are compared with experiments performed on high purity tantalum. We also compare the numerical results obtained from the DDMP method with those obtained from the traditional Material Point Method (MPM). Effects of an overstress model, artificial viscosity, and physical viscosity are investigated. Our results show that a physical bulk viscosity and overstress model are important in this impact and failure problem, while physical shear viscosity and artificial shock viscosity have negligible effects. A simple numerical procedure with guaranteed convergence is introduced to solve for the equilibrium plastic state from the ductile damage model.
Lo, Julia C; Pluyter, Kari R; Meijer, Sebastiaan A
2016-02-01
The aim of this study was to examine individual markers of resilience and obtain quantitative insights into the understanding and the implications of variation and expertise levels in train traffic operators' goals and strategic mental models and their impact on performance. The Dutch railways are one of the world's most heavy utilized railway networks and have been identified to be weak in system and organizational resilience. Twenty-two train traffic controllers enacted two scenarios in a human-in-the-loop simulator. Their experience, goals, strategic mental models, and performance were assessed through questionnaires and simulator logs. Goals were operationalized through performance indicators and strategic mental models through train completion strategies. A variation was found between operators for both self-reported primary performance indicators and completion strategies. Further, the primary goal of only 14% of the operators reflected the primary organizational goal (i.e., arrival punctuality). An incongruence was also found between train traffic controllers' self-reported performance indicators and objective performance in a more disrupted condition. The level of experience tends to affect performance differently. There is a gap between primary organizational goals and preferred individual goals. Further, the relative strong diversity in primary operator goals and strategic mental models indicates weak resilience at the individual level. With recent and upcoming large-scale changes throughout the sociotechnical space of the railway infrastructure organization, the findings are useful to facilitate future railway traffic control and the development of a resilient system. © 2015, Human Factors and Ergonomics Society.
Probabilistic neural networks modeling of the 48-h LC50 acute toxicity endpoint to Daphnia magna.
Niculescu, S P; Lewis, M A; Tigner, J
2008-01-01
Two modeling experiments based on the maximum likelihood estimation paradigm and targeting prediction of the Daphnia magna 48-h LC50 acute toxicity endpoint for both organic and inorganic compounds are reported. The resulting models' computational algorithms are implemented as basic probabilistic neural networks with a Gaussian kernel (statistical corrections included). The first experiment uses strictly D. magna information for 971 structures as training/learning data, and the resulting model targets practical applications. The second experiment uses the same training/learning information plus additional data on another 29 compounds whose endpoint information originates from D. pulex and Ceriodaphnia dubia. It only targets investigation of the effect of mixing strictly D. magna 48-h LC50 modeling information with small amounts of similar information estimated from related species, and this is done as part of the validation process. A complementary 81-compound dataset (involving only strictly D. magna information) is used to perform external testing. On this external test set, the Gaussian character of the distribution of the residuals is confirmed for both models. This allows the use of traditional statistical methodology to implement computation of confidence intervals for the unknown measured values based on the models' predictions. Examples are provided for the model targeting practical applications. For the same model, a comparison with other existing models targeting the same endpoint is performed.
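A minimal sketch of a Gaussian-kernel probabilistic/general-regression network prediction in Nadaraya-Watson form; the descriptors, endpoint values, and bandwidth are hypothetical, not the authors' 971-structure training set or calibration:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=1.0):
    """Gaussian-kernel network prediction: a kernel-weighted average of
    training endpoints, centred on the query descriptor vector x."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(w @ y_train / w.sum())

# toy descriptor matrix and hypothetical log10 LC50 targets
X = np.array([[1.2, 0.3], [2.5, 1.1], [0.4, 2.0], [3.1, 0.7]])
y = np.array([0.8, 1.9, 0.2, 2.4])
print(grnn_predict(X, y, np.array([2.0, 1.0]), sigma=0.8))
```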
Performance Assessment in the PILOT Experiment On Board Space Stations Mir and ISS.
Johannes, Bernd; Salnitski, Vyacheslav; Dudukin, Alexander; Shevchenko, Lev; Bronnikov, Sergey
2016-06-01
The aim of this investigation into the performance and reliability of Russian cosmonauts in hand-controlled docking of a spacecraft on a space station (experiment PILOT) was to enhance overall mission safety and crew training efficiency. The preliminary findings on the Mir space station suggested that a break in docking training of about 90 d significantly degraded performance. Intensified experiment schedules on the International Space Station (ISS) have allowed for a monthly experiment using an on-board simulator. Therefore, instead of just three training tasks as on Mir, five training flights per session have been implemented on the ISS. This experiment was run in parallel with, but independently of, the operational docking training the cosmonauts receive. First, performance was compared between the experiments on the two space stations by nonparametric testing. Performance differed significantly between space stations preflight, in flight, and postflight. Second, performance was analyzed by linear mixed-effects (LME) modeling. The fixed factors space station, mission phase, training task number, and their interaction were analyzed. Cosmonauts were designated as a random factor. All fixed factors were found to be significant, and the interaction between stations and mission phase was also significant. In summary, performance on the ISS was shown to be significantly improved, thus enhancing mission safety. Additional approaches to docking performance assessment and prognosis are presented and discussed.
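A sketch of an LME analysis of this shape using statsmodels, with simulated records standing in for the PILOT data: fixed effects for station, mission phase, and their interaction, and a random intercept per cosmonaut (all effect sizes and noise levels below are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulate a balanced long-format dataset: one row per docking run
rng = np.random.default_rng(3)
stations = np.repeat(["Mir", "ISS"], 30)
phases = np.tile(np.repeat(["pre", "in", "post"], 10), 2)
cosmonauts = np.tile([f"c{i}" for i in range(10)], 6)
score = (7.0 + 0.8 * (stations == "ISS") + 0.3 * (phases == "in")
         + rng.normal(0, 0.5, 60))               # invented effects
df = pd.DataFrame({"score": score, "station": stations,
                   "phase": phases, "cosmonaut": cosmonauts})

# fixed effects: station, phase and their interaction;
# random intercept per cosmonaut (the random factor in the abstract)
model = smf.mixedlm("score ~ station * phase", df, groups=df["cosmonaut"])
print(model.fit().summary())
```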
Mikulincer, M
1986-12-01
Following the learned helplessness paradigm, I assessed in this study the effects of global and specific attributions for failure on the generalization of performance deficits in a dissimilar situation. Helplessness training consisted of experience with noncontingent failures on four cognitive discrimination problems attributed to either global or specific causes. Experiment 1 found that performance in a dissimilar situation was impaired following exposure to globally attributed failure. Experiment 2 examined the behavioral effects of the interaction between stable and global attributions of failure. Exposure to unsolvable problems resulted in reduced performance in a dissimilar situation only when failure was attributed to global and stable causes. Finally, Experiment 3 found that learned helplessness deficits were a product of the interaction of global and internal attribution. Performance deficits following unsolvable problems were recorded when failure was attributed to global and internal causes. Results were discussed in terms of the reformulated learned helplessness model.
Real-Time Performance Feedback for the Manual Control of Spacecraft
NASA Astrophysics Data System (ADS)
Karasinski, John Austin
Real-time performance metrics were developed to quantify workload, situational awareness, and manual task performance for use as visual feedback to pilots of aerospace vehicles. Results from prior lunar lander experiments with variable levels of automation were replicated and extended to provide insights for the development of real-time metrics. Increased levels of automation resulted in increased flight performance, lower workload, and increased situational awareness. Automated Speech Recognition (ASR) was employed to detect verbal callouts as a limited measure of subjects' situational awareness. A one-dimensional manual tracking task and simple instructor-model visual feedback scheme was developed. This feedback was indicated to the operator by changing the color of a guidance element on the primary flight display, similar to how a flight instructor points out elements of a display to a student pilot. Experiments showed that for this low-complexity task, visual feedback did not change subject performance, but did increase the subjects' measured workload. Insights gained from these experiments were applied to a Simplified Aid for EVA Rescue (SAFER) inspection task. The effects of variations of an instructor-model performance-feedback strategy on human performance in a novel SAFER inspection task were investigated. Real-time feedback was found to have a statistically significant effect of improving subject performance and decreasing workload in this complicated four degree of freedom manual control task with two secondary tasks.
Reduced Gravity Studies of Soret Transport Effects in Liquid Fuel Combustion
NASA Technical Reports Server (NTRS)
Shaw, Benjamin D.
2004-01-01
Soret transport, which is mass transport driven by thermal gradients, can be important in practical flames as well as laboratory flames by influencing transport of low molecular weight species (e.g., monatomic and diatomic hydrogen). In addition, gas-phase Soret transport of high molecular weight fuel species that are present in practical liquid fuels (e.g., octane or methanol) can be significant in practical flames (Rosner et al., 2000; Dakhlia et al., 2002) and in high pressure droplet evaporation (Curtis and Farrell, 1992), and it has also been shown that Soret transport effects can be important in determining oxygen diffusion rates in certain classes of microgravity droplet combustion experiments (Aharon and Shaw, 1998). It is thus useful to obtain information on flames under conditions where Soret effects can be clearly observed. This research is concerned with investigating effects of Soret transport on combustion of liquid fuels, in particular liquid fuel droplets. Reduced-gravity is employed to provide an ideal (spherically-symmetrical) experimental model with which to investigate effects of Soret transport on combustion. The research will involve performing reduced-gravity experiments on combustion of liquid fuel droplets in environments where Soret effects significantly influence transport of fuel and oxygen to flame zones. Experiments will also be performed where Soret effects are not expected to be important. Droplets initially in the 0.5 to 1 mm size range will be burned. Data will be obtained on influences of Soret transport on combustion characteristics (e.g., droplet burning rates, droplet lifetimes, gas-phase extinction, and transient flame behaviors) under simplified geometrical conditions that are most amenable to theoretical modeling (i.e., spherical symmetry). The experiments will be compared with existing theoretical models as well as new models that will be developed. Normal gravity experiments will also be performed.
NASA Astrophysics Data System (ADS)
Tosi, Daniele; Saccomandi, Paola; Schena, Emiliano; Duraibabu, Dinesh B.; Poeggel, Sven; Adilzhan, Abzal; Aliakhmet, Kamilla; Silvestri, Sergio; Leen, Gabriel; Lewis, Elfed
2016-05-01
Optical fibre sensors have been applied to perform biophysical measurements during ex-vivo laser ablation (LA) of an animal pancreas phantom. Experiments were performed using Fibre Bragg Grating (FBG) arrays for spatially resolved temperature detection, and an all-glass Extrinsic Fabry-Perot Interferometer (EFPI) for pressure measurement. Results using a Nd:YAG laser source as the ablation device are presented and discussed.
Habilitation thesis on STT and Higgs searches in WH production (in FRENCH)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonnenschein, Lars
The detector of the D0 experiment at the proton-antiproton collider Tevatron in Run II is discussed in detail. The performance of the collider and the experiment is presented. Standard model Higgs searches with integrated luminosities between 260 pb⁻¹ and 950 pb⁻¹ and their combination are performed. No deviation from the SM background expectation has been observed. Sensitivity prospects at the Tevatron are shown.
McCauley, Peter; Kalachev, Leonid V.; Mollicone, Daniel J.; Banks, Siobhan; Dinges, David F.; Van Dongen, Hans P. A.
2013-01-01
Recent experimental observations and theoretical advances have indicated that the homeostatic equilibrium for sleep/wake regulation—and thereby sensitivity to neurobehavioral impairment from sleep loss—is modulated by prior sleep/wake history. This phenomenon was predicted by a biomathematical model developed to explain changes in neurobehavioral performance across days in laboratory studies of total sleep deprivation and sustained sleep restriction. The present paper focuses on the dynamics of neurobehavioral performance within days in this biomathematical model of fatigue. Without increasing the number of model parameters, the model was updated by incorporating time-dependence in the amplitude of the circadian modulation of performance. The updated model was calibrated using a large dataset from three laboratory experiments on psychomotor vigilance test (PVT) performance, under conditions of sleep loss and circadian misalignment; and validated using another large dataset from three different laboratory experiments. The time-dependence of circadian amplitude resulted in improved goodness-of-fit in night shift schedules, nap sleep scenarios, and recovery from prior sleep loss. The updated model predicts that the homeostatic equilibrium for sleep/wake regulation—and thus sensitivity to sleep loss—depends not only on the duration but also on the circadian timing of prior sleep. This novel theoretical insight has important implications for predicting operator alertness during work schedules involving circadian misalignment such as night shift work. Citation: McCauley P; Kalachev LV; Mollicone DJ; Banks S; Dinges DF; Van Dongen HPA. Dynamic circadian modulation in a biomathematical model for the effects of sleep and sleep loss on waking neurobehavioral performance. SLEEP 2013;36(12):1987-1997. PMID:24293775
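As a rough illustration of the structure described above (a sketch of the general form, not the authors' calibrated equations), a prediction of this type combines a homeostatic component with a circadian waveform whose amplitude is allowed to vary with time. The harmonic coefficients below are the five-harmonic values commonly quoted in the two-process literature and serve only as placeholders:

```python
import numpy as np

# Commonly quoted five-harmonic circadian waveform coefficients (placeholders,
# not the calibrated parameters of the model discussed above).
A_COEF = [0.97, 0.22, 0.07, 0.03, 0.001]

def circadian(t_hours, phase=18.0):
    """Normalized circadian process C(t) as a sum of five harmonics."""
    x = 2 * np.pi * (t_hours - phase) / 24.0
    return sum(a * np.sin((i + 1) * x) for i, a in enumerate(A_COEF))

def predicted_performance(t_hours, homeostat, amplitude):
    """Performance = homeostatic pressure + time-varying circadian modulation.

    The key modification described in the abstract is that `amplitude` is a
    function of time rather than a constant."""
    return homeostat(t_hours) + amplitude(t_hours) * circadian(t_hours)

# Toy usage: sleep pressure building linearly across 24 h of wakefulness,
# with a slowly growing circadian amplitude (illustrative functions only).
p = predicted_performance(np.arange(0.0, 24.0, 0.5),
                          homeostat=lambda t: 0.5 * t,
                          amplitude=lambda t: 1.0 + 0.02 * t)
```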
Models and Measurements Intercomparison 2
NASA Technical Reports Server (NTRS)
Park, Jae H. (Editor); Ko, Malcolm K. W. (Editor); Jackman, Charles H. (Editor); Plumb, R. Alan (Editor); Kaye, Jack A. (Editor); Sage, Karen H. (Editor)
1999-01-01
Models and Measurement Intercomparison II (MM II) summarizes the intercomparison of results from model simulations and observations of stratospheric species. Representatives from twenty-three modeling groups using twenty-nine models participated in the MM II exercises between 1996 and 1999. Twelve of the models were two-dimensional zonal-mean models, while seventeen were three-dimensional models. This was an international effort, as seven of the groups were from outside the United States. Six transport experiments and five chemistry experiments were designed for the various models. Models participating in the transport experiments performed simulations of chemically inert tracers, providing diagnostics for transport. The chemistry experiments involved simulating the distributions of chemically active trace gases, including ozone. The model run conditions for dynamics and chemistry were prescribed in order to minimize the factors that caused differences between the models. The report includes a critical review of the results by the participants and a discussion of the causes of differences between modeled and measured results, as well as between results from different models. A sizable effort went into preparation of the database of observations, which included a new climatology for ozone. The report should help in evaluating the results from the various predictive models used for assessing human perturbations of the stratosphere.
Performance related issues in distributed database systems
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
The key elements of research performed during the year-long effort of this project were to: investigate the effects of heterogeneity in distributed real-time systems; study the requirements of TRAC toward building a heterogeneous database system; study the effects of performance modeling on distributed database performance; and experiment with an ORACLE-based heterogeneous system.
Human Resource Scheduling in Performing a Sequence of Discrete Responses
2009-02-28
… each is a graph comparing simulated results of each respective model with data from Experiment 3b. As described below, the parameters of the model … initiated in parallel with ongoing Central operations on another. To fix model parameters, we estimated the range of times to perform the sum of the … standard deviation for each parameter was set to 50% of the mean value. Initial simulations found no meaningful differences between setting the standard …
Felipe-Sesé, Luis; López-Alba, Elías; Hannemann, Benedikt; Schmeer, Sebastian; Diaz, Francisco A
2017-06-28
A quasi-static indentation numerical analysis of a round-section specimen made of a soft material has been performed and validated with a full-field experimental technique, i.e., 3D Digital Image Correlation. The contact experiment consisted of loading a 25 mm diameter rubber cylinder up to a 5 mm indentation and then unloading. Experimental strain fields measured at the surface of the specimen during the experiment were compared with those obtained from two numerical analyses employing two different hyperelastic material models. The comparison was performed using a new Image Decomposition methodology that makes possible a direct comparison of full-field data independently of their scale or orientation. The numerical results show a good level of agreement with those measured during the experiments. However, since Image Decomposition allows the differences to be quantified, it was observed that one of the adopted material models yields smaller differences with respect to the experimental results.
NASA Technical Reports Server (NTRS)
Allan, Brian G.; Owens, Lewis R.; Lin, John C.
2006-01-01
This research will investigate the use of Design-of-Experiments (DOE) in the development of an optimal passive flow control vane design for a boundary-layer-ingesting (BLI) offset inlet in transonic flow. This inlet flow control is designed to minimize the engine fan-face distortion levels and the first five Fourier harmonic half amplitudes while maximizing the inlet pressure recovery. Numerical simulations of the BLI inlet are computed using the Reynolds-averaged Navier-Stokes (RANS) flow solver, OVERFLOW, developed at NASA. These simulations are used to generate the numerical experiments for the DOE response surface model. In this investigation, two DOE optimizations were performed using a D-Optimal Response Surface model. The first DOE optimization used four design factors: vane height and angle-of-attack for two groups of vanes. One group of vanes was placed at the bottom of the inlet and a second group symmetrically on the sides. The DOE design was performed for a BLI inlet with a free-stream Mach number of 0.85 and a Reynolds number of 2 million, based on the length of the fan-face diameter, matching an experimental wind tunnel BLI inlet test. The first DOE optimization required a fifth-order model having 173 numerical simulation experiments and was able to reduce the DC60 baseline distortion from 64% down to 4.4%, while holding the pressure recovery constant. A second DOE optimization was performed holding the vane heights at a constant value from the first DOE optimization, with the two vane angles-of-attack as design factors. This DOE required only a second-order model fit with 15 numerical simulation experiments and reduced DC60 to 3.5%, with small decreases in the fourth and fifth harmonic amplitudes. The second optimal vane design was tested at the NASA Langley 0.3-Meter Transonic Cryogenic Tunnel in a BLI inlet experiment. The experimental results showed an 80% reduction of DPCPavg, the circumferential distortion level at the engine fan-face.
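As background on how a D-optimal response surface of this kind is used: once the DOE runs are complete, the response (here, a distortion metric such as DC60) is regressed on a polynomial in the design factors, and the fitted surface is searched for an optimum. The sketch below fits a generic second-order surface in two factors by least squares; the 15-run/2-factor sizing mirrors the second DOE above, but the data and factor names are hypothetical:

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design_matrix(X):
    """Columns: intercept, linear terms x_i, and all products x_i*x_j (i <= j)."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(k), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 2))     # 15 runs, 2 factors (e.g., two vane angles)
y = 4.0 - 1.2 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 0] ** 2 \
    + rng.normal(0, 0.05, 15)            # synthetic distortion-like response
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
# beta holds the fitted surface coefficients; minimizing the fitted surface
# suggests the next candidate design point.
```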
Retardation of mobile radionuclides in granitic rock fractures by matrix diffusion
NASA Astrophysics Data System (ADS)
Hölttä, P.; Poteri, A.; Siitari-Kauppi, M.; Huittinen, N.
Transport of iodide and sodium has been studied by means of block fracture and core column experiments to evaluate the simplified radionuclide transport concept. The objectives were to examine the processes causing retention in solute transport, especially matrix diffusion, and to estimate their importance during transport at different scales and flow conditions. Block experiments were performed using a Kuru Grey granite block having a horizontally planar natural fracture. Core columns were constructed from cores drilled orthogonal to the fracture of the granite block. Several tracer tests were performed using uranine, 131I, and 22Na as tracers at water flow rates of 0.7-50 μL min⁻¹. Transport of the tracers was modelled by applying an advection-dispersion model based on generalized Taylor dispersion, extended with matrix diffusion. Scoping calculations were combined with experiments to test the model concepts. Two different experimental configurations could be modelled applying consistent transport processes and parameters. The processes, advection-dispersion and matrix diffusion, were conceptualized with sufficient accuracy to replicate the experimental results. The effects of matrix diffusion were demonstrated on the slightly sorbing sodium and mobile iodide breakthrough curves.
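For context, a widely used single-fracture formulation of this advection-dispersion-matrix-diffusion concept (in the spirit of Tang et al., 1981; shown here only as an illustrative form, not necessarily the authors' exact equations) couples transport along the fracture (coordinate \(x\)) to diffusion into the rock matrix (coordinate \(z\)):

\[
R_f \frac{\partial c_f}{\partial t} + v\,\frac{\partial c_f}{\partial x}
= D_L \frac{\partial^2 c_f}{\partial x^2}
+ \frac{\theta_m D_e}{b}\left.\frac{\partial c_m}{\partial z}\right|_{z=b},
\qquad
R_m \frac{\partial c_m}{\partial t} = D_e \frac{\partial^2 c_m}{\partial z^2},
\]

where \(c_f\) and \(c_m\) are the concentrations in the fracture and in the matrix pore water, \(v\) the advection velocity, \(D_L\) the longitudinal dispersion coefficient, \(D_e\) the effective matrix diffusivity, \(\theta_m\) the matrix porosity, \(b\) the fracture half-aperture, and \(R_f\), \(R_m\) retardation factors for sorbing tracers such as sodium.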
NASA Technical Reports Server (NTRS)
Sakuraba, K.; Tsuruda, Y.; Hanada, T.; Liou, J.-C.; Akahoshi, Y.
2007-01-01
This paper summarizes two new satellite impact tests conducted to investigate the outcome of low- and hyper-velocity impacts on two identical target satellites. The first experiment was performed at a low velocity of 1.5 km/s using a 40-gram aluminum alloy sphere, whereas the second experiment was performed at a hyper-velocity of 4.4 km/s using a 4-gram aluminum alloy sphere, launched by a two-stage light gas gun at the Kyushu Institute of Technology. To date, approximately 1,500 fragments from each impact test have been collected for detailed analysis. Each piece was analyzed based on the method used in the NASA Standard Breakup Model 2000 revision. The detailed analysis leads to two conclusions: 1) the similarity in the mass distribution of fragments between low- and hyper-velocity impacts encourages the development of a general-purpose distribution model applicable over a wide impact velocity range, and 2) the difference in area-to-mass ratio distribution between the impact experiments and the NASA standard breakup model suggests describing the area-to-mass ratio by a bi-normal distribution.
Control of large flexible structures - An experiment on the NASA Mini-Mast facility
NASA Technical Reports Server (NTRS)
Hsieh, Chen; Kim, Jae H.; Liu, Ketao; Zhu, Guoming; Skelton, Robert E.
1991-01-01
The output variance constraint controller design procedure is integrated with model reduction by modal cost analysis. A procedure is given for tuning MIMO controller designs to find the maximal rms performance of the actual system. Controller designs based on a finite-element model of the system are compared with controller designs based on an identified model (obtained using the Q-Markov Cover algorithm). The identified model and the finite-element model led to similar closed-loop performance, when tested in the Mini-Mast facility at NASA Langley.
NASA Astrophysics Data System (ADS)
Menicucci, D. F.
1986-01-01
The performance of a photovoltaic (PV) system is affected by its mounting configuration. The optimal configuration is unclear because of a lack of experience and data. Sandia National Laboratories, Albuquerque (SNLA), has conducted a controlled field experiment to compare four of the most common module mounting configurations. The data from the experiment were used to verify the accuracy of PVFORM, a new computer program that simulates PV performance. PVFORM was then used to simulate the performance of identical PV modules on different mounting configurations at 10 sites throughout the US. This report describes the module mounting configurations, the experimental methods used, the specialized statistical techniques used in the analysis, and the final results of the effort. The module mounting configurations are rank-ordered at each site according to their annual and seasonal energy production performance, and each is briefly discussed in terms of its advantages and disadvantages in various applications.
NASA Technical Reports Server (NTRS)
Clare, L. P.; Yan, T.-Y.
1985-01-01
The analysis of the ALOHA random access protocol for communications channels with fading is presented. The protocol is modified to send multiple contiguous copies of a message at each transmission attempt. Both pure and slotted ALOHA channels are considered. A general two state model is used for the channel error process to account for the channel fading memory. It is shown that greater throughput and smaller delay may be achieved using repetitions. The model is applied to the analysis of the delay-throughput performance in a fading mobile communications environment. Numerical results are given for NASA's Mobile Satellite Experiment.
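A toy Monte Carlo version of the setting analyzed above is straightforward to write down: a slotted ALOHA channel whose error process follows a two-state (good/bad) Markov chain, so that fading memory suppresses otherwise-successful slots. All parameter values are illustrative, and the paper's contiguous-repetition mechanism is omitted for brevity:

```python
import numpy as np

def slotted_aloha_throughput(G=0.5, p_gb=0.05, p_bg=0.3,
                             n_slots=200_000, seed=1):
    """Throughput of slotted ALOHA over a two-state Markov ("fading") channel.

    G:    offered load (mean transmission attempts per slot, Poisson)
    p_gb: P(good -> bad) channel transition per slot
    p_bg: P(bad -> good) channel transition per slot
    """
    rng = np.random.default_rng(seed)
    good, successes = True, 0
    for _ in range(n_slots):
        attempts = rng.poisson(G)
        if good and attempts == 1:   # exactly one sender and a good channel
            successes += 1
        # advance the two-state Markov channel
        good = (rng.random() > p_gb) if good else (rng.random() < p_bg)
    return successes / n_slots

# Compare with the ideal-channel slotted ALOHA result S = G*exp(-G) ~ 0.303 at G = 0.5.
print(slotted_aloha_throughput())
```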
Physician Utilization of a Hospital Information System: A Computer Simulation Model
Anderson, James G.; Jay, Stephen J.; Clevenger, Stephen J.; Kassing, David R.; Perry, Jane; Anderson, Marilyn M.
1988-01-01
The purpose of this research was to develop a computer simulation model that represents the process through which physicians enter orders into a hospital information system (HIS). Computer simulation experiments were performed to estimate the effects of two methods of order entry on outcome variables. The results of the computer simulation experiments were used to perform a cost-benefit analysis to compare the two different means of entering medical orders into the HIS. The results indicate that the use of personal order sets to enter orders into the HIS will result in a significant reduction in manpower, salaries and fringe benefits, and errors in order entry.
NASA Astrophysics Data System (ADS)
Butchart, Neal; Anstey, James A.; Hamilton, Kevin; Osprey, Scott; McLandress, Charles; Bushell, Andrew C.; Kawatani, Yoshio; Kim, Young-Ha; Lott, Francois; Scinocca, John; Stockdale, Timothy N.; Andrews, Martin; Bellprat, Omar; Braesicke, Peter; Cagnazzo, Chiara; Chen, Chih-Chieh; Chun, Hye-Yeong; Dobrynin, Mikhail; Garcia, Rolando R.; Garcia-Serrano, Javier; Gray, Lesley J.; Holt, Laura; Kerzenmacher, Tobias; Naoe, Hiroaki; Pohlmann, Holger; Richter, Jadwiga H.; Scaife, Adam A.; Schenzinger, Verena; Serva, Federico; Versick, Stefan; Watanabe, Shingo; Yoshida, Kohei; Yukimoto, Seiji
2018-03-01
The Stratosphere-troposphere Processes And their Role in Climate (SPARC) Quasi-Biennial Oscillation initiative (QBOi) aims to improve the fidelity of tropical stratospheric variability in general circulation and Earth system models by conducting coordinated numerical experiments and analysis. In the equatorial stratosphere, the QBO is the most conspicuous mode of variability. Five coordinated experiments have therefore been designed to (i) evaluate and compare the verisimilitude of modelled QBOs under present-day conditions, (ii) identify robustness (or alternatively the spread and uncertainty) in the simulated QBO response to commonly imposed changes in model climate forcings (e.g. a doubling of CO2 amounts), and (iii) examine model dependence of QBO predictability. This paper documents these experiments and the recommended output diagnostics. The rationale behind the experimental design and choice of diagnostics is presented. To facilitate scientific interpretation of the results in other planned QBOi studies, consistent descriptions of the models performing each experiment set are given, with those aspects particularly relevant for simulating the QBO tabulated for easy comparison.
Spain, Seth M; Miner, Andrew G; Kroonenberg, Pieter M; Drasgow, Fritz
2010-08-06
Questions about the dynamic processes that drive behavior at work have been the focus of increasing attention in recent years. Models describing behavior at work and research on momentary behavior indicate that substantial variation exists within individuals. This article examines the rationale behind this body of work and explores a method of analyzing momentary work behavior using experience sampling methods. The article also examines a previously unused set of methods for analyzing data produced by experience sampling, known collectively as multiway component analysis. Two archetypal techniques of this family, the parallel factor analysis (PARAFAC) and Tucker3 models, are used to analyze data from Miner, Glomb, and Hulin's (2010) experience sampling study of work behavior. The efficacy of these techniques for analyzing experience sampling data is discussed, as are the substantive multiway component models obtained.
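To make the two decompositions concrete, here is a brief sketch using the open-source TensorLy library (assuming its parafac/tucker interface) on a synthetic persons × behaviors × occasions array standing in for experience sampling data:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac, tucker

# Synthetic stand-in for ESM data: 30 persons x 8 behavior ratings x 20 occasions.
X = tl.tensor(np.random.default_rng(0).standard_normal((30, 8, 20)))

# PARAFAC: the same number of components in every mode.
cp_weights, cp_factors = parafac(X, rank=2)

# Tucker3: each mode may have its own rank, linked through a core array.
core, tucker_factors = tucker(X, rank=[3, 2, 2])

# cp_factors / tucker_factors hold one loading matrix per mode (persons,
# behaviors, occasions), which is what gets interpreted substantively.
```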
NASA Astrophysics Data System (ADS)
Murphy, T. J.; Douglas, M. R.; Cardenas, T.; Cooley, J. H.; Gunderson, M. A.; Haines, B. M.; Hamilton, C. E.; Kim, Y.; Lee, M. N.; Oertel, J. A.; Olson, R. E.; Randolph, R. B.; Shah, R. C.; Smidt, J. M.
2017-10-01
The MARBLE campaign on NIF investigates the effect of heterogeneous mix on thermonuclear burn for comparison to a probability distribution function (PDF) burn model. MARBLE utilizes plastic capsules filled with deuterated plastic foam and tritium gas. The ratio of DT to DD neutron yield is indicative of the degree to which the foam and the gas atomically mix. Platform development experiments have been performed to understand the behavior of the foam and of the gas separately using two types of capsule. The first experiments using deuterated foam and tritium gas have been performed. Results of these experiments, and the implications for our understanding of thermonuclear burn in heterogeneously mixed separated reactant experiments will be discussed. This work is supported by US DOE/NNSA, performed at LANL, operated by LANS LLC under contract DE-AC52-06NA25396.
Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; ...
2015-10-27
We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more long-wave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. In conclusion, this is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.
Modeling Ullage Dynamics of Tank Pressure Control Experiment during Jet Mixing in Microgravity
NASA Technical Reports Server (NTRS)
Kartuzova, O.; Kassemi, M.
2016-01-01
A CFD model for simulating the fluid dynamics of the jet-induced mixing process is utilized in this paper to model the pressure control portion of the Tank Pressure Control Experiment (TPCE) in microgravity. The Volume of Fluid (VOF) method is used for modeling the dynamics of the interface during mixing. The simulations were performed over a range of jet Weber numbers, from non-penetrating to fully penetrating. Two different initial ullage positions were considered. The computational results for the jet-ullage interaction are compared with still images from video of the experiment. A qualitative comparison shows that the CFD model was able to capture the main features of the interfacial dynamics, as well as the jet penetration of the ullage.
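Since the jet Weber number is the organizing parameter of these simulations, a small helper makes the regime classification explicit. The definition below (jet inertia versus surface tension) is one common convention, and the example property values are assumptions for illustration, not taken from the paper:

```python
def jet_weber_number(rho, v, d, sigma):
    """We = rho * v**2 * d / sigma  (one common definition; conventions vary).

    rho:   liquid density [kg/m^3]
    v:     jet velocity at the interface [m/s]
    d:     jet diameter [m]
    sigma: surface tension [N/m]
    """
    return rho * v ** 2 * d / sigma

# Illustrative values only: a low-We jet versus a much more energetic one.
print(jet_weber_number(rho=70.0, v=0.1, d=0.01, sigma=2.0e-3))   # ~ 3.5
print(jet_weber_number(rho=70.0, v=1.0, d=0.01, sigma=2.0e-3))   # ~ 350
```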
ERIC Educational Resources Information Center
Engdahl, Eric
2012-01-01
This article highlights the East Bay Center for the Performing Arts in Richmond, California, which is one successful model of a community-based arts education organization whose central mission is to provide these deep art-rich experiences for students from low socio-economic status (SES) communities, who in this instance are predominately African…
CRBR pump water test experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, M.E.; Huber, K.A.
1983-01-01
The hydraulic design features and water testing of the hydraulic scale model and prototype pump of the sodium pumps used in the primary and intermediate sodium loops of the Clinch River Breeder Reactor Plant (CRBRP) are described. The results of the hydraulic scale model tests and of the prototype pump tests are presented and discussed.
ERIC Educational Resources Information Center
Ohio State Dept. of Education, Columbus. Div. of Special Education.
This report describes three model demonstration projects in Ohio school districts which focused on strategies for identifying students gifted in visual and performing arts and delivering hands-on arts education and appreciation experiences. Presented for each program is information on: identifying characteristics (district, location, school…
Model Selection in Systems Biology Depends on Experimental Design
Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.
2014-01-01
Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483
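The central claim here, that which model is selected can depend on which experiment is performed, is easy to reproduce in miniature. The sketch below (a toy, not the paper's stochastic state-space framework) fits two deliberately wrong candidate models to data from a saturating "true" system under two sampling designs and ranks them by AIC; the selected model flips with the design:

```python
import numpy as np
from scipy.optimize import curve_fit

def aic(y, yhat, k):
    """Akaike information criterion for a least-squares fit with k parameters."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

true = lambda t: 2.0 * t / (0.5 + t)                 # saturating "true" system
m_lin = lambda t, a: a * t                           # candidate 1: linear
m_exp = lambda t, a, b: a * (1 - np.exp(-b * t))     # candidate 2: exponential rise

rng = np.random.default_rng(3)
for name, t in [("early-time design", np.linspace(0.05, 0.4, 12)),
                ("late-time design", np.linspace(0.5, 5.0, 12))]:
    y = true(t) + rng.normal(0, 0.05, t.size)
    p1, _ = curve_fit(m_lin, t, y)
    p2, _ = curve_fit(m_exp, t, y, p0=[2.0, 2.0])
    winner = "linear" if aic(y, m_lin(t, *p1), 1) < aic(y, m_exp(t, *p2), 2) \
             else "exponential"
    print(name, "selects:", winner)
```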
Verschuur, Carl
2009-03-01
Difficulties in speech recognition experienced by cochlear implant users may be attributed both to information loss caused by signal processing and to information loss associated with the interface between the electrode array and auditory nervous system, including cross-channel interaction. The objective of the work reported here was to attempt to partial out the relative contribution of these different factors to consonant recognition. This was achieved by comparing patterns of consonant feature recognition as a function of channel number and presence/absence of background noise in users of the Nucleus 24 device with normal hearing subjects listening to acoustic models that mimicked processing of that device. Additionally, in the acoustic model experiment, a simulation of cross-channel spread of excitation, or "channel interaction," was varied. Results showed that acoustic model experiments were highly correlated with patterns of performance in better-performing cochlear implant users. Deficits to consonant recognition in this subgroup could be attributed to cochlear implant processing, whereas channel interaction played a much smaller role in determining performance errors. The study also showed that large changes to channel number in the Advanced Combination Encoder signal processing strategy led to no substantial changes in performance.
Operational experience with VAWT blades. [structural performance
NASA Technical Reports Server (NTRS)
Sullivan, W. N.
1979-01-01
The structural performance of 17-meter-diameter wind turbine rotors is discussed. Test results for typical steady and vibratory stress measurements are summarized, along with predicted values of stress based on a quasi-static finite element model.
Objective assessment of operator performance during ultrasound-guided procedures.
Tabriz, David M; Street, Mandie; Pilgram, Thomas K; Duncan, James R
2011-09-01
Simulation permits objective assessment of operator performance in a controlled and safe environment. Image-guided procedures often require accurate needle placement, and we designed a system to assess how ultrasound guidance is used to monitor needle advancement toward a target. The results were correlated with other estimates of operator skill. The simulator consisted of a tissue phantom, an ultrasound unit, and an electromagnetic tracking system. Operators were asked to guide a needle toward a visible point target. Performance was video-recorded and synchronized with the electromagnetic tracking data. A series of algorithms based on motor control theory and human information processing were used to convert raw tracking data into different performance indices. Scoring algorithms converted the tracking data into efficiency, quality, task difficulty, and targeting scores that were aggregated to create performance indices. After initial feasibility testing, a standardized assessment was developed. Operators (N = 12) with a broad spectrum of skill and experience were enrolled and tested. Overall scores were based on performance during ten simulated procedures. Prior clinical experience was used to independently estimate operator skill. When summed, the performance indices correlated well with estimated skill. Operators with minimal or no prior experience scored markedly lower than experienced operators, and the overall score tended to increase with the operator's clinical experience. Operator experience was linked to decreased variation in multiple aspects of performance. The aggregated results of multiple trials provided the best correlation between estimated skill and performance. A metric for the operator's ability to keep the needle aimed at the target discriminated between operators with different levels of experience. This study used a highly focused task model, a standardized assessment, and objective data analysis to assess performance during simulated ultrasound-guided needle placement. The performance indices were closely related to operator experience.
Terminal velocity of a shuttlecock in vertical fall
NASA Astrophysics Data System (ADS)
Peastrel, Mark; Lynch, Rosemary; Armenti, Angelo
1980-07-01
We have performed a straightforward vertical fall experiment for a case where the effects of air resistance are important and directly measurable. Using a commonly available badminton shuttlecock, a tape measure, and a millisecond timer, the times required for the shuttlecock to fall given distances (up to almost ten meters) were accurately measured. The experiment was performed in an open stairwell. The experimental data were compared to the predictions of several models. The best fit was obtained with the model which assumes a resistive force quadratic in the instantaneous speed of the falling object. This model was fitted to the experimental data, enabling us to estimate the terminal velocity of the shuttlecock (6.80 m/s). The results indicate that, starting from rest, the vertically falling shuttlecock achieves 99% of its terminal velocity in 1.84 s, after falling 9.2 m. The relative ease of collecting the data, as well as the excellent agreement with theory, makes this an ideal experiment for use in physics courses at a variety of levels.
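The quoted numbers can be checked directly from the closed-form solution for free fall with drag quadratic in speed (a standard result, used here as a consistency check rather than the authors' exact fit):

```python
import numpy as np

# For quadratic drag, v(t) = v_t * tanh(g*t / v_t) and the distance fallen is
# y(t) = (v_t**2 / g) * ln(cosh(g*t / v_t)).
g, v_t = 9.8, 6.80                      # m/s^2; fitted terminal velocity in m/s
t99 = (v_t / g) * np.arctanh(0.99)      # time to reach 99% of terminal velocity
y99 = (v_t ** 2 / g) * np.log(np.cosh(g * t99 / v_t))
print(f"99% of v_t after {t99:.2f} s and {y99:.1f} m")   # ~1.84 s, ~9.2 m
```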
Strategies to intervene on causal systems are adaptively selected.
Coenen, Anna; Rehder, Bob; Gureckis, Todd M
2015-06-01
How do people choose interventions to learn about causal systems? Here, we consider two possibilities. First, we test an information sampling model, information gain, which values interventions that can discriminate between a learner's hypotheses (i.e., possible causal structures). We compare this discriminatory model to a positive testing strategy that instead aims to confirm individual hypotheses. Experiment 1 shows that individual behavior is described best by a mixture of these two alternatives. In Experiment 2 we find that people are able to adaptively alter their behavior and adopt the discriminatory model more often after experiencing that the confirmatory strategy leads to a subjective performance decrement. In Experiment 3, time pressure leads to the opposite effect of inducing a change towards the simpler positive testing strategy. These findings suggest that there is no single strategy that describes how intervention decisions are made. Instead, people select strategies in an adaptive fashion that trades off their expected performance and cognitive effort.
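The information gain criterion mentioned above has a compact form: the value of an intervention is the expected reduction in Shannon entropy over the hypothesis set. A minimal sketch follows, with a hypothetical two-hypothesis likelihood table standing in for the experiment's causal structures:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_information_gain(prior, likelihood):
    """likelihood[h, o] = P(outcome o | hypothesis h, intervention)."""
    p_outcome = prior @ likelihood                     # marginal over outcomes
    eig = entropy(prior)
    for o, po in enumerate(p_outcome):
        if po > 0:
            posterior = prior * likelihood[:, o] / po  # Bayes update
            eig -= po * entropy(posterior)
    return eig

prior = np.array([0.5, 0.5])            # two candidate causal structures
lik = np.array([[0.9, 0.1],             # P(outcome | structure 1)
                [0.2, 0.8]])            # P(outcome | structure 2)
print(expected_information_gain(prior, lik))  # bits gained by this intervention
```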
ERIC Educational Resources Information Center
Schubert, T. F.; Jacobitz, F. G.; Kim, E. M.
2009-01-01
In order to meet changing curricular needs, an electric motor and generator laboratory experience was designed, implemented, and assessed. The experiment is unusual in its early placement in the curriculum and in that it focuses on modeling electric motors, predicting their performance, and measuring efficiency of energy conversion. While…
Maritime Continent seasonal climate biases in AMIP experiments of the CMIP5 multimodel ensemble
NASA Astrophysics Data System (ADS)
Toh, Ying Ying; Turner, Andrew G.; Johnson, Stephanie J.; Holloway, Christopher E.
2018-02-01
The fidelity of 28 Coupled Model Intercomparison Project phase 5 (CMIP5) models in simulating mean climate over the Maritime Continent in the Atmospheric Model Intercomparison Project (AMIP) experiment is evaluated in this study. The performance of AMIP models varies greatly in reproducing seasonal mean climate and the seasonal cycle. The multi-model mean has better skill at reproducing the observed mean climate than the individual models. The spatial pattern of 850 hPa wind is better simulated than the precipitation in all four seasons. We found that model horizontal resolution is not a good indicator of model performance. Instead, a model's local Maritime Continent biases are somewhat related to its biases in the local Hadley circulation and global monsoon. The comparison with coupled models in CMIP5 shows that AMIP models generally performed better than coupled models in the simulation of the global monsoon and local Hadley circulation but less well at simulating the Maritime Continent annual cycle of precipitation. To characterize model systematic biases in the AMIP runs, we performed cluster analysis on Maritime Continent annual cycle precipitation. Our analysis resulted in two distinct clusters. Cluster I models are able to capture both the winter monsoon and summer monsoon shift, but they overestimate the precipitation; especially during the JJA and SON seasons. Cluster II models simulate weaker seasonal migration than observed, and the maximum rainfall position stays closer to the equator throughout the year. The tropics-wide properties of these clusters suggest a connection between the skill of simulating global properties of the monsoon circulation and the skill of simulating the regional scale of Maritime Continent precipitation.
NASA Astrophysics Data System (ADS)
Nengker, T.; Choudhary, A.; Dimri, A. P.
2018-04-01
The ability of an ensemble of five regional climate models (hereafter RCMs) from the Coordinated Regional Climate Downscaling Experiment-South Asia (hereafter CORDEX-SA) to simulate the key features of the present-day (1970-2005) near-surface mean air temperature (Tmean) climatology over the Himalayan region is studied. The purpose of this paper is to understand the consistency of model performance across the ensemble, space, and seasons. For this, a number of statistical measures (trend, correlation, variance, probability distribution functions, etc.) are applied to evaluate the performance of the models against observations, and simultaneously the underlying uncertainties between them, for four different seasons. The most evident finding of the study is a large cold bias (-6 to -8 °C) seen systematically across all the models and across space and time over the Himalayan region. However, these RCMs, with their fine resolution, perform extremely well in capturing the spatial distribution of temperature features, as indicated by a consistently high spatial correlation (greater than 0.9) with the observations in all seasons. In spite of the underestimation of simulated temperature and a general intensification of the cold bias with increasing elevation, the models show a greater rate of warming than the observations throughout the entire altitudinal stretch of the study region. During winter, the simulated rate of warming is even higher at high altitudes. Moreover, a seasonal response of model performance, and of its spatial variability, to elevation is found.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
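One classical performance bound for the marked-graph (decision-free dataflow) formulation used here is that the steady-state iteration period is limited by the worst cycle: the maximum over all directed cycles of total node compute time divided by the tokens circulating in the cycle. A small sketch of that bound using networkx follows; the graph, delays, and token counts are illustrative, not taken from ATAMM itself:

```python
import networkx as nx

G = nx.DiGraph()
delays = {"A": 2.0, "B": 3.0, "C": 1.0}   # node compute times (illustrative)
G.add_edge("A", "B", tokens=0)
G.add_edge("B", "C", tokens=0)
G.add_edge("C", "A", tokens=2)            # initial tokens closing the loop

def period_bound(G, delays):
    """Max over directed cycles of (sum of node delays) / (tokens in cycle)."""
    best = 0.0
    for cycle in nx.simple_cycles(G):
        d = sum(delays[v] for v in cycle)
        m = sum(G[u][v]["tokens"]
                for u, v in zip(cycle, cycle[1:] + cycle[:1]))
        if m > 0:
            best = max(best, d / m)
    return best

print(period_bound(G, delays))  # (2+3+1)/2 = 3.0 time units per iteration
```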
New Integrated Modeling Capabilities: MIDAS' Recent Behavioral Enhancements
NASA Technical Reports Server (NTRS)
Gore, Brian F.; Jarvis, Peter A.
2005-01-01
The Man-machine Integration Design and Analysis System (MIDAS) is an integrated human performance modeling software tool that is based on mechanisms that underlie and cause human behavior. A PC-Windows version of MIDAS has been created that integrates the anthropometric character "Jack (TM)" with MIDAS' validated perceptual and attention mechanisms. MIDAS now models multiple simulated humans engaging in goal-related behaviors. New capabilities include the ability to predict situations in which errors and/or performance decrements are likely due to a variety of factors including concurrent workload and performance influencing factors (PIFs). This paper describes a new model that predicts the effects of microgravity on a mission specialist's performance, and its first application to simulating the task of conducting a Life Sciences experiment in space according to a sequential or parallel schedule of performance.
Search performance is better predicted by tileability than presence of a unique basic feature.
Chang, Honghua; Rosenholtz, Ruth
2016-08-01
Traditional models of visual search, such as feature integration theory (FIT; Treisman & Gelade, 1980), have suggested that a key factor determining task difficulty is whether or not the search target contains a "basic feature" not found in the other display items (distractors). Here we discriminate between such traditional models and our recent texture tiling model (TTM) of search (Rosenholtz, Huang, Raj, Balas, & Ilie, 2012b), by designing new experiments that directly pit these models against each other. Doing so is nontrivial, for two reasons. First, the visual representation in TTM is fully specified and makes clear testable predictions, but its complexity makes getting intuitions difficult. Here we elucidate a rule of thumb for TTM, which enables us to easily design new and interesting search experiments. FIT, on the other hand, is somewhat ill-defined and hard to pin down. To get around this, rather than designing totally new search experiments, we start with five classic experiments that FIT already claims to explain: T among Ls, 2 among 5s, Q among Os, O among Qs, and an orientation/luminance-contrast conjunction search. We find that fairly subtle changes in these search tasks lead to significant changes in performance, in a direction predicted by TTM, providing definitive evidence in favor of the texture tiling model as opposed to traditional views of search.
NASA Astrophysics Data System (ADS)
Varady, Mark; Mantooth, Brent; Pearl, Thomas; Willis, Matthew
2014-03-01
A continuum model of reactive decontamination in absorbing polymeric thin film substrates exposed to the chemical warfare agent O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (known as VX) was developed to assess the performance of various decontaminants. Experiments were performed in conjunction with an inverse analysis method to obtain the necessary model parameters. The experiments involved contaminating a substrate with a fixed VX exposure, applying a decontaminant, followed by a time-resolved, liquid phase extraction of the absorbing substrate to measure the residual contaminant by chromatography. Decontamination model parameters were uniquely determined using the Levenberg-Marquardt nonlinear least squares fitting technique to best fit the experimental time evolution of extracted mass. The model was implemented numerically in both a 2D axisymmetric finite element program and a 1D finite difference code, and it was found that the more computationally efficient 1D implementation was sufficiently accurate. The resulting decontamination model provides an accurate quantification of contaminant concentration profile in the material, which is necessary to assess exposure hazards.
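A minimal sketch of the inverse-analysis step described above: fit a toy first-order decay model of residual agent mass to extraction data using the Levenberg-Marquardt solver. The real model is a reaction-diffusion PDE in the polymer film, and the data and rate constant here are illustrative only:

```python
import numpy as np
from scipy.optimize import least_squares

t_obs = np.array([0., 5., 10., 20., 40., 60.])          # min (hypothetical)
m_obs = np.array([1.0, 0.72, 0.55, 0.31, 0.12, 0.05])   # normalized residual agent

def residuals(p):
    """Misfit between a first-order decay model and the extraction data."""
    m0, k = p
    return m0 * np.exp(-k * t_obs) - m_obs

fit = least_squares(residuals, x0=[1.0, 0.05], method="lm")  # Levenberg-Marquardt
print("m0 = %.3f, k = %.4f 1/min" % tuple(fit.x))
```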
A comparison of optimal MIMO linear and nonlinear models for brain machine interfaces
NASA Astrophysics Data System (ADS)
Kim, S.-P.; Sanchez, J. C.; Rao, Y. N.; Erdogmus, D.; Carmena, J. M.; Lebedev, M. A.; Nicolelis, M. A. L.; Principe, J. C.
2006-06-01
The field of brain-machine interfaces requires the estimation of a mapping from spike trains collected in motor cortex areas to the hand kinematics of the behaving animal. This paper presents a systematic investigation of several linear (Wiener filter, LMS adaptive filters, gamma filter, subspace Wiener filters) and nonlinear models (time-delay neural network and local linear switching models) applied to datasets from two experiments in monkeys performing motor tasks (reaching for food and target hitting). Ensembles of 100-200 cortical neurons were simultaneously recorded in these experiments, and even larger neuronal samples are anticipated in the future. Due to the large size of the models (thousands of parameters), the major issue studied was the generalization performance. Every parameter of the models (not only the weights) was selected optimally using signal processing and machine learning techniques. The models were also compared statistically with respect to the Wiener filter as the baseline. Each of the optimization procedures produced improvements over that baseline for either one of the two datasets or both.
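For concreteness, the Wiener-filter baseline referred to above is a linear map from lagged spike counts to kinematics; below is a minimal ridge-regularized least-squares version on synthetic data. Dimensions and the regularization constant are illustrative, and the paper additionally tunes such hyperparameters with signal processing and machine learning techniques:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_units, n_lags = 2000, 100, 10
spikes = rng.poisson(0.3, size=(T, n_units))   # binned spike counts
kin = rng.standard_normal((T, 2))              # stand-in for hand x/y velocity

def lagged_design(S, L):
    """Row t holds counts from bins t-L+1 .. t for every unit."""
    T, N = S.shape
    X = np.zeros((T - L + 1, N * L))
    for i in range(L):
        X[:, i * N:(i + 1) * N] = S[L - 1 - i: T - i]
    return X

X = lagged_design(spikes, n_lags)
Y = kin[n_lags - 1:]
lam = 1.0                                      # ridge term guards generalization
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
pred = X @ W                                   # decoded kinematics
```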
Aerogel Algorithm for Shrapnel Penetration Experiments
NASA Astrophysics Data System (ADS)
Tokheim, R. E.; Erlich, D. C.; Curran, D. R.; Tobin, M.; Eder, D.
2004-07-01
To aid in assessing shrapnel produced by laser-irradiated targets, we have performed shrapnel collection "BB gun" experiments in aerogel and have developed a simple analytical model for deceleration of the shrapnel particles in the aerogel. The model is similar in approach to that of Anderson and Ahrens (J. Geophys. Res., 99(E1), 2063-2071, Jan. 1994) and accounts for drag, aerogel compaction heating, and the velocity threshold for shrapnel ablation due to conductive heating. Model predictions are correlated with the BB gun results at impact velocities up to a few hundred m/s and with NASA data for impact velocities up to 6 km/s. The model shows promising agreement with the data and will be used to plan and interpret future experiments.
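A stripped-down deceleration model of this kind is sketched below: quadratic drag plus a constant compaction-strength term, integrated until the particle stops. The paper's model additionally includes compaction heating and an ablation velocity threshold, which are omitted here, and all material parameters are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

rho_a, Cd, sigma_c = 100.0, 1.0, 1.0e5    # aerogel density [kg/m^3], drag coeff., crush strength [Pa]
r = 1.0e-3                                 # particle radius [m]
m = 7800.0 * (4.0 / 3.0) * np.pi * r**3    # steel-like particle mass [kg]
A = np.pi * r**2                           # frontal area [m^2]

def rhs(t, y):
    """y = [depth, velocity]; v^2 drag plus constant strength resistance."""
    x, v = y
    decel = (0.5 * rho_a * Cd * A * v**2 + sigma_c * A) / m
    return [v, -decel]

def stopped(t, y):
    return y[1]                            # event: velocity crosses zero
stopped.terminal = True

sol = solve_ivp(rhs, [0.0, 1.0], [0.0, 500.0], events=stopped, max_step=1e-5)
print(f"penetration depth ~ {sol.y[0, -1] * 100:.1f} cm")
```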
NASA Astrophysics Data System (ADS)
Ghimire, S.; Choudhary, A.; Dimri, A. P.
2018-04-01
An analysis of regional climate simulations has been carried out to evaluate the ability of 11 Coordinated Regional Climate Downscaling Experiment South Asia (CORDEX-South Asia) experiments, along with their ensemble, to reproduce precipitation from June to September (JJAS) over the Himalayan region. This suite of 11 combinations comes from six regional climate models (RCMs) driven with initial and boundary conditions from 10 different global climate models, and is collectively referred to here as the 11 CORDEX South Asia experiments. All the RCMs use a similar domain and a similar spatial resolution of 0.44° (~50 km). The set of experiments is used to study the precipitation associated with the Indian summer monsoon (ISM) over the study region, since the ISM drives summertime precipitation over the Himalayan region, which in turn sustains habitats, population, crops, glaciers, and hydrology. In addition, the summer monsoon precipitation climatology over the Himalayan region has so far not been studied with CORDEX data. This study was therefore initiated to evaluate the ability of the experiments and their ensemble to reproduce the characteristics of summer monsoon precipitation over the Himalayan region for the present climate (1970-2005). The precipitation climatology, annual precipitation cycles, and interannual variabilities from each simulation have been assessed against the gridded observational dataset Asian Precipitation-Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources (APHRODITE) for the given time period. Further, after selection of the better-performing experiment, the frequency distribution of precipitation was also studied. An approach has also been made to study the degree of agreement among the individual experiments as a way to quantify the uncertainty among them. The experiments show a wide variation, among themselves and individually, over time and space in simulating the precipitation distribution over the study region; noticeably, along the foothills of the Himalayas all the simulations show a dry precipitation bias against the corresponding observations, while towards higher-elevation regions they in general show a wet bias. The experiment driven by the EC-EARTH global climate model and downscaled using the Rossby Centre regional Atmospheric model version 4, developed by the Swedish Meteorological and Hydrological Institute (SMHI-RCA4), simulates precipitation in close correspondence with the observations. The ensemble outperforms the individual experiments. Correspondingly, different kinds of statistical analysis (spatial and temporal correlation, Taylor diagrams, frequency distributions, and scatter plots) have been performed to compare the model output with observations and to characterize the associated resemblance, robustness, and dynamics statistically. Through bias and ensemble-spread analyses, an estimate of the uncertainty of the model fields and of the degree of agreement among them has also been obtained. Overall, the study suggests that these experiments capture the evolution and structure of precipitation over the Himalayan region with a certain degree of uncertainty.
Prosthetic Hand Technology-Phase II
2013-02-01
… Corporation; model number HV 1100. The motor location of the ring finger was identified and chosen for the experiments. The EMG detection system … model parameters for the experiments, where the subject performed a random series of flexions of the ring finger. Figure 1.6 shows the output of the … obtained from specific motor unit locations corresponding to the index, middle, and ring fingers, and the corresponding force data is presented. This is a …
NASA Astrophysics Data System (ADS)
Clark, D. S.; Hinkel, D. E.; Eder, D. C.; Jones, O. S.; Haan, S. W.; Hammel, B. A.; Marinak, M. M.; Milovich, J. L.; Robey, H. F.; Suter, L. J.; Town, R. P. J.
2013-05-01
More than two dozen inertial confinement fusion ignition experiments with cryogenic deuterium-tritium layers have now been performed on the National Ignition Facility (NIF) [G. H. Miller et al., Opt. Eng. 43, 2841 (2004)]. Each of these yields a wealth of data, including neutron yield, neutron down-scatter fraction, burn-averaged ion temperature, x-ray image shape and size, primary and down-scattered neutron image shape and size, etc. Compared to 2-D radiation-hydrodynamics simulations modeling both the hohlraum and the capsule implosion, however, the measured capsule yield is usually lower by a factor of 5 to 10, and the measured ion temperature differs from the simulations, while most other observables are well matched between experiment and simulation. In an effort to understand this discrepancy, we perform detailed post-shot simulations of a subset of NIF implosion experiments. These two-dimensional HYDRA simulations [M. M. Marinak et al., Phys. Plasmas 8, 2275 (2001)] of the capsule only represent as accurately as possible the conditions of a given experiment, including the as-shot capsule metrology, capsule surface roughness, and ice layer defects as seeds for the growth of hydrodynamic instabilities. The radiation drive used in these capsule-only simulations can be tuned to reproduce quite well the measured implosion timing, kinematics, and low-mode asymmetry. In order to simulate the experiments as accurately as possible, a limited number of fully three-dimensional implosion simulations are also being performed. Despite detailed efforts to incorporate all of the effects known and believed to be important in determining implosion performance, substantial yield discrepancies remain between experiment and simulation. Some possible alternate scenarios and effects that could resolve this discrepancy are discussed.
Miskovic, Ljubisa; Alff-Tuomala, Susanne; Soh, Keng Cher; Barth, Dorothee; Salusjärvi, Laura; Pitkänen, Juha-Pekka; Ruohonen, Laura; Penttilä, Merja; Hatzimanikatis, Vassily
2017-01-01
Recent advancements in omics measurement technologies have led to an ever-increasing amount of available experimental data that necessitate systems-oriented methodologies for efficient and systematic integration of data into consistent large-scale kinetic models. These models can help us to uncover new insights into cellular physiology and also to assist in the rational design of bioreactor or fermentation processes. The Optimization and Risk Analysis of Complex Living Entities (ORACLE) framework for the construction of large-scale kinetic models can be used as guidance for formulating alternative metabolic engineering strategies. We used ORACLE in a metabolic engineering problem: improvement of the xylose uptake rate during mixed glucose-xylose consumption in a recombinant Saccharomyces cerevisiae strain. Using the data from bioreactor fermentations, we characterized network flux and concentration profiles representing possible physiological states of the analyzed strain. We then identified enzymes that could lead to improved flux through xylose transporters (XTR). For some of the identified enzymes, including hexokinase (HXK), we could not deduce whether their control over XTR was positive or negative. We thus performed a follow-up experiment and found that HXK2 deletion improves the xylose uptake rate. The data from the performed experiments were then used to prune the kinetic models, and the predictions of the pruned population of kinetic models were in agreement with the experimental data collected on the HXK2-deficient S. cerevisiae strain. We present a design-build-test cycle composed of modeling efforts and experiments with a glucose-xylose co-utilizing recombinant S. cerevisiae and its HXK2-deficient mutant that allowed us to uncover interdependencies between upper glycolysis and the xylose uptake pathway. Through this cycle, we also obtained kinetic models with improved prediction capabilities. The present study demonstrates the potential of integrated "modeling and experiments" systems biology approaches, which can be applied to diverse applications ranging from biotechnology to drug discovery.
NASA Instep/mdmsc Jitter Suppression Experiment (JITTER)
NASA Technical Reports Server (NTRS)
White, Edward V.
1992-01-01
The objectives are the following: (1) to develop and demonstrate the in-space performance of both passive and active damping systems for suppression of micro-amplitude vibration on an actual application structure, operating despite uncertain dynamics and uncertain disturbance characteristics; and (2) to correlate ground and in-space performance; the performance metric is vibration attenuation. The goals are to achieve vibration suppression equivalent to 5 percent passive damping in selected modes and 15 percent active damping in selected modes. Various aspects of this experiment are presented in viewgraph form.
Turbulence modeling and experiments
NASA Technical Reports Server (NTRS)
Shabbir, Aamir
1992-01-01
The best way of verifying turbulence models is to make a direct comparison between the various terms and their models. The success of this approach depends upon the availability of data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation will depend upon the accuracy of all the modeled terms and constants. Because of this, it is sometimes difficult to find the cause of poor performance by a model. However, such a calculation is still meaningful in other ways, as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using second-order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This therefore includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. We examine which of the three models performs better, along with their weaknesses, if any. The other work reported deals with the experimental balances of the second-moment equations for a buoyant plume. Despite the tremendous amount of activity toward second-order closure modeling of turbulence, very little experimental information is available about the budgets of the second-moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from the experiment, the pressure correlations can be obtained as the closing terms. This is the closest we can come to obtaining these terms from experiment, and despite the measurement errors which might be present in such balances, the resulting information will be extremely useful for turbulence modelers. The purpose of this part of the work was to provide such balances of the Reynolds stress and heat flux equations for the buoyant plume.
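To make the closures under comparison concrete, the sketch below (ours, not code from the report) evaluates the Rotta-type "slow" pressure-strain term that the LRR, IP, and SSG models all contain in some form; the constant C1 and the stress values are illustrative assumptions.

```python
import numpy as np

def rotta_slow_term(R, eps, C1=1.8):
    """Rotta 'slow' pressure-strain: phi_ij = -C1*(eps/k)*(R_ij - (2/3)*k*d_ij).

    C1 is model-dependent; 1.8 is the classical Rotta/LRR value.
    """
    k = 0.5 * np.trace(R)                       # turbulent kinetic energy
    return -C1 * (eps / k) * (R - (2.0 / 3.0) * k * np.eye(3))

# Hypothetical anisotropic Reynolds stress state (m^2/s^2) and dissipation
R = np.diag([1.2, 0.6, 0.6])
print(rotta_slow_term(R, eps=0.1))              # drives R toward isotropy
```

The sign of the result confirms the term's role: components above their isotropic share are damped, those below are fed.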
Bell, M A; Fox, N A
1997-12-01
This work was designed to investigate individual differences in hands-and-knees crawling and frontal brain electrical activity with respect to object permanence performance in 76 eight-month-old infants. Four groups of infants (one prelocomotor and 3 with varying lengths of hands-and-knees crawling experience) were tested on an object permanence scale in a research design similar to that used by Kermoian and Campos (1988). In addition, baseline EEG was recorded and used as an indicator of brain development, as in the Bell and Fox (1992) longitudinal study. Individual differences in frontal and occipital EEG power and in locomotor experience were associated with performance on the object permanence task. Infants successful at A-not-B exhibited greater frontal EEG power and greater occipital EEG power than unsuccessful infants. In contrast to Kermoian and Campos (1988), who noted that long-term crawling experience was associated with higher performance on an object permanence scale, infants in this study with any amount of hands-and-knees crawling experience performed at a higher level on the object permanence scale than prelocomotor infants. There was no interaction among brain electrical activity, locomotor experience, and object permanence performance. These data highlight the value of electrophysiological research and the need for a brain-behavior model of object permanence performance that incorporates both electrophysiological and behavioral factors.
NASA Technical Reports Server (NTRS)
Stefanescu, D. M.; Catalina, A. V.; Juretzko, Frank R.; Sen, Subhayu; Curreri, P. A.
2003-01-01
The objectives of the work on Particle Engulfment and Pushing by Solidifying Interfaces (PEP) include: 1) to obtain a fundamental understanding of the physics of particle pushing and engulfment, 2) to develop mathematical models to describe the phenomenon, and 3) to perform critical experiments in the microgravity environment of space to provide benchmark data for model validation. Successful completion of this project will yield vital information relevant to diverse areas of terrestrial applications. With PEP being a long-term research effort, this report will focus on advances in the theoretical treatment of the solid/liquid interface interaction with an approaching particle, experimental validation of some aspects of the developed models, and the experimental design aspects of future experiments to be performed on board the International Space Station.
Network congestion control algorithm based on Actor-Critic reinforcement learning model
NASA Astrophysics Data System (ADS)
Xu, Tao; Gong, Lina; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen
2018-04-01
Aiming at the network congestion control problem, a congestion control algorithm based on the Actor-Critic reinforcement learning model is designed. By incorporating a genetic algorithm into the congestion control strategy, network congestion can be detected and prevented more effectively. Based on Actor-Critic reinforcement learning, a simulation experiment for the network congestion control algorithm is designed. The simulation experiments verify that the AQM controller can predict the dynamic characteristics of the network system. Moreover, a learning strategy is adopted to optimize network performance: the packet-dropping probability is adjusted adaptively so as to improve network performance and avoid congestion. Based on these findings, it is concluded that the network congestion control algorithm based on the Actor-Critic reinforcement learning model can effectively avoid the occurrence of TCP network congestion.
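Since the abstract describes the control loop only at a high level, here is a toy sketch (our construction, with made-up queue dynamics, reward, and hyperparameters) of how an Actor-Critic loop can adapt an AQM dropping probability: the critic learns state values from TD errors, and the same TD errors adjust the actor's softmax policy over drop-probability increments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 21
actions = np.array([-0.02, 0.0, 0.02])   # adjustments to the drop probability
H = np.zeros((n_states, 3))              # actor: softmax preferences
V = np.zeros(n_states)                   # critic: state values
alpha, beta, gamma = 0.1, 0.05, 0.95     # learning rates and discount
q, p, q_ref, q_max = 0.0, 0.1, 20.0, 100.0

def state(queue):
    """Discretize queue occupancy into one of n_states bins."""
    return min(int(queue / q_max * n_states), n_states - 1)

for step in range(20000):
    s = state(q)
    probs = np.exp(H[s] - H[s].max()); probs /= probs.sum()  # softmax policy
    a = rng.choice(3, p=probs)
    p = float(np.clip(p + actions[a], 0.0, 1.0))
    arrivals = rng.poisson(6)                    # offered load (pkts/step)
    accepted = rng.binomial(arrivals, 1.0 - p)   # AQM drops packets early
    q = min(max(q + accepted - 5.0, 0.0), q_max) # service rate: 5 pkts/step
    r = -abs(q - q_ref) / q_ref                  # penalize queue deviation
    s2 = state(q)
    td = r + gamma * V[s2] - V[s]                # TD error drives both updates
    V[s] += alpha * td                           # critic update
    H[s] -= beta * td * probs                    # actor: policy-gradient step
    H[s, a] += beta * td                         # (one-hot minus probs)

print(f"settled queue = {q:.1f} pkts, drop probability = {p:.2f}")
```

With a mean arrival rate above the service rate, the policy has to settle on a nonzero drop probability to keep the queue near its reference level.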
Models for 31-Mode PVDF Energy Harvester for Wearable Applications
Zhao, Jingjing; You, Zheng
2014-01-01
Wearable electronics are increasingly widely used, leading to a growing need for portable power supplies. As a clean and renewable power source, a piezoelectric energy harvester can convert mechanical energy directly into electrical energy, and harvesters based on polyvinylidene difluoride (PVDF) operating in 31-mode are well suited to harvesting energy from human motion. This paper establishes a series of theoretical models to predict the performance of a 31-mode PVDF energy harvester. Among them, the energy storage model accurately predicts the energy collected during operation of the harvester. Based on theoretical study and experimental investigation, two approaches to improving the energy harvesting performance have been found. Furthermore, experimental results demonstrate that the accuracies of the models are better than 95%. PMID:25114981
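As a rough illustration of the physics behind a 31-mode model (a back-of-envelope sketch of our own, not one of the paper's models), the following estimates the open-circuit voltage and per-cycle energy of a PVDF film from typical published material constants; the geometry and the strain amplitude are assumptions.

```python
# Typical PVDF constants (order of magnitude): d31 ~ 23 pC/N, eps_r ~ 12,
# Young's modulus ~ 3 GPa. Geometry and strain below are assumed values.
d31, eps_r, Y = 23e-12, 12.0, 3e9
eps0 = 8.854e-12
L, W, t = 0.05, 0.02, 110e-6        # film length, width, thickness (m)
strain = 1e-3                       # peak strain from body motion (assumed)

stress = Y * strain                 # T1 = Y * S1 (uniaxial, linear elastic)
Q = d31 * stress * (L * W)          # charge generated on the electrodes
Cp = eps_r * eps0 * (L * W) / t     # parallel-plate film capacitance
V_oc = Q / Cp                       # open-circuit voltage
U = 0.5 * Cp * V_oc**2              # electrostatic energy at peak strain
print(f"V_oc ~ {V_oc:.0f} V, energy per cycle ~ {2 * U * 1e6:.1f} uJ")
```

The microjoule-per-cycle scale explains why an accurate energy storage model matters: the conditioning circuit must accumulate many such cycles with minimal loss.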
Modeling, system identification, and control of ASTREX
NASA Technical Reports Server (NTRS)
Abhyankar, Nandu S.; Ramakrishnan, J.; Byun, K. W.; Das, A.; Cossey, Derek F.; Berg, J.
1993-01-01
The modeling, system identification and controller design aspects of the ASTREX precision space structure are presented in this work. Modeling of ASTREX is performed using NASTRAN, TREETOPS and I-DEAS. The models generated range from simple linear time-invariant models to nonlinear models used for large angle simulations. Identification in both the time and frequency domains is presented. The experimental set up and the results from the identification experiments are included. Finally, controller design for ASTREX is presented; simulation results using this optimal controller demonstrate the controller performance. Future directions and plans for the facility are also addressed.
Methylphenidate improves performance on the radial arm maze in periadolescent rats
Dow-Edwards, Diana L.; Weedon, Jeremy C.; Hellmann, Esther
2008-01-01
Methylphenidate (Ritalin; MPD) is one of the most commonly prescribed drugs in childhood and adolescence, and many clinical studies have documented its efficacy. Due to the limitations of conducting invasive research in humans, animal models can be beneficial for studying drug effects. However, few animal studies have demonstrated the effects of methylphenidate on cognitive processes. The objective of this study was to find a dose of methylphenidate that was effective in improving performance on a spatial working memory task when administered orally to periadolescent rats. Therefore, we dosed subjects with methylphenidate at 1 or 3 mg/kg/day via gastric intubation from postnatal day 22 to 59 and assessed the effects of the drug on radial arm maze performance each day. To enhance performance overall, a second experiment was conducted in which the subjects were moderately food restricted (to 90% of free-feeding weight). Results of Experiment 1 show that during the first week of testing only the 3 mg/kg MPD-treated males showed improved performance (entries prior to repeated entry) when fed ad libitum and housed in pairs, while the same dose significantly improved performance in both males and females under conditions of food restriction and individual housing in Experiment 2. MPD also produced a pattern of increased errors and arms entered during the first week, especially in Experiment 2. MPD increased locomotor activity when tested at postnatal day 60 in both experiments. The data suggest that 3 mg/kg oral methylphenidate improves performance on a spatial cognitive task only early in treatment in the rat. While males show improvement under conditions of both high and low motivation, females only show MPD effects when highly motivated. Hypothetically, methylphenidate may improve radial arm maze performance through increased attention and improved spatial working memory and/or alterations in locomotion, reactivity to novelty, or anxiety. Regardless, the study supports the utility of the rat as a suitable model to examine the effects of low-dose oral MPD. PMID:18538539
Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew
2013-09-01
Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of and interpret complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses.
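For readers unfamiliar with the model class, here is a minimal simulation sketch of a two-bound sequential sampling (diffusion) process of the general kind fitted in this work; the drift, bound, and non-decision-time values are illustrative assumptions, not the authors' estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def diffusion_trial(v, a, ndt, dt=0.001, sigma=1.0):
    """One trial: evidence drifts at rate v with noise sigma until it hits
    0 or a; the accumulator starts midway between the bounds."""
    x, t = a / 2.0, 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("conflict" if x >= a else "no conflict", t + ndt)

trials = [diffusion_trial(v=1.2, a=1.5, ndt=0.3) for _ in range(2000)]
rts = [rt for c, rt in trials if c == "conflict"]
print(f"P(conflict) = {len(rts) / len(trials):.2f}, "
      f"mean RT = {np.mean(rts):.2f} s")
```

In this framework, "adapting to one's own uncertainty" corresponds to moving the bounds: wider bounds mean more evidence is collected before responding, trading speed for accuracy.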
Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús
2009-01-01
Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Besides, large-scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short-term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large-scale test experiment performed on these cells, producing false expression changes. A random effects model was therefore developed, including four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at the scan and peptide levels was negligible in three large-scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative for performing quantitative proteomics studies at a depth of several thousand proteins. PMID:19181660
First Results of the Regional Earthquake Likelihood Models Experiment
Schorlemmer, D.; Zechar, J.D.; Werner, M.J.; Field, E.H.; Jackson, D.D.; Jordan, T.H.
2010-01-01
The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal that is demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were meant to be applied for 5 years), we find them interesting: most of the models are consistent with the observations, and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.
Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS
NASA Astrophysics Data System (ADS)
Barani, T.; Bruschi, E.; Pizzocri, D.; Pastore, G.; Van Uffelen, P.; Williamson, R. L.; Luzzi, L.
2017-04-01
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel performance analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. In particular, experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of the burst release process in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of conventional diffusion-based models to introduce the burst release effect. The concept and governing equations of the model are presented, and the sensitivity of the results to the newly introduced parameters is evaluated through an analytic sensitivity analysis. The model is assessed for application to integral fuel rod analysis by implementation in two structurally different fuel performance codes: BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D code). Model assessment is based on the analysis of 19 light water reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results show an improvement in both the quantitative predictions of integral fuel rod FGR and the qualitative representation of the FGR kinetics with the transient model relative to the canonical, purely diffusion-based models of the codes. The overall quantitative improvement of the integral FGR predictions in the two codes is comparable. Moreover, calculated radial profiles of xenon concentration after irradiation are investigated and compared to experimental data, illustrating the underlying representation of the physical mechanisms of burst release.
Performance analysis of jump-gliding locomotion for miniature robotics.
Vidyasagar, A; Zufferey, Jean-Christophe; Floreano, Dario; Kovač, M
2015-03-26
Recent work suggests that jumping locomotion in combination with a gliding phase can be used as an effective mobility principle in robotics. Compared to pure jumping without a gliding phase, the potential benefits of hybrid jump-gliding locomotion include the ability to extend the distance travelled and to reduce the potentially damaging impact forces upon landing. This publication evaluates the performance of jump-gliding locomotion and provides models for the analysis of the relevant flight dynamics. It also defines a jump-gliding envelope that encompasses the range that can be achieved with jump-gliding robots and that can be used to evaluate the performance and improvement potential of jump-gliding robots. We present first a planar dynamic model and then a simplified closed-form model, which allow for quantification of the distance travelled and the impact energy on landing. In order to validate the predictions of these models, we performed experiments using a novel jump-gliding robot, named the 'EPFL jump-glider'. It has a mass of 16.5 g and is able to perform jumps from elevated positions, perform steered gliding flight, land safely and traverse on the ground by repetitive jumping. The experiments indicate that the developed jump-gliding model fits very well with the measured flight data from the EPFL jump-glider, confirming the benefits of jump-gliding locomotion for mobile robotics. The jump-gliding envelope considerations indicate that the EPFL jump-glider, when traversing from a 2 m height, reaches 74.3% of the optimal jump-gliding distance, compared to pure jumping without a gliding phase, which only reaches 33.4% of the optimal jump-gliding distance. Methods of further improving flight performance based on the models and inspiration from biological systems are presented, providing mechanical design pathways for future jump-gliding robot designs.
NASA Astrophysics Data System (ADS)
Popke, Dagmar; Bony, Sandrine; Mauritsen, Thorsten; Stevens, Bjorn
2015-04-01
Model simulations with state-of-the-art general circulation models reveal a strong disagreement concerning the simulated regional precipitation patterns and their changes with warming. The deviating precipitation response persists even when reducing the complexity of the model experiment to aquaplanet simulations with forced sea surface temperatures (Stevens and Bony, 2013). To assess feedbacks between clouds and radiation on precipitation responses, we analyze data from five models performing the aquaplanet simulations of the Clouds On Off Klima Intercomparison Experiment (COOKIE), in which the interaction of clouds and radiation is inhibited. Although cloud radiative effects are then disabled, the precipitation patterns among models are as diverse as with cloud radiative effects switched on. Disentangling differing model responses in such simplified experiments thus appears to be key to better understanding the simulated regional precipitation in more standard configurations. By analyzing the local moisture and moist static energy budgets in the COOKIE experiments, we investigate likely causes for the disagreement among models. Reference: Stevens, B. and Bony, S.: What Are Climate Models Missing?, Science, 340, 1053-1054, 2013.
A prospective earthquake forecast experiment in the western Pacific
NASA Astrophysics Data System (ADS)
Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan
2012-09-01
Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw ≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes, and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed-seismicity model (TripleS) performs best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
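As an illustration of the likelihood-based metrics mentioned above, the sketch below computes the joint log-likelihood of a gridded rate forecast under the usual assumption of independent Poisson cells; the rates and counts are invented.

```python
import numpy as np
from scipy.stats import poisson

forecast = np.array([0.02, 0.15, 0.40, 0.08, 0.35])  # expected counts per cell
observed = np.array([0,    0,    1,    0,    1])     # target earthquakes

log_like = poisson.logpmf(observed, forecast).sum()  # joint log-likelihood
print(f"joint log-likelihood = {log_like:.3f}")
```

Comparing two forecasts on the same catalogue then reduces to comparing per-cell log-likelihood differences, which is where paired tests such as Student's t-test and the Wilcoxon signed-rank test enter.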
The Objectives of NASA's Living with a Star Space Environment Testbed
NASA Technical Reports Server (NTRS)
Barth, Janet L.; LaBel, Kenneth A.; Brewer, Dana; Kauffman, Billy; Howard, Regan; Griffin, Geoff; Day, John H. (Technical Monitor)
2001-01-01
NASA is planning to fly a series of Space Environment Testbeds (SET) as part of the Living With A Star (LWS) Program. The goal of the testbeds is to improve and develop capabilities to mitigate and/or accommodate the effects of solar variability in spacecraft and avionics design and operation. This will be accomplished by performing technology validation in space to enable routine operations, characterize technology performance in space, and improve and develop models, guidelines and databases. The anticipated result of the LWS/SET program is improved spacecraft performance, design, and operation for survival of the radiation, spacecraft charging, meteoroid, orbital debris and thermosphere/ionosphere environments. The program calls for a series of NASA Research Announcements (NRAs) to be issued to solicit flight validation experiments, improvements in environment effects models and guidelines, and collateral environment measurements. The selected flight experiments may fly on the SET experiment carriers and on flights of opportunity on other commercial and technology missions. This paper presents the status of the project so far, including a description of the types of experiments that are intended to fly on SET-1 and a description of the SET-1 carrier parameters.
Ultrasonic brain therapy: First trans-skull in vivo experiments on sheep using adaptive focusing
NASA Astrophysics Data System (ADS)
Pernot, Mathieu; Aubry, Jean-Francois; Tanter, Michael; Fink, Mathias; Boch, Anne-Laure; Kujas, Michèle
2004-05-01
A high-power prototype dedicated to trans-skull therapy has been tested in vivo on 20 sheep. The array is made of 200 high-power transducers working at a 1-MHz central frequency and is able to reach 260 bars at the focus in water. An echographic array connected to a Philips HDI 1000 system has been inserted in the therapeutic array in order to perform real-time monitoring of the treatment. A complete craniotomy was performed on half of the treated animals in order to provide a reference model. On the other animals, a minimally invasive procedure was performed by means of a time-reversal experiment: a hydrophone was inserted at the target inside the brain through a 1-mm2 craniotomy. A time-reversal experiment was then conducted through the skull bone with the therapeutic array to treat the targeted point. For all the animals, a specified region around the target was treated using electronic beam steering. Animals were finally divided into three groups and sacrificed, respectively, 0, 1, and 2 weeks after treatment. Histological examination confirmed tissue damage. These in vivo experiments highlight the strong potential of high-power time-reversal technology.
Off-design Performance Analysis of Multi-Stage Transonic Axial Compressors
NASA Astrophysics Data System (ADS)
Du, W. H.; Wu, H.; Zhang, L.
Because of the complex flow fields and component interactions in modern gas turbine engines, extensive experiments are required to validate performance and stability. The experimental process can become expensive and complex. Modeling and simulation of gas turbine engines are a way to reduce experimental costs, provide fidelity and enhance the quality of essential experiments. The flow field of a transonic compressor contains all of the flow aspects that are difficult to predict: boundary-layer transition and separation, shock-boundary-layer interactions, and large flow unsteadiness. Accurate transonic axial compressor off-design performance prediction is especially difficult, due in large part to three-dimensional blade design and the resulting flow field. Although recent advancements in computer capacity have brought computational fluid dynamics to the forefront of turbomachinery design and analysis, the grid and the turbulence model still limit Reynolds-averaged Navier-Stokes (RANS) approximations in the multi-stage transonic axial compressor flow field. Streamline curvature methods therefore remain the dominant numerical approach and an important tool for turbomachinery analysis and design, and it is generally accepted that streamline curvature solution techniques will provide satisfactory flow prediction as long as the losses, deviation and blockage are accurately predicted.
Binocular combination of luminance profiles
Ding, Jian; Levi, Dennis M.
2017-01-01
We develop and test a new two-dimensional model for binocular combination of the two eyes' luminance profiles. For first-order stimuli, the model assumes that one eye's luminance profile first goes through a luminance compressor, receives gain-control and gain-enhancement from the other eye, and is then linearly combined with the other eye's output profile. For second-order stimuli, rectification is added in the signal path of the model before the binocular combination site. Both the total contrast and luminance energies, weighted sums over both the space and spatial-frequency domains, were used in the interocular gain-control, while only the total contrast energy was used in the interocular gain-enhancement. To challenge the model, we performed a binocular brightness matching experiment over a large range of background and target luminances. The target stimulus was a dichoptic disc with a sharp edge that has an increment or decrement luminance relative to its background. The disc's interocular luminance ratio varied from trial to trial. To refine the model we tested three luminance compressors, five nested binocular combination models (including the Ding–Sperling and the DSKL models), and examined the presence or absence of total luminance energy in the model. We found that (1) installing a luminance compressor, either a logarithmic luminance function or luminance gain-control, (2) including both contrast and luminance energies, and (3) adding interocular gain-enhancement (the DSKL model) to a combined model significantly improved its performance. The combined model provides a systematic account of binocular luminance summation over a large range of luminance input levels. It gives a unified explanation of Fechner's paradox, observed on a dark background, and a winner-take-all phenomenon, observed on a light background. To further test the model, we conducted two additional experiments: luminance summation of discs with asymmetric contour information (Experiment 2), similar to Levelt (1965), and binocular combination of second-order contrast-modulated gratings (Experiment 3). We used the model obtained in Experiment 1 to predict the results of Experiments 2 and 3 and the results of our previous studies. Model simulations further refined the contrast space weight and contrast sensitivity functions that are installed in the model, and provide a reasonable account for rebalancing of imbalanced binocular vision by reducing the mean luminance in the dominant eye. PMID:29098293
Image Discrimination Predictions of a Single Channel Model with Contrast Gain Control
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Null, Cynthia H.
1995-01-01
Image discrimination models predict the number of just-noticeable-differences between two images. We report the predictions of a single channel model with contrast masking for a range of standard discrimination experiments. Despite its computational simplicity, this model has performed as well as a multiple channel model in an object detection task.
NASA Technical Reports Server (NTRS)
Prive, Nikki; Errico, R. M.; Carvalho, D.
2018-01-01
The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) has spent more than a decade developing and implementing a global Observing System Simulation Experiment (OSSE) framework for use in evaluating both new observation types and the behavior of data assimilation systems. The NASA/GMAO OSSE has constantly evolved to reflect changes in the Gridpoint Statistical Interpolation data assimilation system, the Goddard Earth Observing System model, version 5 (GEOS-5), and the real-world observational network. Software and observational datasets for the GMAO OSSE are publicly available, along with a technical report. Substantial modifications have recently been made to the NASA/GMAO OSSE framework, including the character of synthetic observation errors, new instrument types, and more sophisticated atmospheric wind vectors. These improvements will be described, along with the overall performance of the current OSSE. Lessons learned from investigations into correlated errors and model error will be discussed.
Deflagration to Detonation Transition (DDT) Simulations of HMX Powder Using the HERMES Model
NASA Astrophysics Data System (ADS)
White, Bradley; Reaugh, John; Tringe, Joseph
2017-06-01
We performed computer simulations of DDT experiments with Class I HMX powder using the HERMES model (High Explosive Response to MEchanical Stimulus) in ALE3D. Parameters for the model were fitted to the limited available mechanical property data of the low-density powder, and to the Shock to Detonation Transition (SDT) test results. The DDT tests were carried out in steel-capped polycarbonate tubes. This arrangement permits direct observation of the event using both flash X-ray radiography and high speed camera imaging, and provides a stringent test of the model. We found the calculated detonation transition to be qualitatively similar to experiment. Through simulation we also explored the effects of confinement strength, the HMX particle size distribution and porosity on the computed detonation transition location. This work was performed under the auspices of the US DOE by LLNL under Contract DE-AC52-07NA27344.
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Choi, Sang H.; Chrisman, Dan A., Jr.; Samms, Richard W.
1987-01-01
Dynamic models and computer simulations were developed for the radiometric sensors utilized in the Earth Radiation Budget Experiment (ERBE). The models were developed to understand performance, improve measurement accuracy by updating model parameters and provide the constants needed for the count conversion algorithms. Model simulations were compared with the sensor's actual responses demonstrated in the ground and inflight calibrations. The models consider thermal and radiative exchange effects, surface specularity, spectral dependence of a filter, radiative interactions among an enclosure's nodes, partial specular and diffuse enclosure surface characteristics and steady-state and transient sensor responses. Relatively few sensor nodes were chosen for the models since there is an accuracy tradeoff between increasing the number of nodes and approximating parameters such as the sensor's size, material properties, geometry, and enclosure surface characteristics. Given that the temperature gradients within a node and between nodes are small enough, approximating with only a few nodes does not jeopardize the accuracy required to perform the parameter estimates and error analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Jovanca J.; Bishop, Joseph E.
2013-11-01
This report summarizes the work performed by the graduate student Jovanca Smith during a summer internship in the summer of 2012 with the aid of mentor Joe Bishop. The projects were a two-part endeavor that focused on the use of the numerical model called the Lattice Discrete Particle Model (LDPM). The LDPM is a discrete meso-scale model currently used at Northwestern University and the ERDC to model the heterogeneous quasi-brittle material, concrete. In the first part of the project, LDPM was compared to the Karagozian and Case Concrete Model (K&C) used in Presto, an explicit dynamics finite-element code developed at Sandia National Laboratories. In order to make this comparison, a series of quasi-static numerical experiments were performed, namely unconfined uniaxial compression tests on four varied cube specimen sizes, three-point bending notched experiments on three proportional specimen sizes, and six triaxial compression tests on a cylindrical specimen. The second part of this project focused on the application of LDPM to simulate projectile perforation on an ultra-high-performance concrete called CORTUF. This application illustrates the strengths of LDPM over traditional continuum models.
Hands-On Exercise in Environmental Structural Geology Using a Fracture Block Model.
ERIC Educational Resources Information Center
Gates, Alexander E.
2001-01-01
Describes the use of a scale analog model of an actual fractured rock reservoir to replace paper copies of fracture maps in the structural geology curriculum. Discusses the merits of the model in enabling students to gain experience performing standard structural analyses. (DDR)
A Mercury Model of Atmospheric Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Alex B.; Chodash, Perry A.; Procassini, R. J.
Using the particle transport code Mercury, accurate models were built of the two sources used in Operation BREN, a series of radiation experiments performed by the United States during the 1960s. In the future, these models will be used to validate Mercury’s ability to simulate atmospheric transport.
NASA Astrophysics Data System (ADS)
Zhang, X.; Burgstaller, R.; Lai, X.; Gehrer, A.; Kefalas, A.; Pang, Y.
2016-11-01
The performance discontinuity of a pump-turbine under pumping mode is harmful to the stable operation of units in a hydropower station. In this paper, the performance discontinuity phenomenon of a pump-turbine was studied by means of experiment and numerical simulation. In the experiment, characteristics of the pump-turbine with different diffuser vane openings were tested in order to investigate the effect of the pump casing on the performance discontinuity. While other effects such as flow separation and rotating stall are known to contribute to the discontinuity, the test cases studied here show that prerotation is the dominating effect for the instability: the positions of the positive slope of the characteristics are almost the same for different diffuser vane openings. The impeller has the principal effect on the performance discontinuity. In the numerical simulation, CFD analysis of the tested pump-turbine was performed with the k-ω and SST turbulence models. It is found that the position of the performance curve discontinuity corresponds to flow recirculation at the impeller inlet. Flow recirculation at the impeller inlet is the cause of the discontinuity of the characteristic curve. It is also found that the operating condition at which flow recirculation occurs at the impeller inlet is misestimated with the k-ω and SST turbulence models. Furthermore, the original SST model has been modified. With the modified SST turbulence model we predict the position of the onset of flow recirculation at the impeller inlet correctly, and the modification also improves the prediction accuracy of the pump-turbine performance.
Barbisan, M; Zaniol, B; Pasqualotto, R
2014-11-01
A test facility for the development of the neutral beam injection system for ITER is under construction at Consorzio RFX. It will host two experiments: SPIDER, a 100 keV H(-)/D(-) ion RF source, and MITICA, a prototype of the full-performance ITER injector (1 MV, 17 MW beam). A set of diagnostics will monitor the operation and allow optimization of the performance of the two prototypes. In particular, beam emission spectroscopy will measure the uniformity and the divergence of the fast particle beam exiting the ion source and travelling through the beam line components. This type of measurement is based on the collection of the Hα/Dα emission resulting from the interaction of the energetic particles with the background gas. A numerical model has been developed to simulate the spectrum of the collected emissions in order to design this diagnostic and to study its performance. The paper describes the model at the base of the simulations and presents the modeled Hα spectra in the case of the MITICA experiment.
A model to forecast data centre infrastructure costs.
NASA Astrophysics Data System (ADS)
Vernet, R.
2015-12-01
The computing needs of the HEP community are increasing steadily, but the current funding situation in many countries is tight. As a consequence, experiments, data centres, and funding agencies have to rationalize resource usage and expenditures. CC-IN2P3 (Lyon, France) provides computing resources to many experiments including LHC, and is a major partner for astroparticle projects like LSST, CTA or Euclid. The financial cost to accommodate all these experiments is substantial and has to be planned well in advance for funding and strategic reasons. In that perspective, leveraging the infrastructure expenses, electric power costs and hardware performance observed at our site over the last years, we have built a model that integrates these data and provides estimates of the investments that would be required to cater to the experiments for the mid-term future. We present how our model is built and the expenditure forecast it produces, taking into account the experiment roadmaps. We also examine the resource growth predicted by our model over the next years assuming a flat-budget scenario.
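The CC-IN2P3 model itself is not reproduced here, but a toy flat-budget forecast conveys the mechanics: each year a fixed budget, reduced by the power bill, buys hardware whose price/performance improves, and installed capacity is the sum of the generations still in service. All numbers below are assumptions.

```python
budget = 1.0e6          # yearly budget (assumed flat)
power_cost = 2.0e5      # yearly electricity cost (assumed constant)
perf_per_unit = 1.0     # normalized performance per currency unit, year 0
improvement = 1.15      # assumed yearly price/performance gain
lifetime = 4            # years before a hardware generation is retired

purchases = []
for year in range(8):
    purchases.append((budget - power_cost) * perf_per_unit)
    installed = sum(purchases[-lifetime:])        # retire old generations
    print(f"year {year}: installed capacity = {installed:.2e} (arb. units)")
    perf_per_unit *= improvement
```

Even this toy version shows the characteristic flat-budget behavior: once retirements kick in, capacity grows only at the rate of the price/performance improvement.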
Modeling and characterization of supercapacitors for wireless sensor network applications
NASA Astrophysics Data System (ADS)
Zhang, Ying; Yang, Hengzhao
A simple circuit model is developed to describe supercapacitor behavior, which uses two resistor-capacitor branches with different time constants to characterize the charging and redistribution processes, and a variable leakage resistance to characterize the self-discharge process. The parameter values for a given supercapacitor can be determined by a charging-redistribution experiment and a self-discharge experiment. The modeling and characterization procedures are illustrated using a 22 F supercapacitor. The accuracy of the model is compared with that of other models often used in power electronics applications. The results show that the proposed model has better accuracy in characterizing the self-discharge process while maintaining performance similar to that of other models during the charging and redistribution processes. Additionally, the proposed model is evaluated in a simplified energy storage system for self-powered wireless sensors. The model performance is compared with that of a commonly used energy recursive equation (ERE) model. The results demonstrate that the proposed model can predict the evolution of the voltage across the supercapacitor more accurately than the ERE model, and therefore provides a better alternative for supporting research on storage system design and power management for wireless sensor networks.
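A minimal simulation of the model class described above (two RC branches plus a leakage resistor) is sketched below; for simplicity the leakage resistance is held constant, whereas the model described here makes it variable, and all parameter values are placeholders rather than the identified values for the 22 F device.

```python
R1, C1 = 0.05, 18.0     # fast branch: dominates the charging response
R2, C2 = 50.0, 4.0      # slow branch: dominates charge redistribution
R_leak = 9e3            # leakage path (constant here; variable in the paper)
dt, V1, V2 = 0.01, 0.0, 0.0
log = []

for k in range(int(600 / dt)):
    t = k * dt
    i_ext = 1.0 if t < 30.0 else 0.0     # 30 s charge, then open circuit
    # KCL at the terminal: i_ext = (Vt-V1)/R1 + (Vt-V2)/R2 + Vt/R_leak
    G = 1.0 / R1 + 1.0 / R2 + 1.0 / R_leak
    Vt = (i_ext + V1 / R1 + V2 / R2) / G
    V1 += dt * (Vt - V1) / (R1 * C1)     # forward-Euler capacitor updates
    V2 += dt * (Vt - V2) / (R2 * C2)
    log.append(Vt)

print(f"end of charge: {log[int(30 / dt) - 1]:.3f} V, "
      f"after 570 s rest: {log[-1]:.3f} V")
```

After the current is removed, the terminal voltage first sags as charge redistributes from the fast branch into the slow branch, then decays more slowly through the leakage path, which is exactly the behavior the charging-redistribution and self-discharge experiments are designed to separate.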
Benchmarking Multilayer-HySEA model for landslide-generated tsunamis. NTHMP validation process.
NASA Astrophysics Data System (ADS)
Macias, J.; Escalante, C.; Castro, M. J.
2017-12-01
Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven benchmarks. The Multilayer-HySEA model, which includes non-hydrostatic effects, has been used to perform all of the benchmark problems dealing with laboratory experiments proposed at the workshop organized by NTHMP at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and the University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).
Telescope performance and image simulations of the balloon-borne coded-mask protoMIRAX experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penacchioni, A. V., E-mail: ana.penacchioni@inpe.br; Braga, J., E-mail: joao.braga@inpe.br; Castro, M. A., E-mail: manuel.castro@inpe.br
2015-12-17
In this work we present the results of imaging simulations performed with the help of the GEANT4 package for the protoMIRAX hard X-ray balloon experiment. The instrumental background was simulated taking into account the various radiation components and their angular dependence, as well as a detailed mass model of the experiment. We modelled the meridian transits of the Crab Nebula and the Galactic Centre (GC) region during balloon flights in Brazil (∼ −23° of latitude and an altitude of ∼40 km) and introduced the corresponding spectra as inputs to the imaging simulations. We present images of the Crab and of three sources in the GC: 1E 1740.7-2942, GRS 1758-258 and GX 1+4. The results show that the protoMIRAX experiment is capable of making spectral and timing observations of bright hard X-ray sources as well as important imaging demonstrations that will contribute to the design of the MIRAX satellite mission.
Richter, Michael
2010-05-01
Two experiments assessed the moderating impact of task context on the relationship between reward and cardiovascular response. Randomly assigned to the cells of a 2 (task context: reward vs. demand) x 2 (reward value: low vs. high) between-persons design, participants performed either a memory task with an unclear performance standard (Experiment 1) or a visual scanning task with an unfixed performance standard (Experiment 2). Before performing the task, in which participants could earn either a low or a high reward, participants responded to questions about either task reward or task demand. In accordance with the theoretical predictions derived from Wright's (1996) integrative model, reactivity of the pre-ejection period increased with reward value if participants had rated aspects of task reward before performing the task. If they had rated task demand, pre-ejection period did not differ as a function of reward.
The empathy impulse: A multinomial model of intentional and unintentional empathy for pain.
Cameron, C Daryl; Spring, Victoria L; Todd, Andrew R
2017-04-01
Empathy for pain is often described as automatic. Here, we used implicit measurement and multinomial modeling to formally quantify unintentional empathy for pain: empathy that occurs despite intentions to the contrary. We developed the pain identification task (PIT), a sequential priming task wherein participants judge the painfulness of target experiences while trying to avoid the influence of prime experiences. Using multinomial modeling, we distinguished 3 component processes underlying PIT performance: empathy toward target stimuli (Intentional Empathy), empathy toward prime stimuli (Unintentional Empathy), and bias to judge target stimuli as painful (Response Bias). In Experiment 1, imposing a fast (vs. slow) response deadline uniquely reduced Intentional Empathy. In Experiment 2, inducing imagine-self (vs. imagine-other) perspective-taking uniquely increased Unintentional Empathy. In Experiment 3, Intentional and Unintentional Empathy were stronger toward targets with typical (vs. atypical) pain outcomes, suggesting that outcome information matters and that effects on the PIT are not reducible to affective priming. Typicality of pain outcomes more weakly affected task performance when target stimuli were merely categorized rather than judged for painfulness, suggesting that effects on the latter are not reducible to semantic priming. In Experiment 4, Unintentional Empathy was stronger for participants who engaged in costly donation to cancer charities, but this parameter was also high for those who donated to an objectively worse but socially more popular charity, suggesting that overly high empathy may facilitate maladaptive altruism. Theoretical and practical applications of our modeling approach for understanding variation in empathy are discussed.
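One plausible parameterization of the three-process tree described above (inferred from the abstract, not taken from the paper) can be written down and fitted by maximum likelihood as follows; the per-condition trial counts are invented.

```python
import numpy as np
from scipy.optimize import minimize

def p_painful(I, U, B, target, prime):
    """With prob. I respond from the target; else with prob. U from the
    prime; else guess 'painful' with prob. B."""
    return I * target + (1 - I) * U * prime + (1 - I) * (1 - U) * B

# Hypothetical "painful" responses out of 100 trials per (target, prime)
# condition, with 1 = painful stimulus.
data = {(1, 1): 92, (1, 0): 78, (0, 1): 30, (0, 0): 12}

def neg_log_like(theta):
    I, U, B = theta
    nll = 0.0
    for (tg, pr), k in data.items():
        q = np.clip(p_painful(I, U, B, tg, pr), 1e-9, 1 - 1e-9)
        nll -= k * np.log(q) + (100 - k) * np.log(1 - q)  # binomial kernel
    return nll

fit = minimize(neg_log_like, x0=[0.5, 0.5, 0.5], bounds=[(0, 1)] * 3)
print(dict(zip(["Intentional", "Unintentional", "Bias"], fit.x.round(3))))
```

The appeal of the multinomial approach is visible in the fit: a single pattern of response proportions is decomposed into separately interpretable intentional, unintentional, and bias parameters.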
2017-01-01
Objectives: The purpose of this study was to introduce our three experiments on bone morphogenetic protein (BMP) and its carriers performed using the critical-sized segmental defect (CSD) model in the rat fibula and to investigate the development of animal models and carriers for more effective bone regeneration. Materials and Methods: For the experiments, 14, 16, and 24 rats with CSDs on both fibulae were used in Experiments 1, 2, and 3, respectively. BMP-2 with absorbable collagen sponge (ACS) (Experiments 1 and 2), autoclaved autogenous bone (AAB) and fibrin glue (FG) (Experiment 3), and xenogenic bone (Experiment 2) were used in the experimental groups. Radiographic and histomorphological evaluations were performed during the follow-up period of each experiment. Results: Significant new bone formation was commonly observed in all experimental groups using BMP-2 compared to control and xenograft (porcine bone) groups. Although there was some difference based on BMP carrier, regenerated bone volume was typically reduced by remodeling after initially forming excessive bone. Conclusion: BMP-2 demonstrates excellent ability for bone regeneration because of its osteoinductivity, but efficacy can be significantly different depending on its delivery system. ACS and FG showed relatively good bone regeneration capacity, satisfying the essential conditions of localization and release-control when used as BMP carriers. AAB could not provide release-control as a BMP carrier, but its space-maintenance role was remarkable. Carriers and scaffolds that can provide sufficient support to the BMP/carrier complex are necessary for large bone defects, and AAB is thought to be able to act as an effective scaffold. The CSD model of the rat fibula is simple and useful for initial estimates of bone regeneration by agents including BMPs. PMID:29333367
Characterization of beryllium deformation using in-situ x-ray diffraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magnuson, Eric Alan; Brown, Donald William; Clausen, Bjorn
2015-08-24
Beryllium's unique mechanical properties are extremely important in a number of high performance applications. Consequently, accurate models for the mechanical behavior of beryllium are required. However, current models are not sufficiently microstructure aware to accurately predict the performance of beryllium under a range of processing and loading conditions. Previous experiments conducted using the SMARTS and HIPPO instruments at the Lujan Center (LANL) have studied the relationship between strain rate and texture development, but due to the limitations of neutron diffraction studies, it was not possible to measure the response of the material in real time. In-situ diffraction experiments conducted at the Advanced Photon Source have allowed the real-time measurement of the mechanical response of compressed beryllium. Samples of pre-strained beryllium were reloaded orthogonal to their original load path to show the reorientation of already twinned grains. Additionally, the in-situ experiments allowed the real-time tracking of twin evolution in beryllium strained at high rates. The data gathered during these experiments will be used in the development and validation of a new, microstructure-aware model of the constitutive behavior of beryllium.
Payload Planning for the International Space Station
NASA Technical Reports Server (NTRS)
Johnson, Tameka J.
1995-01-01
A review of the evolution of the International Space Station (ISS) was performed for the purpose of understanding the project objectives. It was requested that an analysis of the current Office of Space Access and Technology (OSAT) Partnership Utilization Plan (PUP) traffic model be completed to monitor the process through which the scientific experiments, called payloads, are manifested for flight to the ISS. A viewing analysis of the ISS was also proposed to identify the capability to observe the United States Laboratory (US LAB) during the assembly sequence. Observations of the Drop-Tower experiment and nondestructive testing procedures were also performed to maximize the intern's technical experience. Contributions were made to the meeting in which the 1996 OSAT (or Code X) PUP traffic model was generated using the software tool FileMaker Pro. The current OSAT traffic model satisfies the requirement for manifesting and delivering the proposed payloads to the station. The current viewing capability of the station provides the ability to view the US LAB during the station assembly sequence. The Drop-Tower experiment successfully simulates the effect of microgravity and conveniently documents the results for later use. The nondestructive test proved effective in determining stress in various components tested.
Assimilation of satellite altimeter data in a primitive-equation model of the Azores Madeira region
NASA Astrophysics Data System (ADS)
Gavart, Michel; De Mey, Pierre; Caniaux, Guy
1999-07-01
The aim of this study is to implement satellite altimetric assimilation into a high-resolution primitive-equation ocean model and check the validity and sensitivity of the results. Beyond this paper, the longer-term objective is to obtain a dynamical tool capable of simulating the surface ocean processes linked to air-sea interactions as well as performing mesoscale ocean forecasting. For computational cost and practical reasons, this study takes place in a 1000 by 1000 sq km open domain of the Canary basin. The assimilation experiments are carried out with the combined TOPEX/POSEIDON and ERS-1 data sets between June 1993 and December 1993. The space-time domain overlaps with in situ data collected during the SEMAPHORE experiment and thus enables an objective validation of the results. A special boundary treatment is applied to the model by creating a surrounding recirculating area separated from the interior by a buffer zone. The altimetric assimilation is done by implementing a reduced-order optimal interpolation algorithm with a special vertical projection of the surface model/data misfits. We perform a first experiment with a vertical projection onto an isopycnal EOF representing the Azores Current vertical variability. An objective validation of the model's velocities against Lagrangian float data shows good results (the correlation is 0.715 at 150 dbar). The question of the sensitivity to the vertical projection is addressed by performing similar experiments using a method for lifting/lowering of the water column, and using an EOF in Z-coordinates. Comparisons with in situ temperature data do not show any significant difference between the three projections after five months of assimilation. However, in order to preserve the large-scale water characteristics, we felt that the isopycnal projection was a more physically consistent choice. The complementary character of the two satellites is then assessed with two additional experiments which use each altimeter data set separately. There is evidence of the benefit of combining the two data sets. In addition, an experiment assimilating long-wavelength bias-corrected CLS altimetric maps every 10 days exhibits the best correlation scores and emphasizes the importance of reducing the orbit error and biases in the altimetric data sets. The surface layers of the model are forced using realistic daily wind stress values computed from ECMWF analyses. Although we resolve small space and time scales, in our limited domain the wind stress does not significantly influence the quality of the results obtained with the altimetric assimilation. Finally, the relative effects of the data selection procedure and of the integration times (cycle lengths) are explored by performing data-window experiments. A value of 10 days seems to be the most satisfactory cycle length.
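The reduced-order optimal interpolation step has the standard form x_a = x_b + K (y - H x_b) with gain K = B H^T (H B H^T + R)^-1; the toy sketch below applies it to made-up matrices and omits the vertical EOF projection discussed in the text.

```python
import numpy as np

n, m = 5, 2                          # state size, number of observations
x_b = np.zeros(n)                    # background state
d = np.subtract.outer(np.arange(n), np.arange(n))
B = np.exp(-0.5 * (d / 1.5) ** 2)    # Gaussian background error covariance
H = np.zeros((m, n))
H[0, 1] = H[1, 3] = 1.0              # observe state components 1 and 3
R = 0.1 * np.eye(m)                  # observation error covariance
y = np.array([1.0, -0.5])            # altimetric "observations" (made up)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # optimal gain
x_a = x_b + K @ (y - H @ x_b)                  # analysis update
print(x_a.round(3))                  # increments spread via covariances in B
```

The role of the vertical projection in the paper is precisely to make B tractable: surface misfits are mapped onto a single vertical structure (an EOF) instead of a full three-dimensional covariance.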
Design of a 4-DOF MR haptic master for application to robot surgery: virtual environment work
NASA Astrophysics Data System (ADS)
Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok
2014-09-01
This paper presents the design and control performance of a novel type of 4-degrees-of-freedom (4-DOF) haptic master operating in cyberspace for a robot-assisted minimally invasive surgery (RMIS) application. By using a controllable magnetorheological (MR) fluid, the proposed haptic master can provide a feedback function for a surgical robot. Due to the difficulty of utilizing real human organs in the experiment, a cyberspace featuring a virtual object is constructed to evaluate the performance of the haptic master. In order to realize the cyberspace, a volumetric deformable object is represented by a shape-retaining chain-linked (S-chain) model, which is a fast volumetric model suitable for real-time applications. In the haptic architecture for an RMIS application, the desired torque and position induced from the virtual object of the cyberspace and the haptic master of real space are transferred to each other. In order to validate the superiority of the proposed master and volumetric model, a tracking control experiment is implemented with a nonhomogeneous volumetric cubic object to demonstrate that the proposed model can be utilized in a real-time haptic rendering architecture. A proportional-integral-derivative (PID) controller is then designed and empirically implemented to accomplish the desired torque trajectories. It has been verified from the experiment that tracking control performance for torque trajectories from a virtual slave can be successfully achieved.
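As a sketch of the control layer, a minimal discrete PID loop tracking a sinusoidal torque command against a first-order stand-in plant is shown below; the gains, the plant time constant, and the trajectory are assumptions, not the identified haptic-master dynamics.

```python
import numpy as np

kp, ki, kd, dt = 8.0, 20.0, 0.05, 0.001
integ, prev_err = 0.0, 0.0
torque, tau = 0.0, 0.02              # first-order plant, time constant tau

for k in range(2000):
    t = k * dt
    desired = 0.5 * np.sin(2 * np.pi * t)        # desired torque (N*m)
    err = desired - torque
    integ += err * dt                            # integral of the error
    deriv = (err - prev_err) / dt                # derivative of the error
    u = kp * err + ki * integ + kd * deriv       # PID control input
    prev_err = err
    torque += dt * (u - torque) / tau            # plant response (Euler)

print(f"tracking error at t = 2 s: {err:+.4f} N*m")
```

In the MR-fluid master the control input would presumably modulate the field applied to the fluid, but the loop structure is the same as in this sketch.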
Watts Bar Lock Valve Model Study
2013-08-01
stream of the valve, and cavitation potential for various valve openings. Design modifications to improve performance were to be recommended. The...operation schedule to avoid gate openings with cavitation problems. This report provides results of the recent model experiments performed on the...mid-range openings. The noise is caused by cavitation in the culvert downstream of the filling valves. The vapor cavities that form from the low
CSEP-Japan: The Japanese node of the collaboratory for the study of earthquake predictability
NASA Astrophysics Data System (ADS)
Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.
2011-12-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models applied to Japan. A total of 91 earthquake forecast models were submitted to the prospective experiment starting from 1 November 2009. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions covering an area of Japan including the sea area, the Japanese mainland and the Kanto district. We evaluate the performance of the models in the official suite of tests defined by CSEP. The experiments of the 1-day, 3-month, 1-year and 3-year forecasting classes were implemented for 92 rounds, 4 rounds, 1 round and 0 rounds (now in progress), respectively. The results of the 3-month class gave us new knowledge concerning statistical forecasting models. All models showed good performance for magnitude forecasting. On the other hand, the observed spatial distribution is hardly consistent with most models in some cases where many earthquakes occurred at the same spot. Throughout the experiment, it has become clear that some of the CSEP evaluation tests, such as the L-test, show strong correlation with the N-test. We are now developing our (cyber-)infrastructure to support the forecast experiment as follows. (1) Japanese seismicity has changed since the 2011 Tohoku earthquake. The 3rd call for forecasting models was announced in order to promote model improvement for forecasting earthquakes after this event, so we provide the Japanese seismicity catalog maintained by JMA for modelers to study how seismicity has changed in Japan. (2) We are preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. (3) The testing center improved the evaluation system for the 1-day class experiment, because this testing class requires fast calculation to finish forecasting and testing within one day. This development will make a real-time forecasting system possible. (4) The first part of the special issue titled Earthquake Forecast Testing Experiment in Japan was published in Earth, Planets and Space, Vol. 63, No. 3, in March 2011. This issue includes papers on the algorithms of the statistical models participating in our experiment and an outline of the experiment in Japan. The second part of this issue, now online, will be published soon. In this presentation, we give an overview of CSEP-Japan and the results of the experiments, and discuss the direction of our activity. An outline of the experiment and the activities of the Japanese Testing Center are published on our web site.
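For reference, the N-test mentioned above checks a forecast's total expected count against the observed count under a Poisson assumption; a minimal sketch, with the two-tailed convention following common CSEP practice and the threshold chosen for illustration:

```python
from scipy.stats import poisson

# Sketch of the CSEP N-test: is the observed earthquake count consistent
# with the forecast's expected total under a Poisson assumption?
def n_test(n_forecast, n_observed, alpha=0.05):
    delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)  # P(at least N_obs)
    delta2 = poisson.cdf(n_observed, n_forecast)            # P(at most N_obs)
    passed = (delta1 > alpha / 2) and (delta2 > alpha / 2)
    return delta1, delta2, passed

print(n_test(n_forecast=120.0, n_observed=100))
```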
Determination of Global Stability of the Slosh Motion in a Spacecraft via Numerical Experiment
NASA Astrophysics Data System (ADS)
Kang, Ja-Young
2003-12-01
The global stability of the attitude motion of a spin-stabilized space vehicle is investigated by performing numerical experiments. In a previous study, a stationary solution and a particular resonant condition for a given model were found using an analytical method, which, however, failed to represent the system stability over parameter values near and off the stationary points. Accordingly, as an extension of the previous work, this study performs numerical experiments to investigate the stability of the system across the parameter space and determines stable and unstable regions of the design parameters of the system.
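A minimal sketch of such a stability sweep, with a damped Mathieu oscillator standing in for the actual slosh-coupled attitude dynamics; the equation, parameter grid, and runaway threshold are illustrative stand-ins, not the paper's model:

```python
import numpy as np

# Integrate over a grid of design parameters and flag bounded motion.
def bounded(delta, eps, steps=100_000, dt=0.002):
    x, v = 1e-3, 0.0                           # small initial perturbation
    for k in range(steps):
        a = -(delta + eps * np.cos(k * dt)) * x - 0.01 * v
        v += a * dt                            # symplectic Euler step
        x += v * dt
        if abs(x) > 1e3:                       # runaway growth -> unstable
            return False
    return True

for delta in (0.15, 0.25, 0.35):               # parametric resonance near 0.25
    for eps in (0.05, 0.30):
        print(delta, eps, "stable" if bounded(delta, eps) else "unstable")
```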
Does the mean adequately represent reading performance? Evidence from a cross-linguistic study
Marinelli, Chiara V.; Horne, Joanna K.; McGeown, Sarah P.; Zoccolotti, Pierluigi; Martelli, Marialuisa
2014-01-01
Reading models are largely based on the interpretation of average data from normal or impaired readers, mainly drawn from English-speaking individuals. In the present study we evaluated the possible contribution of orthographic consistency in generating individual differences in reading behavior. We compared the reading performance of young adults speaking English (one of the most irregular orthographies) and Italian (a very regular orthography). In the 1st experiment we presented 22 English and 30 Italian readers with 5-letter words using the Rapid Serial Visual Presentation (RSVP) paradigm. In a 2nd experiment, we evaluated a new group of 26 English and 32 Italian proficient readers through the RSVP procedure and lists matched in the two languages for both number of phonemes and letters. The results of the two experiments indicate that English participants read at a similar rate but with much greater individual differences than the Italian participants. In a 3rd experiment, we extended these results to a vocal reaction time (vRT) task, examining the effect of word frequency. An ex-Gaussian distribution analysis revealed differences between languages in the size of the exponential parameter (tau) and in the variance (sigma), but not the mean, of the Gaussian component. Notably, English readers were more variable for both tau and sigma than Italian readers. The pattern of performance in English individuals runs counter to models of performance in timed tasks (Faust et al., 1999; Myerson et al., 2003) which envisage a general relationship between mean performance and variability; indeed, this relationship does not hold in the case of the English participants. The present data highlight the importance of developing reading models that not only capture mean level performance, but also variability across individuals, especially in order to account for cross-linguistic differences in reading behavior. PMID:25191289
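The ex-Gaussian decomposition used above can be fitted by maximum likelihood; a minimal sketch with simulated reaction times, using scipy's exponnorm distribution whose shape parameter is K = tau/sigma (the parameter values are illustrative, not the study's estimates):

```python
import numpy as np
from scipy.stats import exponnorm

# Recover the Gaussian mean/SD (mu, sigma) and exponential tail (tau)
# of an ex-Gaussian RT distribution by MLE.
rng = np.random.default_rng(1)
mu, sigma, tau = 450.0, 40.0, 120.0            # ms, illustrative values
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

K, loc, scale = exponnorm.fit(rts)             # scipy's shape K = tau/sigma
print(f"mu={loc:.0f}  sigma={scale:.0f}  tau={K*scale:.0f}")
```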
First order error corrections in common introductory physics experiments
NASA Astrophysics Data System (ADS)
Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team
As a part of introductory physics courses, students perform standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is paid to their source. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant program for financial support of this project.
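An example of the kind of first-order correction discussed: the leading finite-amplitude term in the pendulum period, T = T0(1 + theta0^2/16), which the small-angle model ignores (lab values illustrative):

```python
import numpy as np

# Leading finite-amplitude correction to the pendulum period:
# T ~= T0 * (1 + theta0**2 / 16), theta0 in radians (standard result).
g, L = 9.81, 1.0                      # illustrative lab values
T0 = 2 * np.pi * np.sqrt(L / g)       # small-angle period
for deg in (5, 15, 30):
    theta0 = np.radians(deg)
    T = T0 * (1 + theta0**2 / 16)
    print(f"{deg:3d} deg: T = {T:.4f} s (small-angle: {T0:.4f} s)")
```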
Neumann, M; Friedl, S; Meining, A; Egger, K; Heldwein, W; Rey, J F; Hochberger, J; Classen, M; Hohenberger, W; Rösch, T
2002-10-01
In most European countries, training in GI endoscopy has largely been based on hands-on acquisition of experience in patients rather than on a structured training programme. With the development of training models, systematic hands-on training in a variety of diagnostic and therapeutic endoscopy techniques was achieved. Little, however, is known about methods of objectively assessing trainees' performance. We therefore developed an assessment 'score card' for upper GI endoscopy and tested it in endoscopists with various levels of experience. The aim of the study was to assess interobserver variations in the evaluation of trainees. On the basis of textbook and expert opinions, a consensus group of eight experienced endoscopists developed a score card for diagnostic upper GI endoscopy with biopsy. The score card includes an assessment of the single steps of the procedure as well as of the times needed to complete each step. This score card was then evaluated in a further conference including ten experts who blindly assessed videotapes of 15 endoscopists performing upper GI endoscopy in a training bio-simulation model (the 'Erlangen Endo-Trainer'). On the basis of their previous experience (i.e., the number of endoscopies performed), these 15 endoscopists were classified into four groups: very experienced, experienced, having some experience, and inexperienced. Interobserver variability (IOV) was tested for the various score card parameters (Kendall's rank-correlation coefficient 0.0-0.5 poor, 0.5-1.0 good agreement). In addition, the correlation between the score card assessment and the examiners' experience levels was analysed. Despite poor IOV results for all the parameters tested (Kendall coefficient < 0.3), the assessment parameters correlated well when the examiners' different experience levels were taken into account (correlation coefficient 0.59-0.89, p < 0.05). The score card parameters were suitable for differentiating between the four groups of examiners with different levels of endoscopic experience. As expected with scores involving subjective assessment of performance, the variability between reviewers was substantial. Nevertheless, the assessment score was capable of distinguishing reliably between different experience levels, with good individual observer consistency. The score card can therefore be used to document both training status and progress during endoscopy training courses using bio-simulation models, and this might provide improved quality assurance in GI endoscopy training.
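Kendall's rank correlation used for the IOV analysis can be computed directly; a minimal sketch with illustrative ratings, not the study's data:

```python
from scipy.stats import kendalltau

# Kendall's tau between two raters' score-card rankings of the same
# 15 endoscopists (scores below are illustrative).
rater_a = [3, 1, 4, 2, 5, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14]
rater_b = [2, 1, 5, 3, 4, 8, 6, 10, 7, 12, 9, 14, 11, 15, 13]
tau, p = kendalltau(rater_a, rater_b)
print(f"tau={tau:.2f} (poor < 0.5, good >= 0.5), p={p:.3g}")
```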
Modeling of NASA's 30/20 GHz satellite communications system
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Maples, B. W.; Stevens, G. A.
1984-01-01
NASA is in the process of developing technology for a 30/20 GHz satellite communications link. Currently hardware is being assembled for a test transponder. A simulation package is being developed to study the link performance in the presence of interference and noise. This requires developing models for the components of the system. This paper describes techniques used to model the components for which data is available. Results of experiments performed using these models are described. A brief overview of NASA's 30/20 GHz communications satellite program is also included.
Dynamics and control simulation of the Spacelab Experiment Pointing Mount
NASA Technical Reports Server (NTRS)
Marsh, E. L.; Ward, R. S.
1977-01-01
Computer simulations were developed to evaluate the performance of four Experiment Pointing Mounts (EPM) being considered for Spacelab experiments in the 1980-1990 time frame. The system modeled comprises a multibody system consisting of the shuttle, a mechanical isolation device, the EPM, celestial and inertial sensors, bearings, gimbal torque motors and associated nonlinearities, the experiment payload, and control and estimator algorithms. Each mount was subjected to a common disturbance (shuttle vernier thruster firing and man push-off) and command (stellar pointing or solar raster scan) input. The fundamental limitation common to all mounts was found to be sensor noise. System dynamics and hardware nonlinearities have secondary effects on pointing performance for sufficiently high bandwidth.
Levesque, Jean-Frédéric; Pineault, Raynald; Provost, Sylvie; Tousignant, Pierre; Couture, Audrey; Da Silva, Roxane Borgès; Breton, Mylaine
2010-12-01
The Canadian healthcare system is currently experiencing important organizational transformations through the reform of primary healthcare (PHC). These reforms vary in scope but share a common feature of proposing the transformation of PHC organizations by implementing new models of PHC organization. These models vary in their performance with respect to client affiliation, utilization of services, experience of care and perceived outcomes of care. In early 2005 we conducted a study in the two most populous regions of Quebec province (Montreal and Montérégie) which assessed the association between prevailing models of primary healthcare (PHC) and population-level experience of care. The goal of the present research project is to track the evolution of PHC organizational models and their relative performance through the reform process (from 2005 until 2010) and to assess factors at the organizational and contextual levels that are associated with the transformation of PHC organizations and their performance. This study will consist of three interrelated surveys, hierarchically nested. The first survey is a population-based survey of randomly-selected adults from two populous regions in the province of Quebec. This survey will assess the current affiliation of people with PHC organizations, their level of utilization of healthcare services, attributes of their experience of care, reception of preventive and curative services and perception of unmet needs for care. The second survey is an organizational survey of PHC organizations assessing aspects related to their vision, organizational structure, level of resources, and clinical practice characteristics. This information will serve to develop a taxonomy of organizations using a mixed methods approach of factorial analysis and principal component analysis. The third survey is an assessment of the organizational context in which PHC organizations are evolving. The five year prospective period will serve as a natural experiment to assess contextual and organizational factors (in 2005) associated with migration of PHC organizational models into new forms or models (in 2010) and assess the impact of this evolution on the performance of PHC. The results of this study will shed light on changes brought about in the organization of PHC and on factors associated with these changes.
An investigation of FeCrAl cladding behavior under normal operating and loss of coolant conditions
Gamble, Kyle A.; Barani, Tommaso; Pizzocri, David; ...
2017-04-30
Iron-chromium-aluminum (FeCrAl) alloys are candidates to be used as nuclear fuel cladding for increased accident tolerance. An analysis of the response of FeCrAl under normal operating and loss of coolant conditions has been performed using fuel performance modeling. In particular, recent information on FeCrAl material properties and phenomena from separate effects tests has been implemented in the BISON fuel performance code, and analyses of integral fuel rod behavior with FeCrAl cladding have been performed. BISON simulations included both light water reactor normal operation and loss-of-coolant accident transients. In order to model fuel rod behavior during accidents, a cladding failure criterion is desirable. For FeCrAl alloys, a failure criterion is developed using recent burst experiments under loss-of-coolant-like conditions. The added material models are utilized to perform comparative studies with Zircaloy-4 under normal operating conditions and oxidizing and non-oxidizing out-of-pile loss of coolant conditions. The results indicate that for all conditions studied, FeCrAl behaves similarly to Zircaloy-4 with the exception of improved oxidation performance. Further experiments are required to confirm these observations.
Analysis of a Rocket Based Combined Cycle Engine during Rocket Only Operation
NASA Technical Reports Server (NTRS)
Smith, T. D.; Steffen, C. J., Jr.; Yungster, S.; Keller, D. J.
1998-01-01
The all rocket mode of operation is a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. However, outside of performing experiments or a full three dimensional analysis, there are no first order parametric models to estimate performance. As a result, an axisymmetric RBCC engine was used to analytically determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design of experiments methodology was used to construct a test matrix and statistical regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, percent of injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inject diameter ratio. A perfect gas computational fluid dynamics analysis was performed to obtain values of vacuum specific impulse. Statistical regression analysis was performed based on both full flow and gas generator engine cycles. Results were also found to be dependent upon the entire cycle assumptions. The statistical regression analysis determined that there were five significant linear effects, six interactions, and one second-order effect. Two parametric models were created to provide performance assessments of an RBCC engine in the all rocket mode of operation.
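A minimal sketch of the DOE-plus-regression approach described above: fit a response surface with main effects, two-factor interactions, and a second-order term by least squares (factor names, coefficients, and data are illustrative, not the paper's):

```python
import numpy as np

# DOE-style response-surface fit: least-squares regression of specific
# impulse on main effects, interactions, and one quadratic term.
rng = np.random.default_rng(0)
n = 32
pc, ar, flow = rng.uniform(-1, 1, (3, n))       # coded factors in [-1, 1]
isp = 440 + 12*pc + 8*ar - 5*flow + 4*pc*ar + 3*ar**2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), pc, ar, flow,
                     pc*ar, pc*flow, ar*flow, ar**2])
beta, *_ = np.linalg.lstsq(X, isp, rcond=None)
print(np.round(beta, 1))                        # recovers the coefficients
```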
Moorthy, Arun S; Eberl, Hermann J
2014-04-01
Fermentation reactor systems are a key platform in studying intestinal microflora, specifically with respect to questions surrounding the effects of diet. In this study, we develop computational representations of colon fermentation reactor systems as a way to assess the influence of three design elements (number of reactors, emptying mechanism, and inclusion of microbial immobilization) on three performance measures (total biomass density, biomass composition, and fibre digestion efficiency) using a fractional-factorial experimental design. It was determined that the choice of emptying mechanism showed no effect on any of the performance measures. Additionally, it was determined that none of the design criteria had any measurable effect on reactor performance with respect to biomass composition. It is recommended that model fermentation systems used in experiments on dietary effects on intestinal biomass composition be streamlined to include only necessary system design complexities, as the measured performance is not improved by the addition of microbial immobilization mechanisms or a semi-continuous emptying scheme. Additionally, the added complexities significantly increase computational time during simulation experiments. It was also noted that the same factorial experiment could be directly adapted using in vitro colon fermentation systems.
System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling
Bacelli, Giorgio; Coe, Ryan; Patterson, David; ...
2017-04-01
Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and the methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data is then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.
Stochastic model predicts evolving preferences in the Iowa gambling task
Fuentes, Miguel A.; Lavín, Claudio; Contreras-Huerta, L. Sebastián; Miguel, Hernan; Rosales Jubal, Eduardo
2014-01-01
Learning under uncertainty is a common task that people face in their daily life. This process relies on the cognitive ability to adjust behavior to environmental demands. Although the biological underpinnings of those cognitive processes have been extensively studied, there has been little work in formal models seeking to capture the fundamental dynamic of learning under uncertainty. In the present work, we aimed to understand the basic cognitive mechanisms of outcome processing involved in decisions under uncertainty and to evaluate the relevance of previous experiences in enhancing learning processes within such an uncertain context. We propose a formal model that emulates the behavior of people playing a well-established paradigm (Iowa Gambling Task, IGT) and compare its outcome with a behavioral experiment. We further explored whether it was possible to emulate maladaptive behavior observed in clinical samples by modifying the model parameter which controls the update of expected outcome distributions. Results showed that the performance of the model resembles the observed participant performance as well as IGT performance by healthy subjects described in the literature. Interestingly, the model converges faster than some subjects on the decks with higher net expected outcome. Furthermore, the modified version of the model replicated the trend observed in clinical samples performing the task. We argue that the basic cognitive component underlying learning under uncertainty can be represented as a differential equation that considers the outcomes of previous decisions for guiding the agent to an adaptive strategy. PMID:25566043
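A minimal sketch of the kind of outcome-expectancy update the model formalizes, here as a discrete delta rule on an IGT-like four-deck bandit; the payoffs, learning rate, and choice rule are illustrative stand-ins for the paper's differential-equation model:

```python
import numpy as np

# Delta-rule learner on an IGT-like bandit: expectancies are updated
# toward observed payoffs; the update rate plays the role of the
# clinically varied parameter (all values illustrative).
rng = np.random.default_rng(7)
payoff_mean = np.array([-25.0, -25.0, 25.0, 25.0])  # net EV per deck
expect = np.zeros(4)
rate, temperature = 0.2, 0.05

for trial in range(200):
    p = np.exp(temperature * expect)
    deck = rng.choice(4, p=p / p.sum())             # softmax choice
    reward = rng.normal(payoff_mean[deck], 100.0)
    expect[deck] += rate * (reward - expect[deck])  # delta rule

print(np.round(expect))  # drifts toward the advantageous decks' EVs
```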
The Capillary Flow Experiments Aboard the International Space Station: Increments 9-15
NASA Technical Reports Server (NTRS)
Jenson, Ryan M.; Weislogel, Mark M.; Tavan, Noel T.; Chen, Yongkang; Semerjian, Ben; Bunnell, Charles T.; Collicott, Steven H.; Klatte, Jorg; Dreyer, Michael E.
2009-01-01
This report provides a summary of the experimental, analytical, and numerical results of the Capillary Flow Experiment (CFE) performed aboard the International Space Station (ISS). The experiments were conducted in space from Increment 9 through Increment 16, beginning in August 2004 and ending in December 2007. Both primary and extra science experiments were conducted during 19 operations performed by 7 astronauts: M. Fincke, W. McArthur, J. Williams, S. Williams, M. Lopez-Alegria, C. Anderson, and P. Whitson. CFE consists of 6 approximately 1 to 2 kg handheld experiment units designed to investigate a selection of capillary phenomena of fundamental and applied importance, such as large-length-scale contact line dynamics (CFE-Contact Line), critical wetting in discontinuous structures (CFE-Vane Gap), and capillary flows and passive phase separations in complex containers (CFE-Interior Corner Flow). Highly quantitative video from the simply performed flight experiments provides data helpful in benchmarking numerical methods, confirming theoretical models, and guiding new model development. In an extensive executive summary, a brief history of the experiment is reviewed before introducing the science investigated. A selection of experimental results and comparisons with both analytic and numerical predictions is given. The subsequent chapters provide additional details of the experimental and analytical methods developed and employed, including the current state of the data reduction, which we anticipate will continue throughout the year and culminate in several more publications. An extensive appendix provides support material such as an experiment history, dissemination items to date (CFE publications, etc.), detailed design drawings, and crew procedures. Despite the simple nature of the experiments and procedures, many of the experimental results may be practically employed to enhance the design of spacecraft engineering systems involving capillary interface dynamics.
Seth, Ajay; Sherman, Michael; Reinbolt, Jeffrey A; Delp, Scott L
Movement science is driven by observation, but observation alone cannot elucidate principles of human and animal movement. Biomechanical modeling and computer simulation complement observations and inform experimental design. Biological models are complex, and specialized software is required for building, validating, and studying them. Furthermore, common access is needed so that investigators can contribute models to a broader community and leverage past work. We are developing OpenSim, a freely available musculoskeletal modeling and simulation application and set of libraries specialized for these purposes, by providing: musculoskeletal modeling elements, such as biomechanical joints, muscle actuators, ligament forces, compliant contact, and controllers; and tools for fitting generic models to subject-specific data and performing inverse kinematics and forward dynamic simulations. OpenSim performs an array of physics-based analyses to delve into the behavior of musculoskeletal models by employing Simbody, an efficient and accurate multibody system dynamics code. Models are publicly available and are often reused for multiple investigations because they provide a rich set of behaviors that enables different lines of inquiry. This report discusses one model developed to study walking, applied to gain deeper insights into muscle function in pathological gait and during running. We then illustrate how simulations can test fundamental hypotheses and focus the aims of in vivo experiments, with a postural stability platform and human model that provide a research environment for performing human posture experiments in silico. We encourage wide adoption of OpenSim for community exchange of biomechanical models and methods and welcome new contributors.
Feature inference with uncertain categorization: Re-assessing Anderson's rational model.
Konovalova, Elizaveta; Le Mens, Gaël
2017-09-18
A key function of categories is to support predictions about unobserved features of objects. At the same time, humans are often in situations where the categories of the objects they perceive are uncertain. In an influential paper, Anderson (Psychological Review, 98(3), 409-429, 1991) proposed a rational model for feature inferences with uncertain categorization. A crucial feature of this model is the conditional independence assumption: it assumes that the within-category feature correlation is zero. In prior research, this model has been found to provide a poor fit to participants' inferences, but this evidence is restricted to task environments inconsistent with the conditional independence assumption. Currently available evidence thus provides little information about how this model would fit participants' inferences in a setting with conditional independence. In four experiments based on a novel paradigm and one experiment based on an existing paradigm, we assess the performance of Anderson's model under conditional independence. We find that this model predicts participants' inferences better than competing models. One competing model assumes that inferences are based on just the most likely category; the second is insensitive to categories but sensitive to overall feature correlation. The performance of Anderson's model is evidence that inferences were influenced not only by the most likely category but also by the other candidate category. Our findings suggest that a version of Anderson's model which relaxes the conditional independence assumption will likely perform well in environments characterized by within-category feature correlation.
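The contrast between Anderson's mixed prediction and the single-category strategy reduces to a weighted average over category posteriors; a minimal sketch with illustrative probabilities:

```python
import numpy as np

# Anderson-style rational feature inference: the predicted probability
# of a feature mixes over candidate categories, weighted by each
# category's posterior (numbers are illustrative).
post = np.array([0.7, 0.3])          # P(category | observed features)
p_feat = np.array([0.9, 0.2])        # P(feature | category); conditional
                                     # independence within each category
mixed = post @ p_feat                # Anderson-style prediction: 0.69
single = p_feat[np.argmax(post)]     # most-likely-category-only: 0.90
print(mixed, single)
```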
Similarity Theory of Withdrawn Water Temperature Experiment
2015-01-01
Selective withdrawal from a thermally stratified reservoir has been widely utilized in managing reservoir water withdrawal. Besides theoretical analysis and numerical simulation, model tests are also necessary in studying the temperature of withdrawn water. However, information on the similarity theory of the withdrawn water temperature model remains lacking. Considering the flow features of selective withdrawal, the similarity theory of the withdrawn water temperature model was analyzed theoretically based on the modification of the governing equations, the Boussinesq approximation, and some simplifications. The similarity conditions between the model and the prototype are suggested, and the conversion of withdrawn water temperature between the model and the prototype is proposed. Meanwhile, the fundamental theory of temperature distribution conversion is first proposed, which can significantly improve experiment efficiency when the basic temperature of the model differs from that of the prototype. Based on the similarity theory, an experiment was performed on the withdrawn water temperature and verified by a numerical method. PMID:26065020
SUMMA and Model Mimicry: Understanding Differences Among Land Models
NASA Astrophysics Data System (ADS)
Nijssen, B.; Nearing, G. S.; Ou, G.; Clark, M. P.
2016-12-01
Model inter-comparison and model ensemble experiments suffer from an inability to explain the mechanisms behind differences in model outcomes. We can clearly demonstrate that the models are different, but we cannot necessarily identify the reasons why, because most models exhibit myriad differences in process representations, model parameterizations, model parameters and numerical solution methods. This inability to identify the reasons for differences in model performance hampers our understanding and limits model improvement, because we cannot easily identify the most promising paths forward. We have developed the Structure for Unifying Multiple Modeling Alternatives (SUMMA) to allow for controlled experimentation with model construction, numerical techniques, and parameter values and therefore isolate differences in model outcomes to specific choices during the model development process. In developing SUMMA, we recognized that hydrologic models can be thought of as individual instantiations of a master modeling template that is based on a common set of conservation equations for energy and water. Given this perspective, SUMMA provides a unified approach to hydrologic modeling that integrates different modeling methods into a consistent structure with the ability to instantiate alternative hydrologic models at runtime. Here we employ SUMMA to revisit a previous multi-model experiment and demonstrate its use for understanding differences in model performance. Specifically, we implement SUMMA to mimic the spread of behaviors exhibited by the land models that participated in the Protocol for the Analysis of Land Surface Models (PALS) Land Surface Model Benchmarking Evaluation Project (PLUMBER) and draw conclusions about the relative performance of specific model parameterizations for water and energy fluxes through the soil-vegetation continuum. SUMMA's ability to mimic the spread of model ensembles and the behavior of individual models can be an important tool in focusing model development and improvement efforts.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
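For orientation, the single-accuracy EI criterion that EQI/EQIE extend can be written in a few lines from the Gaussian process posterior; a minimal sketch under a minimization convention, with illustrative values:

```python
import numpy as np
from scipy.stats import norm

# Expected improvement (EI) from the GP posterior mean mu(x) and
# standard deviation sd(x), relative to the best observed value.
def expected_improvement(mu, sd, best):
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

mu = np.array([1.2, 0.9, 1.5])      # GP posterior means at candidates
sd = np.array([0.1, 0.4, 0.6])      # GP posterior standard deviations
print(expected_improvement(mu, sd, best=1.0))  # evaluate argmax next
```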
A Model for Developing Meta-Cognitive Tools in Teacher Apprenticeships
ERIC Educational Resources Information Center
Bray, Paige; Schatz, Steven
2013-01-01
This research investigates a model for developing meta-cognitive tools to be used by pre-service teachers during apprenticeship (student teaching) experience to operationalise the epistemological model of Cook and Brown (2009). Meta-cognitive tools have proven to be effective for increasing performance and retention of undergraduate students.…
A Comprehensive Expectancy Motivation Model: Implications for Adult Education and Training.
ERIC Educational Resources Information Center
Howard, Kenneth W.
1989-01-01
The Comprehensive Expectancy Motivation Model is based on valence-instrumentality-expectancy theory. It describes expectancy motivation as part of a larger process that includes past experience, motivation, effort, performance, reward, and need satisfaction. The model has significant implications for the design, marketing, and delivery of adult…
Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines
NASA Astrophysics Data System (ADS)
Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.
2016-12-01
Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model while satisfying the requirements of flexibility, accuracy, and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
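A minimal sketch of the surrogate idea, with a plain smoothing spline standing in for the Bayesian adaptive splines of the paper; the data and smoothing parameter are illustrative:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Fit a cheap approximation to (noisy) simulator output along one input
# slice, then reuse it for instant evaluation instead of a full run.
x = np.linspace(0, 1, 40)                     # a 1-D input slice
y = np.sin(6 * x) + 0.1 * np.random.default_rng(2).normal(size=40)
surrogate = UnivariateSpline(x, y, s=0.5)     # smoothing parameter s
print(float(surrogate(0.37)))                 # evaluate without the simulator
```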
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
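A minimal sketch of a hierarchical error-budget roll-up under the common assumption of independent error sources combined root-sum-square; the budget structure and numbers are illustrative:

```python
import numpy as np

# Component allocations combine root-sum-square up the hierarchy,
# assuming independent error sources (structure illustrative).
budget = {
    "pointing": {"sensor_noise": 2.0, "thermal_drift": 1.5,
                 "structural": {"jitter": 0.8, "misalignment": 0.6}},
}

def rss(node):
    terms = [rss(v) if isinstance(v, dict) else v for v in node.values()]
    return float(np.sqrt(np.sum(np.square(terms))))

print(f"total pointing error: {rss(budget['pointing']):.2f} (same units)")
```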
Modeling the Nab Experiment Electronics in SPICE
NASA Astrophysics Data System (ADS)
Blose, Alexander; Crawford, Christopher; Sprow, Aaron; Nab Collaboration
2017-09-01
The goal of the Nab experiment is to measure the neutron decay coefficients a, the electron-neutrino correlation, and b, the Fierz interference term, to precisely test the Standard Model and to probe for Beyond the Standard Model physics. In this experiment, protons from the beta decay of the neutron are guided through a magnetic field into a silicon detector. Event reconstruction will be achieved via time-of-flight measurement for the proton and direct measurement of the coincident electron energy in highly segmented silicon detectors, so the amplification circuitry needs to preserve fast timing, provide good amplitude resolution, and be packaged in a high-density format. We have designed a SPICE simulation to model the full electronics chain for the Nab experiment in order to understand the contributions of each stage and optimize them for performance. Additionally, analytic solutions for each of the components have been determined where available. We will present a comparison of the output from the SPICE model, the analytic solutions, and empirically determined data.
Tilmes, S.; Mills, Mike; Niemeier, Ulrike; ...
2015-01-15
A new Geoengineering Model Intercomparison Project (GeoMIP) experiment "G4 specified stratospheric aerosols" (short name: G4SSA) is proposed to investigate the impact of stratospheric aerosol geoengineering on atmosphere, chemistry, dynamics, climate, and the environment. In contrast to the earlier G4 GeoMIP experiment, which requires an emission of sulfur dioxide (SO₂) into the model, a prescribed aerosol forcing file is provided to the community, to be consistently applied to future model experiments between 2020 and 2100. This stratospheric aerosol distribution, with a total burden of about 2 Tg S, has been derived using the ECHAM5-HAM microphysical model, based on a continuous annual tropical emission of 8 Tg SO₂ yr⁻¹. A ramp-up of geoengineering in 2020 and a ramp-down in 2070 over a period of 2 years are included in the distribution, while a background aerosol burden should be used for the last 3 decades of the experiment. The performance of this experiment using climate and chemistry models in a multi-model comparison framework will allow us to better understand the impact of geoengineering and its abrupt termination after 50 years in a changing environment. The zonal and monthly mean stratospheric aerosol input data set is available at https://www2.acd.ucar.edu/gcm/geomip-g4-specified-stratospheric-aerosol-data-set.
Food Cravings Consume Limited Cognitive Resources
ERIC Educational Resources Information Center
Kemps, Eva; Tiggemann, Marika; Grigg, Megan
2008-01-01
Using Tiffany's (1990) cognitive model of drug use and craving as a theoretical basis, the present experiments investigated whether cravings for food expend limited cognitive resources. Cognitive performance was assessed by simple reaction time (Experiment 1) and an established measure of working memory capacity, the operation span task…
Retrospective Attention Interacts with Stimulus Strength to Shape Working Memory Performance.
Wildegger, Theresa; Humphreys, Glyn; Nobre, Anna C
2016-01-01
Orienting attention retrospectively to selected contents in working memory (WM) influences performance. A separate line of research has shown that stimulus strength shapes perceptual representations. There is little research on how stimulus strength during encoding shapes WM performance, and how effects of retrospective orienting might vary with changes in stimulus strength. We explore these questions in three experiments using a continuous-recall WM task. In Experiment 1 we show that the benefit of cueing spatial attention retrospectively during WM maintenance (retrocueing) varies according to stimulus contrast during encoding. Retrocueing effects emerge for supraliminal but not sub-threshold stimuli. However, once stimuli are supraliminal, performance is no longer influenced by stimulus contrast. In Experiments 2 and 3 we used a mixture-model approach to examine how different sources of error in WM are affected by contrast and retrocueing. For high-contrast stimuli (Experiment 2), retrocues increased the precision of successfully remembered items. For low-contrast stimuli (Experiment 3), retrocues decreased the probability of mistaking a target for distracters. These results suggest that the processes by which retrospective attentional orienting shapes WM performance depend on the quality of WM representations, which in turn depends on stimulus strength during encoding.
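A minimal sketch of the mixture-model approach to continuous-recall errors: a von Mises memory component plus a uniform guessing component, fitted by maximum likelihood (the parameters are illustrative, not the study's estimates):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

# Responses are either noisy memories of the target (von Mises) or
# guesses (uniform); g is the guess rate, kappa the precision.
rng = np.random.default_rng(3)
errors = np.concatenate([vonmises.rvs(8.0, size=300, random_state=rng),
                         rng.uniform(-np.pi, np.pi, 60)])

def nll(params):
    g, kappa = params
    like = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
    return -np.sum(np.log(like))

fit = minimize(nll, x0=[0.2, 4.0], bounds=[(1e-3, 0.999), (0.1, 100)])
print(fit.x)   # recovers guess rate ~0.17 and precision ~8
```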
A flexible influence of affective feelings on creative and analytic performance.
Huntsinger, Jeffrey R; Ray, Cara
2016-09-01
Considerable research shows that positive affect improves performance on creative tasks and negative affect improves performance on analytic tasks. The present research entertained the idea that affective feelings have flexible, rather than fixed, effects on cognitive performance. Consistent with the idea that positive and negative affect signal the value of accessible processing inclinations, the influence of affective feelings on performance on analytic or creative tasks was found to be flexibly responsive to the relative accessibility of different styles of processing (i.e., heuristic vs. systematic, global vs. local). When a global processing orientation was accessible, happy participants generated more creative uses for a brick (Experiment 1), successfully solved more remote associates and insight problems (Experiment 2), and displayed broader categorization (Experiment 3) than those in sad moods. When a local processing orientation was accessible, this pattern reversed. When a heuristic processing style was accessible, happy participants were more likely to commit the conjunction fallacy (Experiment 3) and showed less pronounced anchoring effects (Experiment 4) than sad participants. When a systematic processing style was accessible, this pattern reversed. Implications of these results for relevant affect-cognition models are discussed.
Linear model for fast background subtraction in oligonucleotide microarrays.
Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico
2009-11-16
One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters, such that minimization can be performed through linear algebra. The model incorporates two effects: (1) correlated intensities between neighboring features on the chip, and (2) sequence-dependent affinities for non-specific hybridization, fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments, as well as between the free-energy parameters and their counterparts in aqueous solution, indicate that the model captures a significant part of the underlying physical chemistry.
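A minimal sketch of the key computational point: because the cost is quadratic in the fitting parameters, the background estimate reduces to one linear least-squares solve (the features and coefficients below are illustrative):

```python
import numpy as np

# Model each probe's background as a smooth chip-position trend plus a
# sequence-affinity term; solving the normal equations minimizes the
# quadratic cost exactly (illustrative design matrix).
rng = np.random.default_rng(5)
n = 500
x, y = rng.uniform(0, 1, (2, n))                # probe positions on the chip
affinity = rng.normal(0, 1, n)                  # nearest-neighbor style score
bg = 2.0 + 1.5 * x - 0.8 * y + 0.6 * affinity + rng.normal(0, 0.05, n)

A = np.column_stack([np.ones(n), x, y, affinity])
coef, *_ = np.linalg.lstsq(A, bg, rcond=None)   # one linear-algebra solve
print(np.round(coef, 2))                        # ~[2.0, 1.5, -0.8, 0.6]
```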
Wang, Ruifei; Koppram, Rakesh; Olsson, Lisbeth; Franzén, Carl Johan
2014-11-01
Fed-batch simultaneous saccharification and fermentation (SSF) is a feasible option for bioethanol production from lignocellulosic raw materials at high substrate concentrations. In this work, a segregated kinetic model was developed for simulation of fed-batch simultaneous saccharification and co-fermentation (SSCF) of steam-pretreated birch, using substrate, enzyme and cell feeds. The model takes into account the dynamics of the cellulase-cellulose system and the cell population during SSCF, and the effects of pre-cultivation of yeast cells on fermentation performance. The model was cross-validated against experiments using different feed schemes. It could predict fermentation performance and explain observed differences between measured total yeast cells and dividing cells very well. The reproducibility of the experiments and the cell viability were significantly better in fed-batch than in batch SSCF at 15% and 20% total WIS contents. The model can be used for simulation of fed-batch SSCF and optimization of feed profiles.
Schulz, Christian M; Schneider, Erich; Kohlbecher, Stefan; Hapfelmeier, Alexander; Heuser, Fabian; Wagner, Klaus J; Kochs, Eberhard F; Schneider, Gerhard
2014-10-01
Development of accurate Situation Awareness (SA) depends on experience and may be impaired during excessive workload. In order to gain adequate SA for decision making and performance, anaesthetists need to distribute visual attention effectively. We therefore hypothesized that in more experienced anaesthetists, performance is better and the increase in physiological workload is smaller during critical incidents. Additionally, we investigated the relation between physiological workload indicators and the distribution of visual attention. In fifteen anaesthetists, the increase in pupil size and heart rate was assessed in the course of a simulated critical incident. Simulator log files were used for performance assessment. An eye-tracking device (EyeSeeCam) provided data about the anaesthetists' distribution of visual attention. Performance was assessed as time until definitive treatment. t tests and multivariate generalized linear models (MANOVA) were used for retrospective statistical analysis. Mean pupil diameter increase was 8.1% (SD ± 4.3) in the less experienced and 15.8% (±10.4) in the more experienced subjects (p = 0.191). Mean heart rate increase was 10.2% (±6.7) and 10.5% (±8.3, p = 0.956), respectively. Performance did not depend on experience. Pupil diameter and heart rate increases were associated with a shift of visual attention from monitoring towards manual tasks (not significant). For the first time, the following four variables were assessed simultaneously: physiological workload indicators, performance, experience, and distribution of visual attention between monitoring and manual tasks. However, we were unable to detect significant interactions between these variables. This experimental model could prove valuable in the investigation of gaining and maintaining SA in the operating theatre.
Loss of feed flow, steam generator tube rupture and steam line break thermohydraulic experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendler, O J; Takeuchi, K; Young, M Y
1986-10-01
The Westinghouse Model Boiler No. 2 (MB-2) steam generator test model at the Engineering Test Facility in Tampa, Florida, was reinstrumented and modified for performing a series of tests simulating steam generator accident transients. The transients simulated were: loss of feed flow, steam generator tube rupture, and steam line break events. This document presents a description of (1) the model boiler and the associated test facility, (2) the tests performed, and (3) the analyses of the test results.
Framework for assessing key variable dependencies in loose-abrasive grinding and polishing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.S.; Aikens, D.M.; Brown, N.J.
1995-12-01
This memo describes a framework for identifying all key variables that determine the figuring performance of loose-abrasive lapping and polishing machines. This framework is intended as a tool for prioritizing R&D issues, assessing the completeness of process models and experimental data, and for providing a mechanism to identify any assumptions in analytical models or experimental procedures. Future plans for preparing analytical models or performing experiments can refer to this framework in establishing the context of the work.
Modeling of impulsive propellant reorientation
NASA Technical Reports Server (NTRS)
Hochstein, John I.; Patag, Alfredo E.; Chato, David J.
1988-01-01
The impulsive propellant reorientation process is modeled using the Energy Calculations for Liquid Propellants in a Space Environment (ECLIPSE) code. A brief description of the process and the computational model is presented. Code validation is documented via comparison to experimentally derived data for small-scale tanks. Predictions of reorientation performance are presented for two tanks designed for use in flight experiments and for a proposed full-scale OTV tank. A new dimensionless parameter is developed to correlate reorientation performance in geometrically similar tanks. Its success is demonstrated.
NASA Technical Reports Server (NTRS)
Nowinski, Jessica Lang; Dismukes, Key R.
2005-01-01
Two experiments examined whether prospective memory performance is influenced by contextual cues. In our automatic activation model, any information available at encoding and retrieval should aid recall of the prospective task. The first experiment demonstrated an effect of the ongoing task context; performance was better when information about the ongoing task present at retrieval was available at encoding. Performance was also improved by a strong association between the prospective memory target as it was presented at retrieval and the intention as it was encoded. Experiment 2 demonstrated boundary conditions of the ongoing task context effect, which implicate the association between the ongoing and prospective tasks formed at encoding as the source of the context effect. The results of this study are consistent with predictions based on automatic activation of intentions.
ERIC Educational Resources Information Center
Rozell, E. J.; Gardner, W. L., III
1999-01-01
A model of the intrapersonal processes impacting computer-related performance was tested using data from 75 manufacturing employees in a computer training course. Gender, computer experience, and attributional style were predictive of computer attitudes, which were in turn related to computer efficacy, task-specific performance expectations, and…
NASA Astrophysics Data System (ADS)
Echevin, V.; Levy, M.; Memery, L.
The assimilation of two-dimensional sea color data fields into a three-dimensional coupled dynamical-biogeochemical model is performed using a 4DVAR algorithm. The biogeochemical model includes descriptions of nitrates, ammonium, phytoplankton, zooplankton, detritus and dissolved organic matter. A subset of the biogeochemical model's poorly known parameters (for example, phytoplankton growth, mortality, and grazing) is optimized by minimizing a cost function measuring the misfit between the observations and the model trajectory. Twin experiments are performed with an eddy-resolving model of 5 km resolution in an academic configuration. Starting from oligotrophic conditions, an initially unstable baroclinic anticyclone splits into several eddies. Strong vertical velocities advect nitrates into the euphotic zone and generate a phytoplankton bloom. Biogeochemical parameters are perturbed to generate surface pseudo-observations of chlorophyll, which are assimilated into the model in order to retrieve the correct parameter perturbations. The impact of the type of measurement (quasi-instantaneous, daily mean, weekly mean) on the retrieved set of parameters is analysed. Impacts of additional subsurface measurements and of errors in the circulation are also presented.
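The core of such a twin experiment is a cost-function minimization over a handful of model parameters. The sketch below illustrates the idea on a zero-dimensional toy phytoplankton model, assuming a plain least-squares misfit and a Nelder-Mead optimizer; it is not the authors' 4DVAR system, and all parameter names and values are illustrative.

```python
# Twin-experiment sketch: perturb parameters, generate pseudo-observations,
# then retrieve the perturbation by minimizing a model-observation misfit.
import numpy as np
from scipy.optimize import minimize

def run_model(params, n_steps=100, dt=0.1, p0=0.1):
    """Forward-run a toy 0-D phytoplankton model; returns the trajectory."""
    growth, mortality = params
    p, traj = p0, []
    for _ in range(n_steps):
        p += dt * (growth * p - mortality * p**2)  # logistic-like dynamics
        traj.append(p)
    return np.array(traj)

# "True" (perturbed) parameters generate the pseudo-observations.
true_params = np.array([0.5, 0.3])
obs = run_model(true_params)

def cost(params):
    """Least-squares misfit between model trajectory and pseudo-observations."""
    return np.sum((run_model(params) - obs) ** 2)

# Start from a wrong first guess and retrieve the correct parameters.
first_guess = np.array([0.3, 0.5])
result = minimize(cost, first_guess, method="Nelder-Mead")
print("retrieved parameters:", result.x)  # should approach true_params
```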
NASA Astrophysics Data System (ADS)
Šarolić, A.; Živković, Z.; Reilly, J. P.
2016-06-01
The electrostimulation excitation threshold of a nerve depends on temporal and frequency parameters of the stimulus. These dependences were investigated in terms of: (1) strength-duration (SD) curve for a single monophasic rectangular pulse, and (2) frequency dependence of the excitation threshold for a continuous sinusoidal current. Experiments were performed on the single-axon measurement setup based on Lumbricus terrestris having unmyelinated nerve fibers. The simulations were performed using the well-established SENN model for a myelinated nerve. Although the unmyelinated experimental model differs from the myelinated simulation model, both refer to a single axon. Thus we hypothesized that the dependence on temporal and frequency parameters should be very similar. The comparison was made possible by normalizing each set of results to the SD time constant and the rheobase current of each model, yielding the curves that show the temporal and frequency dependencies regardless of the model differences. The results reasonably agree, suggesting that this experimental setup and method of comparison with SENN model can be used for further studies of waveform effect on nerve excitability, including unmyelinated neurons.
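The comparison hinges on the normalization step: expressing threshold current in units of each preparation's rheobase and duration in units of its SD time constant collapses otherwise different curves onto one. A minimal sketch, assuming a textbook Lapicque (hyperbolic) strength-duration law as a stand-in for both models, with illustrative rheobase and time-constant values:

```python
# Normalizing strength-duration curves by rheobase and SD time constant.
import numpy as np

def threshold_current(duration, rheobase, tau):
    """Lapicque strength-duration law: I(t) = I_rh * (1 + tau / t)."""
    return rheobase * (1.0 + tau / duration)

# Two hypothetical preparations with very different absolute parameters.
models = {"unmyelinated (experiment)": (1.2, 0.8),
          "myelinated (SENN-like)": (0.05, 0.1)}

t_norm = np.logspace(-2, 2, 5)  # durations expressed in units of tau
for name, (rheobase, tau) in models.items():
    i_th = threshold_current(t_norm * tau, rheobase, tau)
    # After normalization both curves are I/I_rh = 1 + 1/(t/tau): identical.
    print(name, np.round(i_th / rheobase, 2))
```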
NASA Technical Reports Server (NTRS)
Saha, Hrishikesh; Palmer, Timothy A.
1996-01-01
The Virtual Reality Lab Assistant (VRLA) demonstration model is intended for engineering and materials science experiments to be performed by undergraduate and graduate students as a pre-lab simulation experience. It will help students get a preview of how to use the lab equipment and run experiments without using the actual lab hardware and software. The quality of the time available for laboratory experiments can be significantly improved through the use of virtual reality technology.
Limb radiance inversion radiometer. [Nimbus 6 satellite
NASA Technical Reports Server (NTRS)
Drozewski, R. W.; Gille, J. C.; Thomas, J. R.; Twohig, K. J.; Boyle, R. R.
1975-01-01
Engineering and scientific objectives of the LRIR experiment are described along with system requirements, subassemblies, and experiment operation. The mechanical, electrical, and thermal interfaces between the LRIR experiment and the Nimbus F spacecraft are defined. The protoflight model qualification and acceptance test program is summarized. Test data is presented in tables to give an overall view of each test parameter and possible trends of the performance of the LRIR experiment. Conclusions and recommendations are included.
Performance of GeantV EM Physics Models
NASA Astrophysics Data System (ADS)
Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.
2017-10-01
The recent progress in parallel hardware architectures with deeper vector pipelines or many-core technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains from propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of the geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.
NASA Technical Reports Server (NTRS)
Ahmad, Anees
1990-01-01
The development of an in-house integrated optical performance modeling capability at MSFC is described. This performance model takes into account the effects of structural and thermal distortions, as well as metrology errors in optical surfaces, to predict the performance of large and complex optical systems, such as the Advanced X-Ray Astrophysics Facility. The necessary hardware and software were identified to implement an integrated optical performance model. A number of design, development, and testing tasks were supported, including identification of the debonded mirror pad and rebuilding of the Technology Mirror Assembly. Over 300 samples of Zerodur were prepared in different sizes and shapes for acid etching, coating, and polishing experiments to characterize the subsurface damage and stresses produced by the grinding and polishing operations.
Witzenburg, Colleen M.; Dhume, Rohit Y.; Shah, Sachin B.; Korenczuk, Christopher E.; Wagner, Hallie P.; Alford, Patrick W.; Barocas, Victor H.
2017-01-01
The ascending thoracic aorta is poorly understood mechanically, especially its risk of dissection. To make better predictions of dissection risk, more information about the multidimensional failure behavior of the tissue is needed, and this information must be incorporated into an appropriate theoretical/computational model. Toward the creation of such a model, uniaxial, equibiaxial, peel, and shear lap tests were performed on healthy porcine ascending aorta samples. Uniaxial and equibiaxial tests showed anisotropy with greater stiffness and strength in the circumferential direction. Shear lap tests showed catastrophic failure at shear stresses (150–200 kPa) much lower than uniaxial tests (750–2500 kPa), consistent with the low peel tension (∼60 mN/mm). A novel multiscale computational model, including both prefailure and failure mechanics of the aorta, was developed. The microstructural part of the model included contributions from a collagen-reinforced elastin sheet and interlamellar connections representing fibrillin and smooth muscle. Components were represented as nonlinear fibers that failed at a critical stretch. Multiscale simulations of the different experiments were performed, and the model, appropriately specified, agreed well with all experimental data, representing a uniquely complete structure-based description of aorta mechanics. In addition, our experiments and model demonstrate the very low strength of the aorta in radial shear, suggesting an important possible mechanism for aortic dissection. PMID:27893044
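The microstructural ingredient described above is a nonlinear fiber that carries load until a critical stretch and then fails. The sketch below illustrates only that ingredient; the exponential force law and all parameter values are assumptions for illustration, not the paper's calibrated multiscale model.

```python
# A nonlinear fiber with abrupt failure at a critical stretch. Summing many
# such fibers produces the softening and catastrophic failure seen in tests.
import numpy as np

def fiber_force(stretch, stiffness=1.0, nonlinearity=4.0, critical_stretch=1.4):
    """Exponential fiber law; zero force once the critical stretch is passed."""
    if stretch >= critical_stretch:
        return 0.0  # fiber has failed and carries no load
    return stiffness * (np.exp(nonlinearity * (stretch - 1.0)) - 1.0)

for s in np.linspace(1.0, 1.6, 7):
    print(f"stretch {s:.1f}: force {fiber_force(s):.3f}")
```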
NASA Astrophysics Data System (ADS)
Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.
2010-12-01
Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are “archivable”, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess feasibility, performance overhead costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.
De Kauwe, Martin G; Medlyn, Belinda E; Zaehle, Sönke; Walker, Anthony P; Dietze, Michael C; Wang, Ying-Ping; Luo, Yiqi; Jain, Atul K; El-Masri, Bassil; Hickler, Thomas; Wårlind, David; Weng, Ensheng; Parton, William J; Thornton, Peter E; Wang, Shusen; Prentice, I Colin; Asao, Shinichi; Smith, Benjamin; McCarthy, Heather R; Iversen, Colleen M; Hanson, Paul J; Warren, Jeffrey M; Oren, Ram; Norby, Richard J
2014-01-01
Elevated atmospheric CO2 concentration (eCO2) has the potential to increase vegetation carbon storage if increased net primary production causes increased long-lived biomass. Model predictions of eCO2 effects on vegetation carbon storage depend on how allocation and turnover processes are represented. We used data from two temperate forest free-air CO2 enrichment (FACE) experiments to evaluate representations of allocation and turnover in 11 ecosystem models. Observed eCO2 effects on allocation were dynamic. Allocation schemes based on functional relationships among biomass fractions that vary with resource availability were best able to capture the general features of the observations. Allocation schemes based on constant fractions or resource limitations performed less well, with some models having unintended outcomes. Few models represent turnover processes mechanistically and there was wide variation in predictions of tissue lifespan. Consequently, models did not perform well at predicting eCO2 effects on vegetation carbon storage. Our recommendations to reduce uncertainty include: use of allocation schemes constrained by biomass fractions; careful testing of allocation schemes; and synthesis of allocation and turnover data in terms of model parameters. Data from intensively studied ecosystem manipulation experiments are invaluable for constraining models and we recommend that such experiments should attempt to fully quantify carbon, water and nutrient budgets. PMID:24844873
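The key contrast above is between allocation schemes with constant fractions and schemes whose fractions respond to resource availability. A toy illustration of the two ideas follows; the functional forms and numbers are assumptions for illustration, not any of the 11 evaluated models.

```python
# Fixed-fraction vs. resource-dependent carbon allocation (toy comparison).
def fixed_allocation(npp):
    """Constant fractions to wood/root/leaf regardless of conditions."""
    return {"wood": 0.4 * npp, "root": 0.3 * npp, "leaf": 0.3 * npp}

def dynamic_allocation(npp, nitrogen_availability):
    """Shift carbon belowground when nutrients are scarce (0..1 scale)."""
    root_frac = 0.2 + 0.4 * (1.0 - nitrogen_availability)
    wood_frac = 0.6 * (1.0 - root_frac)
    leaf_frac = 1.0 - root_frac - wood_frac
    return {"wood": wood_frac * npp, "root": root_frac * npp,
            "leaf": leaf_frac * npp}

# Under eCO2 the fixed scheme scales all pools equally; the dynamic scheme
# re-partitions carbon as nutrient limitation develops.
print(fixed_allocation(npp=1000.0))
print(dynamic_allocation(npp=1000.0, nitrogen_availability=0.4))
```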
Selecting among competing models of electro-optic, infrared camera system range performance
Nichols, Jonathan M.; Hines, James E.; Nichols, James D.
2013-01-01
Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on the Akaike’s Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for some distances other than the specific set for which experimental trials were conducted.
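AIC-based selection of this kind reduces to fitting each candidate model by maximum likelihood and penalizing by parameter count. A minimal sketch with made-up identification data and two illustrative psychometric forms (not the authors' specific range-performance models):

```python
# AIC comparison of two candidate range-performance models on binomial data.
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize

# Hypothetical data: (range_km, n_trials, n_correct).
data = [(1.0, 50, 48), (2.0, 50, 40), (3.0, 50, 29), (4.0, 50, 18)]

def p_exponential(r, a):          # candidate 1: one free parameter
    return np.clip(np.exp(-a * r), 1e-9, 1 - 1e-9)

def p_logistic(r, a, b):          # candidate 2: two free parameters
    return np.clip(1.0 / (1.0 + np.exp(a * (r - b))), 1e-9, 1 - 1e-9)

def neg_log_like(pfun, params):
    return -sum(binom.logpmf(k, n, pfun(r, *params)) for r, n, k in data)

for name, pfun, x0 in [("exponential", p_exponential, [0.3]),
                       ("logistic", p_logistic, [2.0, 2.5])]:
    fit = minimize(lambda p: neg_log_like(pfun, p), x0, method="Nelder-Mead")
    aic = 2 * len(x0) + 2 * fit.fun   # AIC = 2k - 2 ln L; lower is better
    print(f"{name}: AIC = {aic:.1f}")
```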
Malem-Shinitski, Noa; Zhang, Yingzhuo; Gray, Daniel T; Burke, Sara N; Smith, Anne C; Barnes, Carol A; Ba, Demba
2018-04-18
The study of learning in populations of subjects can provide insights into the changes that occur in the brain with aging, drug intervention, and psychiatric disease. We introduce a separable two-dimensional (2D) random field (RF) model for analyzing binary response data acquired during the learning of object-reward associations across multiple days. The method can quantify the variability of performance within a day and across days, and can capture abrupt changes in learning. We apply the method to data from young and aged macaque monkeys performing a reversal-learning task. The method provides an estimate of performance within a day for each age group, and a learning rate across days for each monkey. We find that, as a group, the older monkeys require more trials to learn the object discriminations than do the young monkeys, and that the cognitive flexibility of the younger group is higher. We also use the model estimates of performance as features for clustering the monkeys into two groups. The clustering results in two groups that, for the most part, coincide with those formed by the age groups. Simulation studies suggest that clustering captures inter-individual differences in performance levels. In comparison with generalized linear models, this method is better able to capture the inherent two-dimensional nature of the data and find between group differences. Applied to binary response data from groups of individuals performing multi-day behavioral experiments, the model discriminates between-group differences and identifies subgroups. Copyright © 2018. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Justin; Hund, Lauren
2017-02-01
Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and we propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
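The likelihood-scaling idea can be sketched concretely: an i.i.d. Gaussian log-likelihood over the velocity residuals is tempered by an effective sample size rather than by modeling the full autocorrelation. The AR(1)-based ESS formula and all numbers below are illustrative assumptions, not the paper's implementation.

```python
# Tempering an i.i.d. Gaussian log-likelihood by an effective sample size.
import numpy as np

def effective_sample_size(residuals):
    """ESS for an AR(1)-like residual series: n * (1 - rho) / (1 + rho)."""
    r = residuals - residuals.mean()
    rho = np.corrcoef(r[:-1], r[1:])[0, 1]  # lag-1 autocorrelation
    n = len(residuals)
    return n * (1.0 - rho) / (1.0 + rho)

def scaled_log_likelihood(sim_velocity, obs_velocity, sigma):
    resid = obs_velocity - sim_velocity
    loglik = -0.5 * np.sum((resid / sigma) ** 2)   # i.i.d. Gaussian kernel
    ess = effective_sample_size(resid)
    return (ess / len(resid)) * loglik             # temper by ESS / n

# Example with a strongly autocorrelated residual trace (random walk).
np.random.seed(0)
t = np.linspace(0.0, 1.0, 500)
obs = np.sin(8 * t) + 0.01 * np.cumsum(np.random.randn(500))
sim = np.sin(8 * t)
print("scaled log-likelihood:", scaled_log_likelihood(sim, obs, sigma=0.05))
```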
User Preference-Based Dual-Memory Neural Model With Memory Consolidation Approach.
Nasir, Jauwairia; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan
2018-06-01
Memory modeling has been a popular topic of research for improving the performance of autonomous agents in cognition related problems. Apart from learning distinct experiences correctly, significant or recurring experiences are expected to be learned better and be retrieved easier. In order to achieve this objective, this paper proposes a user preference-based dual-memory adaptive resonance theory network model, which makes use of a user preference to encode memories with various strengths and to learn and forget at various rates. Over a period of time, memories undergo a consolidation-like process at a rate proportional to the user preference at the time of encoding and the frequency of recall of a particular memory. Consolidated memories are easier to recall and are more stable. This dual-memory neural model generates distinct episodic memories and a flexible semantic-like memory component. This leads to an enhanced retrieval mechanism of experiences through two routes. The simulation results are presented to evaluate the proposed memory model based on various kinds of cues over a number of trials. The experimental results on Mybot are also presented. The results verify that not only are distinct experiences learned correctly but also that experiences associated with higher user preference and recall frequency are consolidated earlier. Thus, these experiences are recalled more easily relative to the unconsolidated experiences.
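The encode/decay/consolidate cycle described above can be caricatured in a few lines. This toy sketch only conveys the qualitative rules (encoding strength tied to user preference, consolidation driven by preference and recall frequency, slower forgetting once consolidated); the update rules and constants are invented for illustration and are not the paper's ART-based architecture.

```python
# Toy preference-weighted memory with recall-driven consolidation.
class Memory:
    def __init__(self, content, preference):
        self.content = content
        self.strength = preference      # encoding strength ~ user preference
        self.preference = preference
        self.recalls = 0
        self.consolidated = False

    def step(self):
        # Consolidated memories decay more slowly than unconsolidated ones.
        decay = 0.01 if self.consolidated else 0.05 * (1.0 - self.preference)
        self.strength = max(0.0, self.strength - decay)
        # Consolidation rate grows with preference and recall frequency.
        if not self.consolidated and self.preference * (1 + self.recalls) > 1.5:
            self.consolidated = True

    def recall(self):
        self.recalls += 1
        self.strength = min(1.0, self.strength + 0.1)  # recall reinforces

m = Memory("robot fetched coffee", preference=0.9)
for _ in range(5):
    m.recall()
    m.step()
print(m.consolidated, round(m.strength, 2))  # high-preference: consolidated
```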
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
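The design loop above pairs an information-theoretic utility with an annealing-type optimizer. The sketch below assumes univariate Gaussian prediction and observation distributions (so the divergence has a closed form) and toy mean/variance functions; it illustrates the optimization pattern, not the paper's joint-structure application.

```python
# Choosing an experiment input by annealing over an information-theoretic
# distance between model-prediction and observation distributions.
import numpy as np
from scipy.optimize import dual_annealing

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """Closed-form KL(N_p || N_q) for univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def model_prediction(x):   # toy response surface and prediction variance
    return np.sin(x) + 0.10 * x, 0.05

def experiment_obs(x):     # assumed observation distribution at input x
    return np.sin(x) + 0.15 * x, 0.10

def neg_utility(x):
    x = float(x[0])
    mu_m, var_m = model_prediction(x)
    mu_e, var_e = experiment_obs(x)
    return -kl_gaussian(mu_m, var_m, mu_e, var_e)  # negate to maximize

result = dual_annealing(neg_utility, bounds=[(0.0, 10.0)])
print("most informative input setting:", result.x)
```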
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, T.Y.; Bentz, J.; Simpson, R.
1997-02-01
The objective of the Lower Head Failure (LHF) Experiment Program is to experimentally investigate and characterize the failure of the reactor vessel lower head due to thermal and pressure loads under severe accident conditions. The experiment is performed using 1/5-scale models of a typical PWR pressure vessel. Experiments are performed for various internal pressure and imposed heat flux distributions with and without instrumentation guide tube penetrations. The experimental program is complemented by a modest modeling program based on the application of vessel creep rupture codes developed in the TMI Vessel Investigation Project. The first three experiments under the LHF program investigated the creep rupture of simulated reactor pressure vessels without penetrations. The heat flux distributions for the three experiments are uniform (LHF-1), center-peaked (LHF-2), and side-peaked (LHF-3), respectively. For all the experiments, appreciable vessel deformation was observed to initiate at vessel wall temperatures above 900K and the vessel typically failed at approximately 1000K. The size of failure was always observed to be smaller than the heated region. For experiments with non-uniform heat flux distributions, failure typically occurs in the region of peak temperature. A brief discussion of the effect of penetration is also presented.
Metal plasticity and ductile fracture modeling for cast aluminum alloy parts
Lee, Jinwoo; Kim, Se-Jong; Park, Hyeonil; ...
2018-01-06
In this study, plasticity and ductile fracture properties were characterized by performing various tension, shear, and compression tests. A series of 10 experiments were performed using notched round bars, flat-grooved plates, in-plane shear plates, and cylindrical bars. Two cast aluminum alloys used in automotive suspension systems were selected. Plasticity modelling was performed and the results were compared with the experimental and corresponding simulation results; further, the relationships among the stress triaxiality, Lode angle parameter, and equivalent plastic strain at the onset of failure were determined to calibrate a ductile fracture model. Finally, the proposed ductile fracture model shows good agreement with experimental results.
Experience modulates motor imagery-based brain activity.
Kraeutner, Sarah N; McWhinney, Sean R; Solomon, Jack P; Dithurbide, Lori; Boe, Shaun G
2018-05-01
Whether or not brain activation during motor imagery (MI), the mental rehearsal of movement, is modulated by experience (i.e. skilled performance, achieved through long-term practice) remains unclear. Specifically, MI is generally associated with diffuse activation patterns that closely resemble novice physical performance, which may be attributable to a lack of experience with the task being imagined vs. being a distinguishing feature of MI. We sought to examine how experience modulates brain activity driven via MI, implementing a within- and between-group design to manipulate experience across tasks as well as expertise of the participants. Two groups of 'experts' (basketball/volleyball athletes) and 'novices' (recreational controls) underwent magnetoencephalography (MEG) while performing MI of four multi-articular tasks, selected to ensure that the degree of experience that participants had with each task varied. Source-level analysis was applied to MEG data and linear mixed effects modelling was conducted to examine task-related changes in activity. Within- and between-group comparisons were completed post hoc and difference maps were plotted. Brain activation patterns observed during MI of tasks for which participants had a low degree of experience were more widespread and bilateral (i.e. within-groups), with limited differences observed during MI of tasks for which participants had similar experience (i.e. between-groups). Thus, we show that brain activity during MI is modulated by experience; specifically, that novice performance is associated with the additional recruitment of regions across both hemispheres. Future investigations of the neural correlates of MI should consider prior experience when selecting the task to be performed. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Probing flavor models with 76Ge-based experiments on neutrinoless double-β decay
NASA Astrophysics Data System (ADS)
Agostini, Matteo; Merle, Alexander; Zuber, Kai
2016-04-01
The physics impact of a staged approach for double-β decay experiments based on 76Ge is studied. The scenario considered relies on realistic time schedules envisioned by the Gerda and the Majorana collaborations, which are jointly working towards the realization of a future larger scale 76Ge experiment. Intermediate stages of the experiments are conceived to perform quasi background-free measurements, and different data sets can be reliably combined to maximize the physics outcome. The sensitivity for such a global analysis is presented, with focus on how neutrino flavor models can be probed already with preliminary phases of the experiments. The synergy between theory and experiment yields strong benefits for both sides: the model predictions can be used to sensibly plan the experimental stages, and results from intermediate stages can be used to constrain whole groups of theoretical scenarios. This strategy clearly generates added value to the experimental efforts, while at the same time allowing valuable physics results to be achieved as early as possible.
Evaluation of an imputed pitch velocity model of the auditory kappa effect.
Henry, Molly J; McAuley, J Devin
2009-04-01
Three experiments evaluated an imputed pitch velocity model of the auditory kappa effect. Listeners heard 3-tone sequences and judged the timing of the middle (target) tone relative to the timing of the 1st and 3rd (bounding) tones. Experiment 1 held pitch constant but varied the time (T) interval between bounding tones (T = 728, 1,000, or 1,600 ms) in order to establish baseline performance levels for the 3 values of T. Experiments 2 and 3 combined the values of T tested in Experiment 1 with a pitch manipulation in order to create fast (8 semitones/728 ms), medium (8 semitones/1,000 ms), and slow (8 semitones/1,600 ms) velocity conditions. Consistent with an auditory motion hypothesis, distortions in perceived timing were larger for fast than for slow velocity conditions for both ascending sequences (Experiment 2) and descending sequences (Experiment 3). Overall, results supported the proposed imputed pitch velocity model of the auditory kappa effect. (c) 2009 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Kelly, Jeff; Betts, Juan Fernando; Fuller, Chris
2000-01-01
The normal impedance of perforated-plate acoustic liners, including the effect of bias flow, was studied. Two impedance models were developed by modeling the internal flows of perforate orifices as infinite tubes, with the inclusion of end corrections to handle finite-length effects. These models assumed incompressible and compressible flow, respectively, between the far field and the perforate orifice. The incompressible model was used to predict impedance results for perforated plates with percent open areas ranging from 5% to 15%. The predicted resistance results showed better agreement with experiments for the higher percent open area samples. The agreement also tended to deteriorate as bias flow was increased. For perforated plates with percent open areas ranging from 1% to 5%, the compressible model was used to predict impedance results. The model predictions were closer to the experimental resistance results for the 2% to 3% open area samples. The predictions tended to deteriorate as bias flow was increased. The reactance results were well predicted by the models for the higher percent open areas, but deteriorated as the percent open area was lowered (5%) and bias flow was increased. The incompressible model was then fitted to the experimental database. The fit was performed using an optimization routine that found the optimal set of multiplicative coefficients for the non-dimensional groups, minimizing the least-squares slope error between predictions and experiments. The result of the fit indicated that terms not associated with bias flow required a greater degree of correction than the terms associated with the bias flow. The fitted model improved agreement with experiments by nearly 15% for the low percent open area (5%) samples when compared to the unfitted model. The fitted model and the unfitted model performed equally well for the higher percent open areas (10% and 15%).
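The fitting step has the familiar structure of a least-squares coefficient calibration: multiplicative coefficients on the model's nondimensional groups are tuned against measurements. The sketch below uses a made-up two-group resistance form and hypothetical data as stand-ins for the actual perforate impedance model and database.

```python
# Calibrating multiplicative coefficients on nondimensional groups.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data: bias-flow Mach number, open-area ratio, measured R.
mach = np.array([0.00, 0.01, 0.02, 0.03, 0.04])
sigma = np.array([0.05, 0.05, 0.05, 0.05, 0.05])
r_meas = np.array([0.30, 0.52, 0.75, 0.98, 1.22])

def r_model(coeffs, mach, sigma):
    c_visc, c_bias = coeffs  # multipliers on the two nondimensional groups
    viscous_group = 0.015 / sigma     # placeholder no-flow resistance group
    bias_group = mach / sigma         # placeholder bias-flow group
    return c_visc * viscous_group + c_bias * bias_group

def residual(coeffs):
    return r_model(coeffs, mach, sigma) - r_meas

fit = least_squares(residual, x0=[1.0, 1.0])
print("fitted coefficients:", fit.x)
```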
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation
NASA Technical Reports Server (NTRS)
Holt, James B.; Ruf, Joe
1999-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.
Stone, John E.; Hynninen, Antti-Pekka; Phillips, James C.; Schulten, Klaus
2017-01-01
All-atom molecular dynamics simulations of biomolecules provide a powerful tool for exploring the structure and dynamics of large protein complexes within realistic cellular environments. Unfortunately, such simulations are extremely demanding in terms of their computational requirements, and they present many challenges in terms of preparation, simulation methodology, and analysis and visualization of results. We describe our early experiences porting the popular molecular dynamics simulation program NAMD and the simulation preparation, analysis, and visualization tool VMD to GPU-accelerated OpenPOWER hardware platforms. We report our experiences with compiler-provided autovectorization and compare with hand-coded vector intrinsics for the POWER8 CPU. We explore the performance benefits obtained from unique POWER8 architectural features such as 8-way SMT and its value for particular molecular modeling tasks. Finally, we evaluate the performance of several GPU-accelerated molecular modeling kernels and relate them to other hardware platforms. PMID:29202130
Complexities in Ferret Influenza Virus Pathogenesis and Transmission Models.
Belser, Jessica A; Eckert, Alissa M; Tumpey, Terrence M; Maines, Taronna R
2016-09-01
Ferrets are widely employed to study the pathogenicity, transmissibility, and tropism of influenza viruses. However, inherent variations in inoculation methods, sampling schemes, and experimental designs are often overlooked when contextualizing or aggregating data between laboratories, leading to potential confusion or misinterpretation of results. Here, we provide a comprehensive overview of parameters to consider when planning an experiment using ferrets, collecting data from the experiment, and placing results in context with previously performed studies. This review offers information that is of particular importance for researchers in the field who rely on ferret data but do not perform the experiments themselves. Furthermore, this review highlights the breadth of experimental designs and techniques currently available to study influenza viruses in this model, underscoring the wide heterogeneity of protocols currently used for ferret studies while demonstrating the wealth of information which can benefit risk assessments of emerging influenza viruses. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Some advances in experimentation supporting development of viscoplastic constitutive models
NASA Technical Reports Server (NTRS)
Ellis, J. R.; Robinson, D. N.
1985-01-01
The development of a biaxial extensometer capable of measuring axial, torsion, and diametral strains to near-microstrain resolution at elevated temperatures is discussed. An instrument with this capability was needed to provide experimental support to the development of viscoplastic constitutive models. The advantages gained when torsional loading is used to investigate inelastic material response at elevated temperatures are highlighted. The development of the biaxial extensometer was conducted in two stages. The first involved a series of bench calibration experiments performed at room temperature. The second stage involved a series of in-place calibration experiments conducted at room and elevated temperature. A review of the calibration data indicated that all performance requirements regarding resolution, range, stability, and crosstalk had been met by the subject instrument over the temperature range of interest, 21 C to 651 C. The scope of the in-place calibration experiments was expanded to investigate the feasibility of generating stress relaxation data under torsional loading.
Analyses of 1/15 scale Creare bypass transient experiments. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kmetyk, L.N.; Buxton, L.D.; Cole, R.K. Jr.
1982-09-01
RELAP4 analyses of several 1/15 scale Creare H-series bypass transient experiments have been done to investigate the effect of using different downcomer nodalizations, physical scales, slip models, and vapor fraction donoring methods. Most of the analyses were thermal equilibrium calculations performed with RELAP4/MOD5, but a few such calculations were done with RELAP4/MOD6 and RELAP4/MOD7, which contain improved slip models. In order to estimate the importance of nonequilibrium effects, additional analyses were performed with TRAC-PD2, RELAP5 and the nonequilibrium option of RELAP4/MOD7. The purpose of these studies was to determine whether results from Westinghouse's calculation of the Creare experiments, which were done with a UHI-modified version of SATAN, were sufficient to guarantee SATAN would be conservative with respect to ECC bypass in full-scale plant analyses.
Computer code for analyzing the performance of aquifer thermal energy storage systems
NASA Astrophysics Data System (ADS)
Vail, L. W.; Kincaid, C. T.; Kannberg, L. D.
1985-05-01
A code called the Aquifer Thermal Energy Storage System Simulator (ATESSS) has been developed to analyze the operational performance of ATES systems. The ATESSS code provides the ability to examine the interrelationships among design specifications, general operational strategies, and unpredictable variations in the demand for energy. Users of the code can vary the well field layout, heat exchanger size, and pumping/injection schedule. Unpredictable aspects of supply and demand may also be examined through the use of a stochastic model of selected system parameters. While employing a relatively simple model of the aquifer, the ATESSS code plays an important role in the design and operation of ATES facilities by augmenting the experience provided by the relatively few field experiments and demonstration projects. ATESSS has been used to characterize the effect of different pumping/injection schedules on a hypothetical ATES system and to estimate the recovery at the St. Paul, Minnesota, field experiment.
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
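The standard analytic route to localization accuracy in signal detection theory is a single integral: the probability that the Gaussian response at the target location exceeds all distractor responses. The sketch below shows that textbook computation; it illustrates the kind of closed-form evaluation the extended model enables, not the authors' specific equations.

```python
# M-alternative localization accuracy under equal-variance Gaussian SDT.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def p_correct_localization(d_prime, n_locations):
    """P(target response is the maximum of M responses)."""
    integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (n_locations - 1)
    val, _ = quad(integrand, -10, 10)
    return val

for d in (0.5, 1.0, 2.0):
    print(f"d'={d}: P(correct among 8 locations) = "
          f"{p_correct_localization(d, 8):.3f}")
```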
Abyaneh, M H; Wildman, R D; Ashcroft, I A; Ruiz, P D
2013-11-01
An analysis of the material properties of porcine corneas has been performed. A simple stress relaxation test was performed to determine the viscoelastic properties and a rheological model was built based on the Generalized Maxwell (GM) approach. A validation experiment using nano-indentation showed that an isotropic GM model was insufficient for describing the corneal material behaviour when exposed to a complex stress state. A new technique was proposed for determining the properties, using a combination of nano-indentation experiment, an isotropic and orthotropic GM model and inverse finite element method. The good agreement using this method suggests that this is a promising technique for measuring material properties in vivo and further work should focus on the reliability of the approach in practice. © 2013 Elsevier Ltd. All rights reserved.
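A Generalized Maxwell model reduces to a Prony series for the relaxation function, which can be fitted directly to stress-relaxation data. A minimal sketch with two Maxwell branches follows; the parameter values are illustrative, and the paper's orthotropic extension and inverse finite element step are not reproduced here.

```python
# Fitting a two-branch Prony series to synthetic stress-relaxation data.
import numpy as np
from scipy.optimize import curve_fit

def prony_relaxation(t, g_inf, g1, tau1, g2, tau2):
    """G(t) = G_inf + G1*exp(-t/tau1) + G2*exp(-t/tau2)."""
    return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

np.random.seed(0)
t = np.linspace(0.0, 100.0, 200)
true = (0.2, 0.5, 2.0, 0.3, 30.0)                # illustrative "material"
data = prony_relaxation(t, *true) + 0.005 * np.random.randn(t.size)

params, _ = curve_fit(prony_relaxation, t, data,
                      p0=(0.1, 0.4, 1.0, 0.4, 20.0), maxfev=10000)
print("fitted (G_inf, G1, tau1, G2, tau2):", np.round(params, 3))
```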
Ng, Candy K S; Osuna-Sanchez, Hector; Valéry, Eric; Sørensen, Eva; Bracewell, Daniel G
2012-06-15
An integrated experimental and modeling approach for the design of high productivity protein A chromatography is presented to maximize productivity in bioproduct manufacture. The approach consists of four steps: (1) small-scale experimentation, (2) model parameter estimation, (3) productivity optimization and (4) model validation with process verification. The integrated use of process experimentation and modeling enables fewer experiments to be performed, and thus minimizes the time and materials required in order to gain process understanding, which is of key importance during process development. The application of the approach is demonstrated for the capture of antibody by a novel silica-based high performance protein A adsorbent named AbSolute. In the example, a series of pulse injections and breakthrough experiments were performed to develop a lumped parameter model, which was then used to find the best design that optimizes the productivity of a batch protein A chromatographic process for human IgG capture. An optimum productivity of 2.9 kg L⁻¹ day⁻¹ for a column of 5 mm diameter and 8.5 cm length was predicted, and subsequently verified experimentally, completing the whole process design approach in only 75 person-hours (or approximately 2 weeks). Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wosnik, Martin; Bachant, Peter
2016-11-01
Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines (diameters of order 1 m) using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.
[Medical Image Registration Method Based on a Semantic Model with Directional Visual Words].
Jin, Yufei; Ma, Meng; Yang, Xin
2016-04-01
Medical image registration is very challenging due to the variety of imaging modalities, variable image quality, wide inter-patient variability, and intra-patient variability with disease progression, together with strict requirements for robustness. Inspired by semantic models, and especially the recent tremendous progress in computer vision tasks under the bag-of-visual-words framework, we set up a novel semantic model to match medical images. Since most medical images have poor contrast and a small dynamic range and involve only intensities, traditional visual word models do not perform very well on them. To build on the advantages of related work, we proposed a novel visual word model named directional visual words, which performs better on medical images, and applied it to medical image registration. In our experiment, the critical anatomical structures were first manually specified by experts. We then adopted the directional visual words, a coarse-to-fine spatial pyramid search strategy, and the k-means algorithm to locate the positions of the key structures accurately. Subsequently, corresponding images were registered using the areas around these positions. The results of experiments performed on real cardiac images showed that our method could achieve high registration accuracy in some specific areas.
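The visual-word step can be sketched as clustering gradient-orientation descriptors into a vocabulary. Below, a plain orientation-histogram descriptor stands in for the proposed directional visual words, and the random image, patch size, and vocabulary size are all illustrative assumptions.

```python
# Building a small visual-word vocabulary from orientation descriptors.
import numpy as np
from scipy.cluster.vq import kmeans2

def orientation_histogram(patch, n_bins=8):
    """Gradient-direction histogram: the 'directional' descriptor."""
    gy, gx = np.gradient(patch.astype(float))
    angles = np.arctan2(gy, gx).ravel()
    weights = np.hypot(gx, gy).ravel()
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi),
                           weights=weights)
    return hist / (hist.sum() + 1e-9)

rng = np.random.default_rng(0)
image = rng.random((128, 128))           # stand-in for a cardiac image
patches = [image[i:i + 16, j:j + 16]     # dense grid of 16x16 patches
           for i in range(0, 112, 8) for j in range(0, 112, 8)]
descriptors = np.array([orientation_histogram(p) for p in patches])

# Cluster descriptors into a 32-word vocabulary; each patch gets a word.
codebook, labels = kmeans2(descriptors, 32, minit="++")
print("vocabulary shape:", codebook.shape, "first assignments:", labels[:10])
```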
Modeling individual differences in working memory performance: a source activation account
Daily, Larry Z.; Lovett, Marsha C.; Reder, Lynne M.
2008-01-01
Working memory resources are needed for processing and maintenance of information during cognitive tasks. Many models have been developed to capture the effects of limited working memory resources on performance. However, most of these models do not account for the finding that different individuals show different sensitivities to working memory demands, and none of the models predicts individual subjects' patterns of performance. We propose a computational model that accounts for differences in working memory capacity in terms of a quantity called source activation, which is used to maintain goal-relevant information in an available state. We apply this model to capture the working memory effects of individual subjects at a fine level of detail across two experiments. This, we argue, strengthens the interpretation of source activation as working memory capacity. PMID:19079561
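The core quantitative idea is that a fixed pool of source activation, interpreted as the individual's capacity, is divided among goal-relevant items, so retrieval degrades as load grows or capacity shrinks. A minimal sketch in that spirit, with an ACT-R-style logistic retrieval rule; the equations and constants here are illustrative, not the paper's fitted model.

```python
# Source activation divided among goal items; retrieval via a logistic rule.
import math

def item_activation(base_level, source_pool_w, n_goal_items):
    """Activation = base level + an equal share of source activation W."""
    return base_level + source_pool_w / n_goal_items

def retrieval_prob(activation, threshold=0.3, noise=0.4):
    """Logistic retrieval probability, as in ACT-R-style models."""
    return 1.0 / (1.0 + math.exp(-(activation - threshold) / noise))

for w in (0.7, 1.0):                  # low- vs high-capacity individual
    for n in (2, 4, 6):               # increasing memory load
        a = item_activation(0.0, w, n)
        print(f"W={w}, load={n}: P(retrieve) = {retrieval_prob(a):.2f}")
```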
Modeling interfacial fracture in Sierra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang
2013-09-01
This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step, and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.
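The basic ingredient of such cohesive surface elements is a traction-separation law. A minimal sketch of the common bilinear form follows; the peak traction and opening displacements are illustrative values, not the Dakota-calibrated parameters from the report.

```python
# Bilinear cohesive traction-separation law: linear rise, linear softening.
def bilinear_traction(opening, t_peak=10.0, d_init=0.001, d_final=0.01):
    """Traction rises to t_peak at d_init, then softens to zero at d_final."""
    if opening <= 0.0:
        return 0.0
    if opening < d_init:                      # elastic loading branch
        return t_peak * opening / d_init
    if opening < d_final:                     # linear softening branch
        return t_peak * (d_final - opening) / (d_final - d_init)
    return 0.0                                # fully failed, traction-free

# The fracture energy is the area under the curve: 0.5 * t_peak * d_final.
for d in (0.0005, 0.001, 0.005, 0.01):
    print(f"opening {d:.4f}: traction {bilinear_traction(d):.2f}")
```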
Re-assessment of feedbacks from biosphere to Indian Monsoon: RegCMv4.4.5.10 simulations
NASA Astrophysics Data System (ADS)
Lodh, A.
2016-12-01
Biosphere feedback plays an important role in the progression of moisture-laden Indian summer monsoon winds over the land regions of India towards the north-western regions during the Indian summer monsoon regime. Hence, to understand the biosphere feedback to the Indian monsoon, numerical experiments for "control" and "design" cases are performed using the ICTP RegCMv4.4.5.10 climate model forced with National Center for Atmospheric Research - II reanalysed fields. The RegCMv4.4.5.10 simulations are performed from 00 GMT 1 November 1999 to 24 GMT 1 January 2011, with combinations of mixed convective parameterization (viz. Emanuel and Grell) schemes over land and ocean, combined with the "University of Washington planetary boundary layer" (UW-PBL) and Holtslag PBL schemes. Validation studies are then performed to confirm correct representation of Indian summer monsoon features, particularly precipitation and soil moisture. Four numerical experiments with LULCC changes in the climate model are then carried out (with the same initial and boundary forcings and time period as the control experiment) to determine the possible influence of vegetation cover (viz. extended desertification, deforestation, and increases in afforestation and irrigated land) on Indian monsoon meteorology. Results from the extended desert and deforestation experiments show that the moisture-laden easterlies from the Bay of Bengal are unable to move towards the land region owing to the formation of anomalous anticyclonic circulations, resulting in a decrease in precipitation over India. The irrigation and afforestation experiments show an increase in precipitation, precipitable water, recycling ratio, and precipitation efficiency, and the development of anomalous cyclonic circulations over Central and North-west India. More details of the results from the numerical experiments will be presented.
Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2015-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences, we performed model sensitivity and uncertainty analyses across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.
MacGregor, James N
2015-10-01
Research on human performance in solving traveling salesman problems typically uses point sets as stimuli, and most models have proposed a processing stage at which stimulus dots are clustered. However, few empirical studies have investigated the effects of clustering on performance. In one recent study, researchers compared the effects of clustered, random, and regular stimuli, and concluded that clustering facilitates performance (Dry, Preiss, & Wagemans, 2012). Another study suggested that these results may have been influenced by the location rather than the degree of clustering (MacGregor, 2013). Two experiments are reported that mark an attempt to disentangle these factors. The first experiment tested several combinations of degree of clustering and cluster location, and revealed mixed evidence that clustering influences performance. In a second experiment, both factors were varied independently, showing that they interact. The results are discussed in terms of the importance of clustering effects, in particular, and perceptual factors, in general, during performance of the traveling salesman problem.
NASA Astrophysics Data System (ADS)
Arellano, A. F., Jr.; Tang, W.
2017-12-01
Assimilating observational data of chemical constituents into a modeling system is a powerful approach for assessing changes in atmospheric composition and estimating associated emissions. However, the results of such chemical data assimilation (DA) experiments are largely subject to various key factors such as: a) a priori information, b) error specification and representation, and c) structural biases in the modeling system. Here we investigate the sensitivity of ensemble-based data assimilation state and emission estimates to these key factors. We focus on the assimilation performance of the Community Earth System Model (CESM)/CAM-Chem with the Data Assimilation Research Testbed (DART) in representing biomass burning plumes in Amazonia during the 2008 fire season. We conduct the following ensemble DA MOPITT CO experiments: 1) use of monthly-average NCAR FINN surface fire emissions, 2) use of daily FINN surface fire emissions, 3) use of daily FINN emissions with climatological injection heights, and 4) use of perturbed FINN emission parameters to represent uncertainties not only in combustion activity but also in combustion efficiency. We show key diagnostics of assimilation performance for these experiments and verify them with available ground-based and aircraft-based measurements.
Making governance work in the health care sector: evidence from a 'natural experiment' in Italy.
Nuti, Sabina; Vola, Federico; Bonini, Anna; Vainieri, Milena
2016-01-01
The Italian Health care System provides universal coverage for comprehensive health services and is mainly financed through general taxation. Since the early 1990s, a strong decentralization policy has been adopted in Italy and the state has gradually ceded its jurisdiction to regional governments, of which there are twenty. These regions now have political, administrative, fiscal and organizational responsibility for the provision of health care. This paper examines the different governance models that the regions have adopted and investigates the performance evaluation systems (PESs) associated with them, focusing on the experience of a network of ten regional governments that share the same PES. The article draws on the wide range of governance models and PESs in order to design a natural experiment. Through an analysis of 14 indicators measured in 2007 and in 2012 for all the regions, the study examines how different performance evaluation models are associated with different health care performances and whether the network-shared PES has made any difference to the results achieved by the regions involved. The initial results support the idea that systematic benchmarking and public disclosure of data are powerful tools to guarantee the balanced and sustained improvement of the health care systems, but only if they are integrated with the regional governance mechanisms.
A Bone Marrow Aspirate and Trephine Simulator.
Yap, Eng Soo; Koh, Pei Lin; Ng, Chin Hin; de Mel, Sanjay; Chee, Yen Lin
2015-08-01
Bone marrow aspirate and trephine (BMAT) biopsy is a commonly performed procedure in hematology-oncology practice. Although complications are uncommon, they can cause significant morbidity and mortality. Simulation models are an excellent tool for teaching novice doctors basic procedural skills before they perform the actual procedure on patients, improving patient safety and well-being. There are no commercial BMAT simulators. This technical report describes the rationale, technical specifications, and construction of a low-cost, easily constructed, reusable BMAT simulator that reproduces the tactile properties of tissue layers, for use as a teaching tool in our resident BMAT simulation course. Preliminary data on learner responses to the simulator were also collected. From April 2013 to November 2013, 32 internal medicine residents underwent the BMAT simulation course. Eighteen (56%) completed the online survey: 11 residents with previous experience performing BMAT and 7 without. Despite the difference in operative experience, both experienced and novice residents agreed or strongly agreed that the model aided their understanding of the BMAT procedure. All agreed or strongly agreed that it enhanced their knowledge of anatomy, and 16 residents (89%) agreed or strongly agreed that the model was a realistic simulator. We present a novel, low-cost, easily constructed, realistic BMAT simulator for training novice doctors to perform BMAT.
Dynamic performances analysis of a real vehicle driving
NASA Astrophysics Data System (ADS)
Abdullah, M. A.; Jamil, J. F.; Salim, M. A.
2015-12-01
Vehicle dynamics concerns the motion of a vehicle arising from acceleration, braking, ride, and handling activities. The dynamic behaviour is determined by the tire, gravitational, and aerodynamic forces acting on the vehicle. This paper presents an analysis of the dynamic performance of a real vehicle. A real driving experiment is conducted to determine the vehicle's roll, pitch, and yaw responses, as well as its longitudinal, lateral, and vertical acceleration. An accelerometer records the vehicle's dynamic response while it is driven on the road. The experiment starts with weighing the vehicle to locate its center of gravity (COG), where the accelerometer sensor is placed for data acquisition (DAQ). A rural route is selected for the experiment and the road conditions along it are characterized for the test. The dynamic performance of the vehicle depends on the road conditions and the driving maneuvers, and the resulting dynamic performance analysis can inform vehicle stability control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ke; Garrett, John; Chen, Guang-Hong
2013-11-15
Purpose: With the recently expanding interest and developments in x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlations with actual human observers need to be confirmed for each new imaging method. This work is an investigation of the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was measured using five types of model observers: the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the contrast required to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers. The NPWEi and PWEi observers did not predict human performance well either, as the slopes of their CD curves tended to be steeper. The CHO generated the best quantitative agreement with the human observers, with its CD curve overlapping theirs. Statistical equivalence between the CHO and humans can be claimed within 11% of the human observer results, including both the disk and lesion detection experiments. Conclusions: The model observer method can be used to accurately represent human observer performance with the stochastic DPC-CT noise for SKE tasks with signal sizes ranging from 8 to 128 pixels. The incorporation of anatomical noise remains to be studied.
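As a concrete illustration of the best-performing observer above, here is a minimal Python sketch of a channelized Hotelling observer. The synthetic white-noise images, the disk signal, and the radial Gaussian channels are placeholders for illustration only; they are not the experimental DPC-CT noise or the channel set used in the study.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 64                                            # image side, pixels

    # Synthetic stand-ins for the experimental data
    yy, xx = np.mgrid[:N, :N] - N / 2
    disk = (xx**2 + yy**2 <= 6**2).astype(float) * 0.5   # hypothetical signal
    g_absent = rng.normal(0.0, 1.0, (500, N * N))        # noise-only images
    g_present = rng.normal(0.0, 1.0, (500, N * N)) + disk.ravel()

    # Simple radial Gaussian channels (placeholder channel set)
    r = np.sqrt(xx**2 + yy**2)
    U = np.stack([np.exp(-r**2 / (2 * w**2)).ravel() for w in (2, 4, 8, 16)], axis=1)

    # Channelized Hotelling observer: template and detectability in channel space
    v_a, v_p = g_absent @ U, g_present @ U            # channel outputs
    S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))         # pooled channel covariance
    dv = v_p.mean(0) - v_a.mean(0)
    d_prime = np.sqrt(dv @ np.linalg.solve(S, dv))    # CHO detectability index
    print(f"CHO d' = {d_prime:.2f}")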
A Numerical and Experimental Study of Damage Growth in a Composite Laminate
NASA Technical Reports Server (NTRS)
McElroy, Mark; Ratcliffe, James; Czabaj, Michael; Wang, John; Yuan, Fuh-Gwo
2014-01-01
The present study has three goals: (1) perform an experiment where a simple laminate damage process can be characterized in high detail; (2) evaluate the performance of existing commercially available laminate damage simulation tools by modeling the experiment; (3) observe and understand the underlying physics of damage in a composite honeycomb sandwich structure subjected to low-velocity impact. A quasi-static indentation experiment has been devised to provide detailed information about a simple mixed-mode damage growth process. The test specimens consist of an aluminum honeycomb core with a cross-ply laminate facesheet supported on a stiff uniform surface. When the sample is subjected to an indentation load, the honeycomb core provides support to the facesheet, resulting in a gradual and stable damage growth process in the skin. This enables real-time observation as a matrix crack forms, propagates through a ply, and then causes a delamination. Finite element analyses were conducted in ABAQUS/Explicit™ 6.13 using continuum and cohesive modeling techniques to simulate facesheet damage and a geometric and material nonlinear model to simulate core crushing. The high fidelity of the experimental data allows a detailed investigation and discussion of the accuracy of each numerical modeling approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preece, D.S.
Pretest 3-D finite element calculations have been performed on the wedge pillar portion of the WIPP Geomechanical Evaluation Experiment. The wedge pillar separates two drifts that intersect at an angle of 7.5°. The purpose of the experiment is to provide data on the creep behavior of the wedge and on progressive failure at its tip. The first set of calculations utilized a symmetry plane on the centerline of the wedge, which allowed treatment of the entire configuration by modeling half of the geometry. Two 3-D calculations in this first set were performed with different drift widths to study the influence of drift size on closure and maximum stress. A cross-section perpendicular to the wedge was also analyzed with 2-D finite element models and the results compared to the 3-D results. In another set of 3-D calculations both drifts were modeled, but with less distance between the drifts and the outer boundaries. Results of these calculations are compared with results from the other calculations to better understand the influence of boundary conditions.
NASA Technical Reports Server (NTRS)
Bienert, W. B.
1974-01-01
The development and characteristics of electrical feedback controlled heat pipes (FCHP) are discussed. An analytical model was produced to describe the performance of the FCHP under steady state and transient conditions. An advanced thermal control flight experiment was designed to demonstrate the performance of the thermal control component in a space environment. The thermal control equipment was evaluated on the ATS-F satellite to provide performance data for the components and to act as a thermal control system which can be used to provide temperature stability of spacecraft components in future applications.
Clinical experience with a high-performance ATM-connected DICOM archive for cardiology
NASA Astrophysics Data System (ADS)
Solomon, Harry P.
1997-05-01
A system to archive large image sets, such as cardiac cine runs, with near-realtime response must address several functional and performance issues, including efficient use of a high-performance network connection with standard protocols, an architecture which effectively integrates both short- and long-term mass storage devices, and a flexible data management policy which allows optimization of image distribution and retrieval strategies based on modality and site-specific operational use. Clinical experience with such an archive has allowed evaluation of these systems issues and refinement of a traffic model for cardiac angiography.
A Caveat Note on Tuning in the Development of Coupled Climate Models
NASA Astrophysics Data System (ADS)
Dommenget, Dietmar; Rezny, Michael
2018-01-01
State-of-the-art coupled general circulation models (CGCMs) have substantial errors in their simulations of climate. In particular, these errors can lead to large uncertainties in the simulated climate response (both globally and regionally) to a doubling of CO2. Currently, tuning of the parameterization schemes in CGCMs is a significant part of the development process. It is not clear whether such tuning actually improves models. The tuning process is, in general, neither documented nor reproducible. Alternative methods such as flux correction are not used, nor is it clear whether such methods would perform better. In this study, ensembles of perturbed physics experiments are performed with the Globally Resolved Energy Balance (GREB) model to test the impact of tuning. The work illustrates that tuning has, on average, limited skill given the complexity of the system, the limited computing resources, and the limited observations with which to optimize parameters. While tuning may improve model performance (such as reproducing observed past climate), it will not get closer to the "true" physics, nor will it significantly improve future climate change projections. Tuning will introduce artificial compensating error interactions between submodels that will hamper further model development. In turn, flux corrections perform well in most, but not all, aspects. A main advantage of flux correction is that it is much cheaper, simpler, and more transparent, and it does not introduce artificial error interactions between submodels. These GREB model experiments should be considered a pilot study to motivate further CGCM studies that address the issues of model tuning.
Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control
NASA Technical Reports Server (NTRS)
Ku, Jentung; Ottenstein, Laura; Douglas, Donya
2008-01-01
This paper presents the development of the Thermal Loop experiment under NASA's New Millennium Program Space Technology 8 (ST8) Project. The Thermal Loop experiment was originally planned for validating in space an advanced heat transport system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers. Details of the thermal loop concept, technical advances and benefits, Level 1 requirements and the technology validation approach are described. An MLHP breadboard has been built and tested in the laboratory and thermal vacuum environments, and has demonstrated excellent performance that met or exceeded the design requirements. The MLHP retains all features of state-of-the-art loop heat pipes and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. In addition, an analytical model has been developed to simulate the steady state and transient operation of the MLHP, and the model predictions agreed very well with experimental results. A protoflight MLHP has been built and is being tested in a thermal vacuum chamber to validate its performance and technical readiness for a flight experiment.
Methods for Probabilistic Fault Diagnosis: An Electrical Power System Case Study
NASA Technical Reports Server (NTRS)
Ricks, Brian W.; Mengshoel, Ole J.
2009-01-01
Health management systems that more accurately and quickly diagnose faults that may occur in different technical systems on-board a vehicle will play a key role in the success of future NASA missions. We discuss in this paper the diagnosis of abrupt continuous (or parametric) faults within the context of probabilistic graphical models, more specifically Bayesian networks that are compiled to arithmetic circuits. This paper extends our previous research, within the same probabilistic setting, on diagnosis of abrupt discrete faults. Our approach and diagnostic algorithm ProDiagnose are domain-independent; however we use an electrical power system testbed called ADAPT as a case study. In one set of ADAPT experiments, performed as part of the 2009 Diagnostic Challenge, our system turned out to have the best performance among all competitors. In a second set of experiments, we show how we have recently further significantly improved the performance of the probabilistic model of ADAPT. While these experiments are obtained for an electrical power system testbed, we believe they can easily be transitioned to real-world systems, thus promising to increase the success of future NASA missions.
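The probabilistic core of such a diagnosis can be illustrated with a toy example. The Python sketch below computes a fault posterior by direct enumeration over a hypothetical two-node network (component fault F, sensor reading S) with invented priors and likelihoods; ProDiagnose itself operates on Bayesian networks compiled to arithmetic circuits, which this sketch does not attempt to reproduce.
    # Hypothetical two-node network: component fault F and sensor reading S.
    p_fault = 0.02                        # P(F = fault), invented prior
    p_abn_given_fault = 0.95              # P(S = abnormal | F = fault)
    p_abn_given_ok = 0.05                 # P(S = abnormal | F = ok), false alarm

    def posterior_fault(s_abnormal):
        """P(F = fault | S) by direct enumeration (Bayes' rule)."""
        like_f = p_abn_given_fault if s_abnormal else 1 - p_abn_given_fault
        like_o = p_abn_given_ok if s_abnormal else 1 - p_abn_given_ok
        num = like_f * p_fault
        return num / (num + like_o * (1 - p_fault))

    print(f"P(fault | abnormal reading) = {posterior_fault(True):.3f}")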
Axi-symmetric patterns of active polar filaments on spherical and composite surfaces
NASA Astrophysics Data System (ADS)
Srivastava, Pragya; Rao, Madan
2014-03-01
Experiments performed on Fission Yeast cells of cylindrical and spherical shapes, rod-shaped bacteria and reconstituted cylindrical liposomes suggest the influence of cell geometry on the patterning of cortical actin. A theoretical model based on an active hydrodynamic description of cortical actin that includes curvature-orientation coupling predicts spontaneous formation of acto-myosin rings, cables and nodes on cylindrical and spherical geometries [P. Srivastava et al., PRL 110, 168104 (2013)]. The stability and dynamics of these patterns are also affected by the cellular shape, as observed in experiments performed on Fission Yeast cells of spherical shape. Motivated by this, we study the stability and dynamics of axi-symmetric patterns of active polar filaments on surfaces of spherical, saddle-shaped and conical geometry and classify the stable steady-state patterns on these surfaces. Based on the analysis of fluorescence images of Myosin-II during ring slippage, we propose a simple mechanical model for ring sliding based on force balance and make quantitative comparisons with the experiments performed on Fission Yeast cells. NSF Grant DMR-1004789 and the Syracuse Soft Matter Program.
Research in Lattice Gauge Theory and in the Phenomenology of Neutrinos and Dark Matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meurice, Yannick L; Reno, Mary Hall
Research in theoretical elementary particle physics was performed by the PI Yannick Meurice and co-PI Mary Hall Reno. New techniques designed for precision calculations of strong interaction physics were developed using the tensor renormalization group method. Large-scale Monte Carlo simulations with dynamical quarks were performed for candidate models for Higgs compositeness. Ab-initio lattice gauge theory calculations of semileptonic decays of B-mesons observed in collider experiments, relevant for testing the validity of the standard model, were performed with the Fermilab/MILC collaboration. The phenomenology of strong interaction physics was applied to new predictions for physics processes in accelerator physics experiments and to cosmic ray production and interactions. A research focus has been on heavy quark production and decays to neutrinos. The heavy quark contributions to atmospheric neutrino and muon fluxes have been evaluated, as have the neutrino fluxes from accelerator beams incident on heavy targets. Results are applicable to current and future particle physics experiments and to astrophysical neutrino detectors such as the IceCube Neutrino Observatory.
Parametric study of different contributors to tumor thermal profile
NASA Astrophysics Data System (ADS)
Tepper, Michal; Gannot, Israel
2014-03-01
Treating cancer is one of the major challenges of modern medicine. There is great interest in assessing tumor development in in vivo animal and human models, as well as in in vitro experiments. Existing methods are limited either by cost and availability or by their low accuracy and reproducibility. Thermography holds the potential of being a noninvasive, non-irradiative, low-cost and easy-to-use method for tumor monitoring. Tumors can be detected in thermal images due to their relatively higher or lower temperature compared to the temperature of the healthy skin surrounding them. Extensive research has been performed to show the validity of thermography as an efficient method for tumor detection and the possibility of extracting tumor properties from thermal images, with promising results. However, generalizing from one type of experiment to others is difficult due to differences in tumor properties, especially between different types of tumors or different species. There is a need for research linking the different types of tumor experiments. In this research, a parametric analysis of possible contributors to tumor thermal profiles was performed. The effect of tumor geometric, physical and thermal properties was studied, both independently and together, in phantom model experiments and computer simulations. Theoretical and experimental results were cross-correlated to validate the models used and increase the accuracy of simulated complex tumor models. The contribution of different parameters in various tumor scenarios was estimated and the implication of these differences on the observed thermal profiles was studied. The correlation between animal and human models is discussed.
Scenarios and performance measures for advanced ISDN satellite design and experiments
NASA Technical Reports Server (NTRS)
Pepin, Gerard R.
1991-01-01
Described here are the contemplated input and expected output for the Interim Service Integrated Services Digital Network (ISDN) Satellite (ISIS) and Full Service ISDN Satellite (FSIS) Models. The discrete event simulations of these models are presented with specific scenarios that stress ISDN satellite parameters. Performance measure criteria are presented for evaluating the advanced ISDN communication satellite designs of the NASA Satellite Communications Research (SCAR) Program.
Model comparisons of the reactive burn model SURF in three ASC codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitley, Von Howard; Stalsberg, Krista Lynn; Reichelt, Benjamin Lee
A study of the SURF reactive burn model was performed in FLAG, PAGOSA and XRAGE. In this study, three different shock-to-detonation transition experiments were modeled in each code. All three codes produced similar model results for all the experiments modeled and at all resolutions. Buildup-to-detonation time, particle velocities and resolution dependence of the models were notably similar between the codes. Given the current PBX 9502 equations of state and SURF calibrations, each code is equally capable of predicting the correct detonation time and distance when impacted by a 1D impactor at pressures ranging from 10-16 GPa, as long as the resolution of the mesh is not too coarse.
LANDSAT-D MSS/TM tuned orbital jitter analysis model LDS900
NASA Technical Reports Server (NTRS)
Pollak, T. E.
1981-01-01
The final LANDSAT-D orbital dynamic math model (LSD900), comprising all test-validated substructures, was used to evaluate the jitter response of the MSS/TM experiments. A dynamic forced response analysis was performed at both the MSS and TM locations over all structural modes considered (through 200 Hz). The analysis determined the roll angular response of the MSS/TM experiments to the excitation generated by component operation. Cross-axis and cross-experiment responses were also calculated. The excitations were analytically represented by seven- and nine-term Fourier series approximations, for the MSS and TM experiments respectively, which enabled linear harmonic solution techniques to be applied to the response calculations. Single worst-case jitter was estimated by variations of the eigenvalue spectrum of model LSD900. The probability of any worst-case mode occurrence was investigated.
Physical Vapor Transport of Mercurous Chloride Crystals: Design of a Microgravity Experiment
NASA Technical Reports Server (NTRS)
Duval, W. M. B.; Singh, N. B.; Glicksman, M. E.
1997-01-01
Flow field characteristics predicted from a computational model show that the dynamical state of the flow, for practical crystal growth conditions of mercurous chloride, can range from steady to unsteady. Evidence that the flow field can be strongly dominated by convection under ground-based conditions is provided by the prediction of asymmetric velocity profiles by the model, which show reasonable agreement with laser Doppler velocimetry experiments in both magnitude and planform. Unsteady flow is shown to be correlated with a degradation of crystal quality as quantified by light scattering pattern measurements. A microgravity experiment is designed to show that an experiment performed with parameters which yield an unsteady flow becomes steady (diffusive-advective) in a microgravity environment of 10⁻³ g₀, as predicted by the model, and hence yields crystals with optimal quality.
A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands
NASA Astrophysics Data System (ADS)
Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.
2009-12-01
Ecosystem processes are influenced by climatic trends at multiple temporal scales, including diel patterns and other mid-term climatic modes such as interannual and seasonal variability. Because interactions between the biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency domain (e.g. hours, days, weeks, months, years) and in the time domain (i.e. day of the year), in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance in seven Mediterranean Oak Woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) against measured gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water pulse dynamics that influence GPP and ET at these Mediterranean systems. To improve the performance of a model it is critical to first identify where and when it fails. Only by identifying where a model fails can we improve its performance, use it as a prognostic tool, and generate further hypotheses that can be tested by new experiments and measurements.
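The kind of frequency-domain check described here can be sketched with a simple Fourier power spectrum, used below as a stand-in for the wavelet analysis the study actually performs. Everything in this Python sketch, including the synthetic GPP series and the 7-60 day band, is invented for illustration.
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical daily GPP series for one site-year; in the study these
    # would be flux-tower observations and output from CABLE, ORCHIDEE, etc.
    t = np.arange(365)
    gpp_obs = 5 + 3 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.8, t.size)
    gpp_mod = 5 + 3 * np.sin(2 * np.pi * t / 365)   # model misses pulse events

    def power_spectrum(x):
        """One-sided FFT power spectrum of a demeaned daily series."""
        x = x - x.mean()
        return np.fft.rfftfreq(x.size, d=1.0), np.abs(np.fft.rfft(x))**2 / x.size

    f, p_obs = power_spectrum(gpp_obs)
    _, p_mod = power_spectrum(gpp_mod)
    band = (f > 1 / 60) & (f < 1 / 7)               # 7-60 day periods
    print(f"model/observed band power: {p_mod[band].sum() / p_obs[band].sum():.2f}")
A band ratio well below one, as here, flags missing intermediate-period variability of the sort the study attributes to unrepresented water pulses.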
Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.
2014-01-01
New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed at systematically and objectively evaluating competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Often, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well-established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in the planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms, leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large-scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than the sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where "omics" features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting, and our simulations corroborate well-established results concerning the conditions under which each one of these methods is expected to perform best, while providing several novel insights.
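A minimal version of such a comparison is easy to set up with scikit-learn. The Python sketch below contrasts ridge, lasso, and elastic-net on one synthetic n << p dataset with a sparse true model; the dimensions, penalty values, and noise level are arbitrary illustrative choices, not the factorial design of the study.
    import numpy as np
    from sklearn.linear_model import Ridge, Lasso, ElasticNet
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(3)

    # n << p setting typical of "omics" data: 200 samples, 5000 features,
    # sparse true model.
    n, p, k = 200, 5000, 20
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:k] = rng.normal(0, 2, k)                  # sparse ground truth
    y = X @ beta + rng.normal(0, 1.0, n)
    X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

    for name, model in [("ridge", Ridge(alpha=10.0)),
                        ("lasso", Lasso(alpha=0.1)),
                        ("enet", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
        model.fit(X_tr, y_tr)
        print(f"{name:5s} test R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
Under this sparse truth the l1-penalized methods should dominate; repeating the loop over sample size, sparsity, and correlation grids is exactly the kind of designed simulation study the paper advocates.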
A comparison of linear and non-linear data assimilation methods using the NEMO ocean model
NASA Astrophysics Data System (ADS)
Kirchgessner, Paul; Tödter, Julian; Nerger, Lars
2015-04-01
The assimilation behavior of the widely used LETKF is compared with that of the Equivalent Weights Particle Filter (EWPF) in a data assimilation application with an idealized configuration of the NEMO ocean model. The experiments show how the different filter methods behave when applied to a realistic ocean test case. The LETKF is an ensemble-based Kalman filter, which assumes Gaussian error distributions and hence implicitly requires model linearity. In contrast, the EWPF is a fully nonlinear data assimilation method that does not rely on a particular error distribution. The EWPF has been demonstrated to work well in highly nonlinear situations, such as in a model solving a barotropic vorticity equation, but it is still unknown how its assimilation performance compares to ensemble Kalman filters in realistic situations. For the experiments, twin assimilation experiments with a square-basin configuration of the NEMO model are performed. The configuration simulates a double gyre, which exhibits significant nonlinearity. The LETKF and EWPF are both implemented in PDAF (Parallel Data Assimilation Framework, http://pdaf.awi.de), which ensures identical experimental conditions for both filters. To account for the nonlinearity, the assimilation skill of the two methods is assessed using different statistical metrics, such as the continuous ranked probability score (CRPS) and rank histograms.
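The CRPS used for this assessment has a simple empirical estimator for an ensemble forecast of a scalar quantity. Below is a minimal Python sketch with toy numbers, not the NEMO experiment output; the second print line illustrates that a broad, biased ensemble scores worse (higher CRPS) than a sharp, centered one.
    import numpy as np

    def crps_ensemble(ens, obs):
        """Empirical CRPS for one scalar observation and an ensemble:
        CRPS = E|X - y| - 0.5 * E|X - X'| over ensemble members X, X'."""
        ens = np.asarray(ens, dtype=float)
        term1 = np.mean(np.abs(ens - obs))
        term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
        return term1 - term2

    rng = np.random.default_rng(4)
    truth = 1.3
    print(f"sharp, centered: {crps_ensemble(rng.normal(1.3, 0.2, 40), truth):.3f}")
    print(f"broad, biased:   {crps_ensemble(rng.normal(2.0, 1.0, 40), truth):.3f}")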
Gas Transport through Fractured Rock near the U20az Borehole, Pahute Mesa, Nevada.
NASA Astrophysics Data System (ADS)
Rockhold, M.; Lowrey, J. D.; Kirkham, R.; Olsen, K.; Waichler, S.; White, M. D.; Wurstner White, S.
2017-12-01
Field experiments were performed in 2012-13 and 2016-17 at the U-20az testbed at the Nevada National Security Site to develop and evaluate capabilities for monitoring and modeling noble gas transport associated with underground nuclear explosions (UNE). Experiments were performed by injecting both chemical (CF2Br2, SF6) and radioactive (37Ar, 127Xe) gas species into the deep subsurface at this legacy UNE site and monitoring the breakthrough of the gases at different locations on or near the ground surface. Gas pressures were also monitored in both the chimney and at the ground surface. The field experiments were modeled using the parallel, non-isothermal, two-phase flow and transport simulator STOMP-GT. A site conceptual-numerical model was developed from a geologic framework model, using a dual-porosity/dual-permeability model for the constitutive relative permeability-saturation-capillary pressure relations of the fractured rock units. Comparisons of observed and simulated gas species concentrations show that diffusion is a highly effective transport mechanism under ambient conditions in the water-unsaturated fractured rock. Over-pressurization of the cavity during one of the field campaigns and barometric pressure fluctuations are shown to result in enhanced gas transport by advection through fractures.
Stevens, Courtney; Liu, Cindy H; Chen, Justin A
2018-03-22
Using data from 69,722 US undergraduates participating in the spring 2015 National College Health Assessment, we examine racial/ethnic differences in students' experience of discrimination. Logistic regression predicted the experience of discrimination and its reported negative effect on academics. Additional models examined the effect of attending a Minority Serving Institution (MSI). Discrimination was experienced by 5-15% of students, with all racial/ethnic minority groups examined (Black, Hispanic, Asian, AI/NA/NA, and Multiracial students) more likely to report discrimination relative to White students. Of students who experienced discrimination, 15-25% reported that it had negatively impacted their academic performance, with Hispanic and Asian students more likely to report negative impacts relative to White students. Attending an MSI was associated with decreased experiences of discrimination. Students from racial/ethnic minority backgrounds are disproportionately affected by discrimination, with negative impacts on academic performance that are particularly marked for Hispanic and Asian students.
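The modeling step here is a standard logistic regression on survey indicators. Since the underlying microdata are not reproduced in the abstract, the Python sketch below fits the same kind of model to synthetic 0/1 indicators with invented coefficients and reports odds ratios of the sort discussed above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)

    # Synthetic stand-in for the survey data: race_minority and msi are 0/1
    # indicators; the logit coefficients are invented for illustration.
    n = 5000
    race_minority = rng.integers(0, 2, n)
    msi = rng.integers(0, 2, n)
    logit = -2.2 + 0.9 * race_minority - 0.4 * msi
    y = rng.random(n) < 1 / (1 + np.exp(-logit))   # experienced discrimination

    X = np.column_stack([race_minority, msi])
    fit = LogisticRegression().fit(X, y)
    odds_ratios = np.exp(fit.coef_[0])
    print(f"OR(minority) = {odds_ratios[0]:.2f}, OR(MSI) = {odds_ratios[1]:.2f}")
An odds ratio above one for the minority indicator and below one for MSI attendance would reproduce the qualitative pattern the study reports.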
Test of φ² model predictions near the ³He liquid-gas critical point
NASA Technical Reports Server (NTRS)
Barmatz, M.; Zhong, F.; Hahn, I.
2000-01-01
NASA is supporting the development of an experiment called MISTE (Microgravity Scaling Theory Experiment) for a future International Space Station mission. The main objective of this flight experiment is to perform in-situ PVT, heat capacity at constant volume (C_v), and susceptibility (χ_τ) measurements in the asymptotic region near the ³He liquid-gas critical point.
Accurate and dynamic predictive model for better prediction in medicine and healthcare.
Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S
2018-05-01
Information and communication technologies (ICTs) have brought new, integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care are likewise influenced by new technologies that predict different disease outcomes. However, existing predictive models still suffer from limitations in their predictive performance. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate this model's performance, the paper uses traumatic brain injury (TBI) datasets. TBI is one of the serious diseases worldwide and needs more attention due to its severity and its impacts on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieved significant results in terms of accuracy, sensitivity, and specificity.
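The evaluation metrics named here come straight from the binary confusion matrix. A minimal Python sketch, with toy labels rather than the TBI data, is:
    import numpy as np

    def diagnostic_metrics(y_true, y_pred):
        """Accuracy, sensitivity, specificity from binary labels (1 = positive)."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_true == 1) & (y_pred == 1))
        tn = np.sum((y_true == 0) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        return {"accuracy": (tp + tn) / y_true.size,
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp)}

    # Toy predictions on hypothetical binary outcomes (not the paper's data).
    print(diagnostic_metrics([1, 1, 0, 0, 1, 0, 1, 0], [1, 0, 0, 0, 1, 1, 1, 0]))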
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barani, T.; Bruschi, E.; Pizzocri, D.
The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely BISON (a multi-dimensional finite element code) and TRANSURANUS (a 1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point to an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating that the new model represents the underlying mechanisms of burst release.
Self similarities in desalination dynamics and performance using capacitive deionization.
Ramachandran, Ashwin; Hemmatifar, Ali; Hawks, Steven A; Stadermann, Michael; Santiago, Juan G
2018-09-01
Charge transfer and mass transport are two underlying mechanisms which are coupled in desalination dynamics using capacitive deionization (CDI). We developed simple reduced-order models based on a mixed-reactor-volume principle which capture the coupled dynamics of CDI operation using closed-form semi-analytical and analytical solutions. We use the models to identify and explore self-similarities in the dynamics among flow rate, current, and voltage for CDI cell operation, including both charging and discharging cycles. The similarity approach identifies the specific combination of cell parameters (e.g. capacitance, resistance) and operational parameters (e.g. flow rate, current) which determine a unique effluent dynamic response. We demonstrate self-similarity using a conventional flow-between CDI (fbCDI) architecture, and we hypothesize that our similarity approach has potential application to a wide range of designs. We performed an experimental study of these dynamics and used well-controlled experiments of CDI cell operation to validate and explore the limits of the model. For the experiments, we used a CDI cell with five electrode pairs and a standard flow-between-electrodes architecture. Guided by the model, we performed a series of experiments that demonstrate the natural response of the CDI system. We also identify cell parameters and operating conditions which lead to self-similar dynamics under a constant-current forcing function, and we perform a series of experiments varying flow rate, current, and voltage thresholds to demonstrate self-similarity. Based on this study, we hypothesize that the average differential electric double layer (EDL) efficiency (a measure of the ion adsorption rate relative to the EDL charging rate) depends mainly on the user-defined voltage thresholds, whereas flow efficiency (a measure of how well desalinated water is recovered from inside the cell) depends on the cell volumes flowed during charging, which is determined by flow rate, current, and voltage thresholds. Results of the experiments strongly support this hypothesis. Results show that cycle efficiency and salt removal for a given flow rate and current are maximized when the average EDL and flow efficiencies are approximately equal. We further explored a range of constant-current operations with varying flow rates, currents, and voltage thresholds using our similarity variables to highlight trade-offs among salt removal, energy, and throughput performance.
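In the spirit of the mixed-reactor-volume models described above, a one-equation effluent model under constant current can be integrated in a few lines of Python. All parameter values below, and the fixed charge efficiency LAMBDA, are assumptions for illustration rather than the paper's calibrated values.
    # One-equation mixed-reactor effluent model under constant current:
    #   dc/dt = (Q/V) * (c_in - c) - LAMBDA * I / (F * V)
    F = 96485.0            # Faraday constant, C/mol
    Q = 1.0e-6 / 60.0      # flow rate: 1 mL/min in m^3/s
    V = 2.0e-6             # effective cell volume: 2 mL in m^3
    I = 0.01               # applied current, A
    LAMBDA = 0.8           # differential EDL (charge) efficiency, dimensionless
    c_in = 20.0            # feed concentration, mol/m^3

    dt, c = 0.1, c_in      # time step (s) and initial effluent concentration
    for _ in range(int(600.0 / dt)):        # 600 s of charging
        c += dt * ((Q / V) * (c_in - c) - LAMBDA * I / (F * V))
    print(f"effluent concentration after charging: {c:.2f} mol/m^3")
Because the governing equation depends on flow rate, current, and charge efficiency only through two lumped groups (the residence time V/Q and the removal rate LAMBDA*I/(F*V)), different operating points that match these groups produce the same effluent curve, which is the self-similarity the paper exploits.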
Reevaluation of a walleye (Sander vitreus) bioenergetics model
Madenjian, Charles P.; Wang, Chunfang
2013-01-01
Walleye (Sander vitreus) is an important sport fish throughout much of North America, and walleye populations support valuable commercial fisheries in certain lakes as well. Using a corrected algorithm for balancing the energy budget, we reevaluated the performance of the Wisconsin bioenergetics model for walleye in the laboratory. Walleyes were fed rainbow smelt (Osmerus mordax) in four laboratory tanks each day during a 126-day experiment. Feeding rates ranged from 1.4 to 1.7% of walleye body weight per day. Based on a statistical comparison of bioenergetics model predictions of monthly consumption with observed monthly consumption, we concluded that the bioenergetics model estimated food consumption by walleye without any significant bias. Similarly, based on a statistical comparison of bioenergetics model predictions of weight at the end of the monthly test period with observed weight, we concluded that the bioenergetics model predicted walleye growth without any detectable bias. In addition, the bioenergetics model predictions of cumulative consumption over the 126-day experiment differed from observed cumulative consumption by less than 10%. Although additional laboratory and field testing will be needed to fully evaluate model performance, based on our laboratory results, the Wisconsin bioenergetics model for walleye appears to provide unbiased predictions of food consumption.
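The balanced energy budget at the core of the Wisconsin model partitions consumption into metabolism, waste, and growth. The Python sketch below shows that bookkeeping with invented proportional coefficients; the actual model uses temperature- and mass-dependent functions and species-specific parameters not reproduced here.
    # One-day energy budget in the spirit of the Wisconsin model:
    #   growth = consumption - (respiration + SDA + egestion + excretion)
    # All proportional coefficients are illustrative placeholders.
    def daily_growth(consumption_j):
        respiration = 0.30 * consumption_j   # metabolism, incl. activity
        sda = 0.10 * consumption_j           # specific dynamic action
        egestion = 0.15 * consumption_j      # feces
        excretion = 0.07 * consumption_j     # nitrogenous waste
        return consumption_j - (respiration + sda + egestion + excretion)

    c = 5000.0                               # joules consumed in one day
    print(f"energy to growth: {daily_growth(c):.0f} J ({daily_growth(c) / c:.0%})")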
NASA Astrophysics Data System (ADS)
Hirata, N.; Tsuruoka, H.; Yokoi, S.
2013-12-01
The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, with 4 testing classes of different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, have been submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity has changed dramatically since the 2011 event, model performance has been strongly affected. In addition, because of problems with the authorized catalogue related to the completeness magnitude, most models did not pass the CSEP consistency tests. We will also discuss retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.
Oceanic response to tropical cyclone `Phailin' in the Bay of Bengal
NASA Astrophysics Data System (ADS)
Pant, V.; Prakash, K. R.
2016-02-01
Vertical mixing largely explains the surface cooling induced by Tropical Cyclones (TCs). However, TC-induced upwelling of deeper waters plays an important role as it partly balances the warming of subsurface waters induced by vertical mixing. Below 100 m, vertical advection results in cooling that persists for a few days after the storm. The present study investigates the integrated ocean response to tropical cyclone Phailin (10-14 October 2013) in the Bay of Bengal (BoB) through both coupled and stand-alone ocean-atmosphere models. Two numerical experiments with different coupling configurations between the Regional Ocean Modelling System (ROMS) and the Weather Research and Forecasting (WRF) model were performed to investigate the impact of the Phailin cyclone on surface and sub-surface oceanic parameters. In the first experiment, the ocean circulation model ROMS receives surface wind forcing from a mesoscale atmospheric model (WRF with a nested domain setup), while the remaining forcing parameters are supplied to ROMS from NCEP data. In the second experiment, all surface forcing data for ROMS come directly from WRF. The modeling components and the data fields exchanged between the atmospheric and oceanic models are described. The coupled modeling system is used to identify model sensitivity by exchanging prognostic variable fields between the two model components during simulation of the Phailin cyclone in the BoB. In general, the simulated Phailin cyclone track and intensities agree well with observations in the WRF simulations. Further, the inter-comparison between stand-alone and coupled model simulations, validated against observations, highlights the better performance of the coupled modeling system in simulating the oceanic conditions during the Phailin cyclone event.
NASA Astrophysics Data System (ADS)
Ermakov, Ilya; Crucifix, Michel; Munhoven, Guy
2013-04-01
Complex climate models impose a high computational burden. However, computational limitations may be avoided by using emulators. In this work we present several approaches for dynamical emulation (also called metamodelling) of the Multi-Box Model (MBM) coupled to the Model of Early Diagenesis in the Upper Sediment A (MEDUSA), which simulates the carbon cycle of the ocean and atmosphere [1]. We consider two experiments performed on the MBM-MEDUSA that explore Basin-to-Shelf Transfer (BST) dynamics. In both experiments the sea level is varied according to a paleo sea level reconstruction. Such experiments are interesting because the BST is an important cause of CO2 variation and the dynamics are potentially nonlinear. The output that we are interested in is the variation of the carbon dioxide partial pressure in the atmosphere over the Pleistocene. The first experiment holds the BST constant during the simulation. In the second experiment the BST is interactively adjusted according to the sea level, since the sea level is the primary control of the growth and decay of coral reefs and other shelf carbon reservoirs. The main aim of the present contribution is to create a metamodel of the MBM-MEDUSA using the Dynamic Emulation Modelling methodology [2] and to compare the results obtained using linear and nonlinear methods. The first step in the emulation methodology used in this work is to identify the structure of the metamodel. In order to select an optimal approach for emulation, we compare the identification results obtained with simple linear and more complex nonlinear models. For the first experiment, simple linear regression with the least-squares method is sufficient to obtain a 99.9% fit between the temporal outputs of the model and the metamodel. For the second experiment the MBM's output is highly nonlinear. In this case we apply nonlinear models such as NARX, the Hammerstein model, and an 'ad hoc' switching model. After the identification we perform the parameter mapping using spline interpolation and validate the emulator on a new set of parameters. References: [1] G. Munhoven, "Glacial-interglacial rain ratio changes: Implications for atmospheric CO2 and ocean-sediment interaction," Deep-Sea Res Pt II, vol. 54, pp. 722-746, 2007. [2] A. Castelletti et al., "A general framework for Dynamic Emulation Modelling in environmental problems," Environ Modell Softw, vol. 34, pp. 5-18, 2012.
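For the linear case, the structure identification amounts to a least-squares fit of an autoregressive-with-input (ARX) form. A minimal Python sketch follows, with an invented first-order "simulator" standing in for MBM-MEDUSA; the coefficients and fit metric are illustrative only.
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy dynamical emulation: identify a first-order ARX metamodel
    #   y[t] = a * y[t-1] + b * u[t] + e[t]
    # from simulator output y (think atmospheric pCO2) driven by input u
    # (think sea level). The true a and b are invented for this sketch.
    T = 500
    u = np.sin(2 * np.pi * np.arange(T) / 100) + 0.1 * rng.normal(size=T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.95 * y[t - 1] + 0.4 * u[t] + 0.01 * rng.normal()

    phi = np.column_stack([y[:-1], u[1:]])   # regressors [y[t-1], u[t]]
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    fit = 1 - np.var(y[1:] - phi @ theta) / np.var(y[1:])
    print(f"a = {theta[0]:.3f}, b = {theta[1]:.3f}, fit = {fit:.4f}")
When the simulated response is strongly nonlinear, as in the second experiment, this linear structure fails and nonlinear forms such as NARX or Hammerstein models take its place.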
Relationship between brain plasticity, learning and foraging performance in honey bees.
Cabirol, Amélie; Cope, Alex J; Barron, Andrew B; Devaud, Jean-Marc
2018-01-01
Brain structure and learning capacities both vary with experience, but the mechanistic link between them is unclear. Here, we investigated whether experience-dependent variability in learning performance can be explained by neuroplasticity in foraging honey bees. The mushroom bodies (MBs) are a brain center necessary for ambiguous olfactory learning tasks such as reversal learning. Using radio frequency identification technology, we assessed the effects of natural variation in foraging activity, and of the age at first foraging, on both performance in reversal learning and synaptic connectivity in the MBs. We found that reversal learning performance improved at foraging onset and could decline with greater foraging experience. If bees started foraging before the normal age as a result of a stress applied to the colony, the decline in learning performance with foraging experience was more severe. Analyses of brain structure in the same bees showed that the total number of synaptic boutons at the MB input decreased when bees started foraging, and then increased with greater foraging intensity. At foraging onset MB structure is therefore optimized for bees to update learned information, but this optimization of MB connectivity deteriorates with foraging effort. In a computational model of the MBs, sparser coding of information at the MB input improved reversal learning performance. We propose, therefore, a plausible mechanistic relationship between experience, neuroplasticity, and cognitive performance in a natural and ecological context.
Weinstock, Peter; Rehder, Roberta; Prabhu, Sanjay P; Forbes, Peter W; Roussin, Christopher J; Cohen, Alan R
2017-07-01
OBJECTIVE Recent advances in optics and miniaturization have enabled the development of a growing number of minimally invasive procedures, yet innovative training methods for the use of these techniques remain lacking. Conventional teaching models, including cadavers and physical trainers as well as virtual reality platforms, are often expensive and ineffective. Newly developed 3D printing technologies can recreate patient-specific anatomy, but the stiffness of the materials limits fidelity to real-life surgical situations. Hollywood special effects techniques can create ultrarealistic features, including lifelike tactile properties, to enhance accuracy and effectiveness of the surgical models. The authors created a highly realistic model of a pediatric patient with hydrocephalus via a unique combination of 3D printing and special effects techniques and validated the use of this model in training neurosurgery fellows and residents to perform endoscopic third ventriculostomy (ETV), an effective minimally invasive method increasingly used in treating hydrocephalus. METHODS A full-scale reproduction of the head of a 14-year-old adolescent patient with hydrocephalus, including external physical details and internal neuroanatomy, was developed via a unique collaboration of neurosurgeons, simulation engineers, and a group of special effects experts. The model contains "plug-and-play" replaceable components for repetitive practice. The appearance of the training model (face validity) and the reproducibility of the ETV training procedure (content validity) were assessed by neurosurgery fellows and residents of different experience levels based on a 14-item Likert-like questionnaire. The usefulness of the training model for evaluating the performance of the trainees at different levels of experience (construct validity) was measured by blinded observers using the Objective Structured Assessment of Technical Skills (OSATS) scale for the performance of ETV. RESULTS A combination of 3D printing technology and casting processes led to the creation of realistic surgical models that include high-fidelity reproductions of the anatomical features of hydrocephalus and allow for the performance of ETV for training purposes. The models reproduced the pulsations of the basilar artery, ventricles, and cerebrospinal fluid (CSF), thus simulating the experience of performing ETV on an actual patient. The results of the 14-item questionnaire showed limited variability among participants' scores, and the neurosurgery fellows and residents gave the models consistently high ratings for face and content validity. The mean score for the content validity questions (4.88) was higher than the mean score for face validity (4.69) (p = 0.03). On construct validity scores, the blinded observers rated performance of fellows significantly higher than that of residents, indicating that the model provided a means to distinguish between novice and expert surgical skills. CONCLUSIONS A plug-and-play lifelike ETV training model was developed through a combination of 3D printing and special effects techniques, providing both anatomical and haptic accuracy. Such simulators offer opportunities to accelerate the development of expertise with respect to new and novel procedures as well as iterate new surgical approaches and innovations, thus allowing novice neurosurgeons to gain valuable experience in surgical techniques without exposing patients to risk of harm.
Kim, Hee Man; Yang, Sungwook; Kim, Jinseok; Park, Semi; Cho, Jae Hee; Park, Jeong Youp; Kim, Tae Song; Yoon, Eui-Sung; Song, Si Young; Bang, Seungmin
2010-08-01
Capsule endoscopy that could actively move and approach a specific site might be more valuable for the diagnosis or treatment of GI diseases. We tested the performance of active locomotion of a novel wired capsule endoscope with a paddling-based locomotion mechanism, using 3 models: a silicone tube, an extracted porcine colon, and a living pig. Design: in vitro, ex vivo, and in vivo experiments in a pig model. Setting: animal laboratory. For the in vitro test, the locomotive capsule was controlled to actively move from one side of a silicone tube to the other by a controller-operated automatic traveling program. The velocity was calculated from a video recording. We performed ex vivo tests using an extracted porcine colon in the same manner as the in vitro test. In the in vivo experiments, the capsule was inserted into the rectum of a living pig under anesthesia and was controlled to move automatically forward. After 8 consecutive trials, the velocity was calculated. Main outcome measurements: elapsed time, velocity, and mucosal damage. The locomotive capsule showed stable and active movement inside the lumen both in vitro and ex vivo. The velocity was 60 cm/min in the silicone tube, and 36.8 and 37.5 cm/min in the extracted porcine colon. In the in vivo experiments, the capsule moved stably forward inside the colon of a living pig without any serious complications. The mean velocity was 17 cm/min over a 40 cm length. We noted pinpoint erythematous mucosal injuries in the colon. Limitations: porcine model experiments; wired capsule endoscope. The novel paddling-based locomotive capsule endoscope performed fast and stable movement in a living pig colon with consistent velocity. Further investigation is necessary for practical use in humans.
NASA Astrophysics Data System (ADS)
González-Rojí, Santos J.; Sáenz, Jon; Ibarra-Berastegi, Gabriel
2016-04-01
A numerical downscaling exercise over the Iberian Peninsula has been run by nesting the WRF model inside ERA Interim. The Iberian Peninsula has been covered by a 15 km x 15 km grid with 51 vertical levels. Two model configurations have been tested in two experiments spanning the period 2010-2014 after a one-year spin-up (2009). In both cases, the model uses high-resolution daily-varying SST fields and the Noah land surface model. In the first experiment (N), after the model is initialised, boundary conditions drive the model, as usual in numerical downscaling experiments. The second experiment (D) is configured the same way as the N case, but 3DVAR data assimilation is run every six hours (00Z, 06Z, 12Z and 18Z) with observations from the PREPBUFR dataset (NCEP ADP Global Upper Air and Surface Weather Observations) in a 120-minute window around analysis times. For the data assimilation experiment (D), seasonally (monthly) varying background error covariance matrices have been prepared according to the parameterisations used and the mesoscale model domain. For both the N and D runs, the moisture balance has been evaluated over the Iberian Peninsula, both internally according to the model results (moisture balance in the model) and in terms of observed moisture fields from observational datasets (particularly precipitable water and precipitation). Verification has been performed at both the daily and monthly time scales, and also for ERA Interim, the coarse-scale dataset used to drive the regional model. Results show that the leading terms that must be considered over the area are the tendency in the precipitable water column, the divergence of the moisture flux, evaporation (computed from the latent heat flux at the surface) and precipitation. In the case of ERA Interim, the divergence of Qc is also relevant, although still a minor player in the moisture balance. Both mesoscale model runs are more effective at closing the moisture balance over the whole Iberian Peninsula than ERA Interim. The N experiment (no data assimilation) shows a better closure than the D case, as could be expected from the lack of analysis increments in it. This result is robust at both the daily and monthly time scales. Both ERA Interim and the D experiment produce a negative residual in the balance equation (compatible with excess evaporation or increased convergence of moisture over the Iberian Peninsula). This is a result of the data assimilation process in the D dataset, since in the N experiment the residual is mainly positive. The seasonal cycle of evaporation is much closer in the D experiment to that in ERA Interim than in the N case, with higher evaporation during summer months. However, both regional climate model runs show a lower evaporation rate than ERA Interim, particularly during summer months.
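The column moisture budget being closed here balances the precipitable water tendency and moisture flux divergence against evaporation minus precipitation. A minimal Python sketch of the residual computation follows; the daily series are invented (mm and mm/day), whereas the real analysis works on vertically integrated model fields.
    import numpy as np

    # Column moisture budget residual per unit area:
    #   residual = d(PW)/dt + div(Q) - (E - P)
    # PW: precipitable water; div(Q): vertically integrated moisture flux
    # divergence; E: evaporation; P: precipitation.
    pw = np.array([14.0, 14.6, 15.1, 14.8, 14.2])   # precipitable water, mm
    divq = np.array([-0.8, -0.5, 0.4, 0.9, 0.6])    # mm/day
    evap = np.array([2.5, 2.8, 3.0, 2.9, 2.6])      # mm/day
    precip = np.array([3.1, 3.5, 2.2, 1.6, 2.0])    # mm/day

    dpw_dt = np.gradient(pw)                        # mm/day, daily spacing
    residual = dpw_dt + divq - (evap - precip)
    print(np.round(residual, 2))                    # ~0 would indicate closure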
NASA Technical Reports Server (NTRS)
Kurzeja, R. J.; Haggard, K. V.; Grose, W. L.
1981-01-01
Three experiments have been performed using a three-dimensional, spectral quasi-geostrophic model in order to investigate the sensitivity of ozone transport to tropospheric orographic and thermal effects and to the zonal wind distribution. In the first experiment, the ozone distribution averaged over the last 30 days of a 60 day transport simulation was determined; in the second experiment, the transport simulation was repeated, but nonzonal orographic and thermal forcing was omitted; and in the final experiment, the simulation was conducted with the intensity and position of the stratospheric jets altered by addition of a Newtonian cooling term to the zonal-mean diabatic heating rate. Results of the three experiments are summarized by comparing the zonal-mean ozone distribution, the amplitude of eddy geopotential height, the zonal winds, and zonal-mean diabatic heating.
Sabouri, Sepideh; Matene, Elhacene; Vinet, Alain; Richer, Louis-Philippe; Cardinal, René; Armour, J Andrew; Pagé, Pierre; Kus, Teresa; Jacquemet, Vincent
2014-01-01
Epicardial high-density electrical mapping is a well-established experimental instrument to monitor in vivo the activity of the atria in response to modulations of the autonomic nervous system in sinus rhythm. In regions that are not accessible by epicardial mapping, noncontact endocardial mapping performed through a balloon catheter may provide a more comprehensive description of atrial activity. We developed a computer model of the canine right atrium to compare epicardial and noncontact endocardial mapping. The model was derived from an experiment in which electroanatomical reconstruction, epicardial mapping (103 electrodes), noncontact endocardial mapping (2048 virtual electrodes computed from a 64-channel balloon catheter), and direct-contact endocardial catheter recordings were simultaneously performed in a dog. The recording system was simulated in the computer model. For simulations and experiments (after atrio-ventricular node suppression), activation maps were computed during sinus rhythm. Repolarization was assessed by measuring the area under the atrial T wave (ATa), a marker of repolarization gradients. Results showed epicardial-endocardial correlation coefficients of 0.80 and 0.63 (two dog experiments) and 0.96 (simulation) between activation times, and correlation coefficients of 0.57 and 0.46 (two dog experiments) and 0.92 (simulation) between ATa values. Despite distance (balloon-atrial wall) and dimension reduction (64 electrodes), some information about atrial repolarization remained present in noncontact signals.
Effects of moisture controlled charcoal on indoor thermal and air environments
NASA Astrophysics Data System (ADS)
Matsumoto, Hiroshi; Yokogoshi, Midori; Nabeshima, Yuki
2017-10-01
It is crucial to remove and control indoor moisture in Japan, especially in hot and humid summers, in order to improve thermal comfort and save energy in buildings. Among the many strategies for controlling indoor moisture, moisture-control charcoal made from waste wood material has attracted attention and is beginning to be used in houses. However, the basic characteristics of this charcoal for controlling moisture and removing chemical compounds from indoor air have not been investigated sufficiently. The objective of this study is to clarify the effect of moisture-control charcoal on indoor thermal and air environments through long-term field measurements using two housing-scale models, with and without charcoal, in Toyohashi, Japan. Comparative experiments investigating the effect of the charcoal on air temperature and humidity in the two models were conducted from 2015 to 2016, and the removal performance for volatile organic compounds (VOCs) was investigated in the summer of 2015. Four bags of packed charcoal were set on the attic floor of one model during the experiment. The experiments showed a significant moisture-control effect in the hot and humid season, and an efficient moisture-adsorption effect was obtained in a periodic humidification experiment using a humidifier. Furthermore, the charcoal showed remarkable VOC-removal performance in a formaldehyde injection experiment.
Evaluation of the flame propagation within an SI engine using flame imaging and LES
NASA Astrophysics Data System (ADS)
He, Chao; Kuenne, Guido; Yildar, Esra; van Oijen, Jeroen; di Mare, Francesca; Sadiki, Amsini; Ding, Carl-Philipp; Baum, Elias; Peterson, Brian; Böhm, Benjamin; Janicka, Johannes
2017-11-01
This work presents experiments and simulations of the fired operation of a spark-ignition engine with port-fuel injection. The test rig considered is an optically accessible single-cylinder engine specifically designed at TU Darmstadt for the detailed investigation of in-cylinder processes and model validation. The engine was operated under lean conditions using iso-octane as a substitute for gasoline. Experiments were conducted to provide a sound database of the combustion process. A planar flame imaging technique was applied within the swirl and tumble planes to provide statistical information on the combustion process, complementing a pressure-based comparison between simulation and experiments. These data are then analysed and used to assess the large eddy simulation performed within this work. For the simulation, the engine code KIVA was extended by the dynamically thickened flame model combined with chemistry reduction by means of pressure-dependent tabulation. Sixty cycles were simulated to permit a statistical evaluation. Based on a detailed comparison with the experimental data, a systematic study was conducted to obtain insight into the most crucial modelling uncertainties.
NASA Astrophysics Data System (ADS)
Zhirkin, A. V.; Alekseev, P. N.; Batyaev, V. F.; Gurevich, M. I.; Dudnikov, A. A.; Kuteev, B. V.; Pavlov, K. V.; Titarenko, Yu. E.; Titarenko, A. Yu.
2017-06-01
In this report, the calculation accuracy requirements for the main parameters of the fusion neutron source, and of thermonuclear blankets with a DT fusion power of more than 10 MW, are formulated. To support the benchmark experiments, technical documentation and calculation models were developed for two blanket micro-models: a molten salt blanket and a heavy-water solid-state blanket. The neutron spectra and 37 dosimetric reaction rates that are widely used for the registration of thermal, resonance and threshold (0.25-13.45 MeV) neutrons were calculated for each blanket micro-model. The MCNP code and the ENDF/B-VII neutron data library were used for the calculations. All calculations were performed for two kinds of neutron source: source I is the fusion source, and source II produces neutrons from a 7Li target irradiated by protons with an energy of 24.6 MeV. Spectral index ratios were calculated to describe the spectrum variations between the two neutron sources. The obtained results demonstrate the advantage of using the fusion neutron source in future experiments.
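A spectral index of the kind used here is simply the ratio of two reaction rates, each obtained by folding a dosimeter cross section with the neutron spectrum. The sketch below illustrates the arithmetic; the energy grid and data are placeholders, not ENDF/B-VII evaluations.

```python
# Hedged sketch: spectral index as a ratio of folded reaction rates.
import numpy as np

def reaction_rate(energy, xs, flux):
    # R = integral of sigma(E) * phi(E) dE, trapezoidal rule
    y = xs * flux
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(energy)))

def spectral_index(energy, xs_a, xs_b, flux):
    return reaction_rate(energy, xs_a, flux) / reaction_rate(energy, xs_b, flux)
```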
Bioadsorber efficiency, design, and performance forecasting for alachlor removal.
Badriyha, Badri N; Ravindran, Varadarajan; Den, Walter; Pirbazari, Massoud
2003-10-01
This study discusses a mathematical modeling and design protocol for bioactive granular activated carbon (GAC) adsorbers employed for purification of drinking water contaminated by chlorinated pesticides, exemplified by alachlor. A thin biofilm model is discussed that incorporates the following phenomenological aspects: film transfer from the bulk fluid to the adsorbent particles, diffusion through the biofilm immobilized on the adsorbent surface, and adsorption of the contaminant into the adsorbent particle. The modeling approach involved independent laboratory-scale experiments to determine the model input parameters. These experiments included adsorption isotherm studies, adsorption rate studies, and biokinetic studies. Bioactive expanded-bed adsorber experiments were conducted to obtain realistic experimental data for determining the ability of the model to predict adsorber dynamics under different operating conditions. The model equations were solved using a computationally efficient hybrid numerical technique combining orthogonal collocation and finite difference methods. The model provided accurate predictions of adsorber dynamics for bioactive and non-bioactive scenarios. Sensitivity analyses demonstrated the significance of various model parameters, and focused on enhancement of certain key parameters to improve the overall process efficiency. Scale-up simulation studies for bioactive and non-bioactive adsorbers provided comparisons between their performances, and illustrated the advantages of bioregeneration for enhancing their effective service life spans. Isolation of microbial species revealed that fungal strains were more efficient than bacterial strains in metabolizing alachlor. Microbial degradation pathways for alachlor were proposed and confirmed by the detection of biotransformation metabolites and byproducts using gas chromatography/mass spectrometry.
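One ingredient of such a biofilm model can be sketched as a method-of-lines diffusion problem with film transfer at the bulk interface. The sketch below is illustrative only: parameter values, boundary treatment, and the omitted biodegradation and adsorption terms are assumptions, not the calibrated model of the study.

```python
# Hedged sketch: diffusion through a thin biofilm with film transfer
# from the bulk fluid at the outer face (Robin boundary condition).
import numpy as np
from scipy.integrate import solve_ivp

Df, Lf, kf, Cb = 1e-10, 50e-6, 1e-5, 1.0  # diffusivity [m2/s], film depth [m],
n = 40                                     # film-transfer coeff. [m/s], bulk conc.
dz = Lf / n

def rhs(t, c):
    dcdt = np.empty_like(c)
    dcdt[1:-1] = Df * (c[2:] - 2 * c[1:-1] + c[:-2]) / dz**2
    dcdt[0] = 2 * Df * (c[1] - c[0]) / dz**2           # zero flux at adsorbent face
    ghost = c[-2] + 2 * dz * (kf / Df) * (Cb - c[-1])  # film transfer from bulk
    dcdt[-1] = Df * (ghost - 2 * c[-1] + c[-2]) / dz**2
    return dcdt

sol = solve_ivp(rhs, (0.0, 600.0), np.zeros(n), method="BDF")
```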
Goossens, Quentin; Leuridan, Steven; Henyš, Petr; Roosen, Jorg; Pastrav, Leonard; Mulier, Michiel; Desmet, Wim; Denis, Kathleen; Vander Sloten, Jos
2017-11-01
In cementless total hip arthroplasty (THA), initial stability is obtained by press-fitting the implant into the bone to allow osseointegration for long-term secondary stability. However, finding the insertion endpoint that corresponds to proper initial stability is currently based on the tactile and auditory experience of the orthopedic surgeon, which can be challenging. This study presents a novel real-time method based on acoustic signals to monitor acetabular implant fixation in cementless THA. Twelve acoustic in vitro experiments were performed on three types of bone model: a simple bone block model, an artificial pelvic model and a cadaveric model. A custom-made beam, which functioned as a sound enhancer and inserter, was screwed onto the implant. An acoustic measurement was performed at each insertion step. A significant shift in acoustic resonance frequency was observed during the insertion process for the different bone models: 250 Hz (35%, second bending mode) to 180 Hz (13%, fourth bending mode) for the artificial bone block models and 120 Hz (11%, eighth bending mode) for the artificial pelvis model. No significant frequency shift was observed during the cadaveric experiment, owing to a lack of implant fixation in this model. This novel diagnostic method shows the potential of using acoustic signals to monitor implant seating during insertion. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
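The underlying signal processing can be pictured as tracking the dominant spectral peak of each insertion-step recording. A minimal sketch follows; the sampling rate, windowing, and band limits are assumptions, not the study's protocol.

```python
# Hedged sketch: per-step resonance peak and cumulative frequency shift.
import numpy as np

def resonance_peak(signal, fs, fmin=50.0, fmax=5000.0):
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spec[band])]

def frequency_shift(recordings, fs):
    peaks = [resonance_peak(r, fs) for r in recordings]
    return peaks[-1] - peaks[0], peaks
```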
A prospective earthquake forecast experiment for Japan
NASA Astrophysics Data System (ADS)
Yokoi, Sayoko; Nanjo, Kazuyoshi; Tsuruoka, Hiroshi; Hirata, Naoshi
2013-04-01
One major focus of the current Japanese earthquake prediction research program (2009-2013) is to move toward creating testable earthquake forecast models. For this purpose we started an experiment in forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their model performance. On 1 November 2009, we started the first earthquake forecast testing experiment for the Japan area. We use the unified catalogue compiled by the Japan Meteorological Agency (JMA) as the authoritative catalogue. The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called All Japan, Mainland, and Kanto. A total of 91 models were submitted to CSEP-Japan and are evaluated with the official CSEP suite of forecast-performance tests. In this presentation, we show the results of five rounds of the 3-month testing class. The HIST-ETAS7pa, MARFS and RI10K models showed the best total log-likelihood scores in the All Japan, Mainland and Kanto regions, respectively. It also became clear that time dependency of model parameters is not an effective factor in passing the CSEP consistency tests for the 3-month testing class in any region. In particular, the spatial distribution in the All Japan region was too difficult to pass the consistency test because of multiple events in a single bin. The number of target events per round in the Mainland region tended to be smaller than the models' expectations during all rounds, which resulted in rejections in the consistency tests because of overestimation. In the Kanto region, the pass ratio of the consistency tests for each model exceeded 80%, associated with well-balanced forecasting of event number and spatial distribution. Thanks to the multiple rounds of the experiment, we are beginning to understand the stability of the models, the robustness of model selection, and earthquake predictability in each region beyond stochastic fluctuations of seismicity. We plan to use the results in the design of a 3-dimensional earthquake forecasting model for the Kanto region, supported by the special project for reducing vulnerability to urban mega-earthquake disasters of the Ministry of Education, Culture, Sports, Science and Technology of Japan.
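The total log-likelihood used to rank the models is, for grid-based forecasts of this kind, typically the joint Poisson log-likelihood of the observed counts under the forecast rates. A minimal sketch, assuming a flattened rate and count grid:

```python
# Hedged sketch of a CSEP-style joint Poisson log-likelihood score.
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(rates, counts):
    rates = np.asarray(rates, dtype=float)
    counts = np.asarray(counts, dtype=float)
    return float(np.sum(counts * np.log(rates) - rates - gammaln(counts + 1)))
```

Higher (less negative) values indicate a forecast that assigns more probability to the observed earthquake distribution.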
Traffic model for advanced satellite designs and experiments for ISDN services
NASA Technical Reports Server (NTRS)
Pepin, Gerard R.; Hager, E. Paul
1991-01-01
The database structure and fields for categorizing and storing Integrated Services Digital Network (ISDN) user characteristics are outlined. This traffic-model database will be used to exercise models of the ISDN Advanced Communication Satellite to determine design parameters and performance for the NASA Satellite Communications Applications Research (SCAR) Program.
Scattering Models and Basic Experiments in the Microwave Regime
NASA Technical Reports Server (NTRS)
Fung, A. K.; Blanchard, A. J. (Principal Investigator)
1985-01-01
The objectives of research over the next three years are: (1) to develop a randomly rough surface scattering model which is applicable over the entire frequency band; (2) to develop a computer simulation method and algorithm to simulate scattering from known randomly rough surfaces, Z(x,y); (3) to design and perform laboratory experiments to study geometric and physical target parameters of an inhomogeneous layer; (4) to develop scattering models for an inhomogeneous layer which accounts for near field interaction and multiple scattering in both the coherent and the incoherent scattering components; and (5) a comparison between theoretical models and measurements or numerical simulation.
Deformation behavior of HCP titanium alloy: Experiment and Crystal plasticity modeling
Wronski, M.; Arul Kumar, Mariyappan; Capolungo, Laurent; ...
2018-03-02
The deformation behavior of commercially pure titanium is studied using experiments and a crystal plasticity model. Compression tests along the rolling, transverse, and normal directions, and tensile tests along the rolling and transverse directions, are performed at room temperature to study the activation of slip and twinning in hexagonal close-packed titanium. A detailed EBSD-based statistical analysis of the microstructure is performed to develop statistics of both {10-12} tensile and {11-22} compression twins. A simple Monte Carlo (MC) twin variant selection criterion is proposed within the framework of the visco-plastic self-consistent (VPSC) model, with a dislocation density (DD) based law used to describe dislocation hardening. In the model, plasticity is accommodated by prismatic, basal and pyramidal slip modes, and by {10-12} tensile and {11-22} compression twinning modes. The VPSC-MC model successfully captures the experimentally observed activation of low Schmid factor twin variants for both tensile and compression twin modes. The model also predicts macroscopic stress-strain response, texture evolution and twin volume fraction in agreement with experimental observations.
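One plausible reading of such a Monte Carlo variant selection step is a stochastic pick weighted by each variant's resolved shear (Schmid factor), so that high-Schmid-factor variants are favoured but low-Schmid-factor variants retain a finite probability. The sketch below illustrates this idea only; the variant geometries and weighting are assumptions, not the paper's exact criterion.

```python
# Hedged sketch: Schmid-factor-weighted Monte Carlo twin-variant pick.
import numpy as np

rng = np.random.default_rng(0)

def schmid_factor(load_dir, plane_normal, shear_dir):
    d = load_dir / np.linalg.norm(load_dir)
    return abs(d @ plane_normal) * abs(d @ shear_dir)

def pick_variant(load_dir, variants):
    # variants: list of (unit plane normal, unit shear direction) pairs
    m = np.array([schmid_factor(load_dir, n, s) for n, s in variants])
    p = m / m.sum()        # selection probability proportional to Schmid factor
    return rng.choice(len(variants), p=p)
```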
Statistical analyses of Higgs- and Z -portal dark matter models
NASA Astrophysics Data System (ADS)
Ellis, John; Fowlie, Andrew; Marzola, Luca; Raidal, Martti
2018-06-01
We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2, and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. We find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana, or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing ≳100 GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.
Estimates of effects of residual acceleration on USML-1 experiments
NASA Technical Reports Server (NTRS)
Naumann, Robert J.
1995-01-01
The purpose of this study effort was to develop analytical models to describe the effects of residual accelerations on the experiments to be carried on the first U.S. Microgravity Lab mission (USML-1) and to test the accuracy of these models by comparing the pre-flight predicted effects with the post-flight measured effects. After surveying the experiments to be performed on USML-1, it became evident that the anticipated residual accelerations during the USML-1 mission were well below the threshold for most of the primary experiments and all of the secondary (Glovebox) experiments, and that the only set of experiments that could provide quantifiable effects, and thus a definitive test of the analytical models, were the three melt growth experiments using the Bridgman-Stockbarger type Crystal Growth Furnace (CGF). This class of experiments is by far the most sensitive to the low-level quasi-steady accelerations that are unavoidable on spacecraft operating in low Earth orbit. Because of this, they have been the drivers for the acceleration requirements imposed on the Space Station. Therefore, it is appropriate that the models on which these requirements are based are tested experimentally. Also, since solidification proceeds directionally over a long period of time, the solidified ingot provides a more or less continuous record of the effects of acceleration disturbances.
GAS eleven node thermal model (GEM)
NASA Technical Reports Server (NTRS)
Butler, Dan
1988-01-01
The Eleven Node Thermal Model (GEM) of the Get Away Special (GAS) container was originally developed from the results of thermal tests of the GAS container. The model was then used in the thermal analysis and design of several NASA/GSFC GAS experiments, including the Flight Verification Payload, the Ultraviolet Experiment, and the Capillary Pumped Loop. The model description details the five-cubic-foot container both with and without an insulated end cap. Mass-specific heat values are also given so that transient analyses can be performed. A sample problem for each configuration is included so that GEM users can verify their computations. The model can be run on most personal computers with a thermal-analyzer solution routine.
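The kind of network GEM solves can be sketched as a lumped-parameter system, C_i dT_i/dt = Σ_j G_ij (T_j − T_i) + Q_i. The example below uses a placeholder chain of nodes; apart from the node count, the capacitances, conductances, and loads are illustrative assumptions, not the GAS container values.

```python
# Hedged sketch of a lumped-node transient thermal model.
import numpy as np
from scipy.integrate import solve_ivp

n = 11
C = np.full(n, 5e3)                 # thermal capacitance per node [J/K]
G = np.zeros((n, n))                # conductance matrix [W/K]: simple chain
for i in range(n - 1):
    G[i, i + 1] = G[i + 1, i] = 2.0
Q = np.zeros(n)
Q[0] = 50.0                         # heat load on one node [W]

def rhs(t, T):
    # C dT/dt = sum_j G_ij (T_j - T_i) + Q_i
    return (G @ T - G.sum(axis=1) * T + Q) / C

sol = solve_ivp(rhs, (0.0, 3600.0), np.full(n, 293.15), max_step=60.0)
```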
Icing Analysis of a Swept NACA 0012 Wing Using LEWICE3D Version 3.48
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.
2014-01-01
Icing calculations were performed for a NACA 0012 swept wing tip using LEWICE3D Version 3.48 coupled with the ANSYS CFX flow solver. The calculated ice shapes were compared to experimental data generated in the NASA Glenn Icing Research Tunnel (IRT). The IRT tests were designed to test the performance of the LEWICE3D ice void density model, which was developed to improve the prediction of swept wing ice shapes. Icing tests were performed for a range of temperatures at two different droplet inertia parameters and two different sweep angles. The predicted mass agreed well with the experiment, with an average difference of 12%. The LEWICE3D ice void density model under-predicted void density by an average of 30% for the large inertia parameter cases and by 63% for the small inertia parameter cases. This under-prediction in void density resulted in an over-prediction of ice area by an average of 115%. The LEWICE3D ice void density model produced a larger average area difference with experiment than the standard LEWICE density model, which does not account for the voids in the swept wing ice shape (115% and 75%, respectively), but it produced ice shapes which were deemed more appropriate because they were conservative (larger than experiment). Major contributors to the overly conservative ice shape predictions were deficiencies in the leading edge heat transfer and the sensitivity of the void ice density model to the particle inertia parameter. The scallop features present on the ice shapes were thought to generate interstitial flow and horseshoe vortices which enhance the leading edge heat transfer. A set of changes to improve the leading edge heat transfer and the void density model was tested. The changes improved the ice shape predictions considerably. More work needs to be done to evaluate the performance of these modifications for a wider range of geometries and icing conditions.
Highly Enriched Uranium Metal Cylinders Surrounded by Various Reflector Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard Jones; J. Blair Briggs; Leland Monteirth
A series of experiments was performed at Los Alamos Scientific Laboratory in 1958 to determine critical masses of cylinders of Oralloy (Oy) reflected by a number of materials. The experiments were all performed on the Comet Universal Critical Assembly Machine and consisted of discs of highly enriched uranium (93.3 wt.% 235U) reflected by half-inch- and one-inch-thick cylindrical shells of various reflector materials. The experiments were performed by members of Group N-2, particularly K. W. Gallup, G. E. Hansen, H. C. Paxton, and R. H. White. This experiment was intended to ascertain critical masses for criticality safety purposes, as well as to compare neutron transport cross sections to those obtained from danger coefficient measurements with the Topsy Oralloy-Tuballoy reflected and Godiva unreflected critical assemblies. The reflector materials examined in this series of experiments are as follows: magnesium, titanium, aluminum, graphite, mild steel, nickel, copper, cobalt, molybdenum, natural uranium, tungsten, beryllium, aluminum oxide, molybdenum carbide, and polythene (polyethylene). Also included are two special configurations of composite beryllium and iron reflectors. Analyses were performed in which the uncertainty associated with six different parameters was evaluated: extrapolation to the uranium critical mass, uranium density, 235U enrichment, reflector density, reflector thickness, and reflector impurities. In addition to the idealizations made by the experimenters (removal of the platen and diaphragm), two simplifications were also made to the benchmark models that resulted in a small bias and additional uncertainty. First, since impurities in core and reflector materials are only estimated, they are not included in the benchmark models. Second, the room, support structure, and other possible surrounding equipment were not included in the model. Bias values resulting from these two simplifications were determined, and the associated uncertainty in the bias values was included in the overall uncertainty in the benchmark keff values. Bias values were very small, ranging from 0.0004 Δk low to 0.0007 Δk low. Overall uncertainties range from ±0.0018 to ±0.0030. Major contributors to the overall uncertainty include the uncertainty in the extrapolation to the uranium critical mass and in the uranium density. Results are summarized in Figure 1 (Experimental, Benchmark-Model, and MCNP/KENO Calculated Results). The 32 configurations described and evaluated under ICSBEP Identifier HEU-MET-FAST-084 are judged to be acceptable for use as criticality safety benchmark experiments and should be valuable integral benchmarks for nuclear data testing of the various reflector materials. Details of the benchmark models, uncertainty analyses, and final results are given in this paper.
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
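The generative idea can be sketched directly: draw a variance from an inverse gamma distribution, then draw the EMG sample from a zero-mean Gaussian with that variance. The moment-matching estimator below is an illustrative stand-in for the paper's marginal-likelihood procedure, and the rectify-and-smooth window is an assumption.

```python
# Hedged sketch: EMG as Gaussian noise with inverse-gamma variance.
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 4.0, 3.0                            # inverse gamma shape / scale
var = 1.0 / rng.gamma(alpha, 1.0 / beta, 10000)   # inverse-gamma variance draws
emg = rng.normal(0.0, np.sqrt(var))               # one sample per variance value

# crude variance proxy from the rectified, smoothed signal
kernel = np.ones(64) / 64
proxy = (np.convolve(np.abs(emg), kernel, mode="same") ** 2) * np.pi / 2

m, v = proxy.mean(), proxy.var()                  # moment matching for InvGamma:
alpha_hat = m**2 / v + 2                          # mean = beta/(alpha-1),
beta_hat = m * (alpha_hat - 1)                    # var = mean^2/(alpha-2)
```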
Competency-Based Recordkeeping. Bulletin 1781.
ERIC Educational Resources Information Center
Nicholls State Univ., Thibodaux, LA.
This model instructional unit was developed to aid business teachers in Louisiana to prepare students in grades 10-12 for business and office employment, further educational experiences, and responsible consumer activity. It provides guidance on model performance objectives, current technology content, sources, and supplemental materials.…
Park, Sung Hwan; Lee, Ji Min; Kim, Jong Shik
2013-01-01
The irregular performance of a mechanical-type constant power regulator is considered. In order to find the cause of an irregular discharge flow in the cut-off pressure region, modeling and numerical simulations are performed to observe the dynamic behavior of the internal parts of the constant power regulator system for a swashplate-type axial piston pump. The commercial numerical simulation software AMESim is applied to model the mechanical-type regulator with the hydraulic pump and to simulate its performance. The validity of the simulation model of the constant power regulator system is verified by comparing simulation results with experiments. In order to find the cause of the irregular performance of the mechanical-type constant power regulator system, the behavior of the main components, such as the spool, sleeve, and counterbalance piston, is investigated using computer simulation. A shape modification of the counterbalance piston is proposed to improve the undesirable performance of the mechanical-type constant power regulator, and the improvement is verified by computer simulation using AMESim.
The role of visual imagery in the retention of information from sentences.
Drose, G S; Allen, G L
1994-01-01
We conducted two experiments to evaluate a multiple-code model of sentence memory that posits both propositional and visual representational systems. Both experiments involved recognition memory. The results of Experiment 1 indicated that subjects' recognition memory for concrete sentences was superior to that for abstract sentences. Instructions to use visual imagery to enhance recognition performance yielded no effects. Experiment 2 tested the prediction that interference by a visual task would differentially affect recognition memory for concrete sentences. Results showed the interference task had a detrimental effect on recognition memory for both concrete and abstract sentences. Overall, the evidence provided partial support for both a multiple-code model and a semantic integration model of sentence memory.
ERIC Educational Resources Information Center
Zimmermann, Judith; Brodersen, Kay H.; Heinimann, Hans R.; Buhmann, Joachim M.
2015-01-01
The graduate admissions process is crucial for controlling the quality of higher education, yet, rules-of-thumb and domain-specific experiences often dominate evidence-based approaches. The goal of the present study is to dissect the predictive power of undergraduate performance indicators and their aggregates. We analyze 81 variables in 171…
ERIC Educational Resources Information Center
Gosetti-Murrayjohn, Angela; Schneider, Federico
2009-01-01
This article provides a reflection on a team-teaching experience in which performative dialogues between co-instructors and among students provided a pedagogical framework within which comparative analysis of textual traditions within the classical tradition could be optimized. Performative dialogues thus provided a model for and enactment of…
DSN telemetry system performance with convolutionally coded data
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.; Benjauthrit, B.; Greenhall, C. A.; Kuma, D. M.; Lam, J. K.; Wong, J. S.; Urech, J.; Vit, L. D.
1975-01-01
The results obtained to date, and the plans for future experiments, for the DSN telemetry system are presented. The performance of the DSN telemetry system in decoding convolutionally coded data by both sequential and maximum-likelihood techniques is being determined by testing at various deep space stations. The evaluation of performance models is also an objective of this activity.
First Results of the Regional Earthquake Likelihood Models Experiment
NASA Astrophysics Data System (ADS)
Schorlemmer, Danijel; Zechar, J. Douglas; Werner, Maximilian J.; Field, Edward H.; Jackson, David D.; Jordan, Thomas H.
2010-08-01
The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment—a truly prospective earthquake prediction effort—is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary—the forecasts were meant for an application of 5 years—we find interesting results: most of the models are consistent with the observation and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.
Toward Evolvable Hardware Chips: Experiments with a Programmable Transistor Array
NASA Technical Reports Server (NTRS)
Stoica, Adrian
1998-01-01
Evolvable hardware is reconfigurable hardware that self-configures under the control of an evolutionary algorithm. The search for a hardware configuration can be performed using software models or, faster and more accurately, directly in reconfigurable hardware. Several experiments have demonstrated the possibility of automatically synthesizing both digital and analog circuits. The paper introduces an approach to automated synthesis of CMOS circuits based on evolution on a Programmable Transistor Array (PTA). The approach is illustrated with a software experiment showing evolutionary synthesis of a circuit with a desired DC characteristic. A hardware implementation of a test PTA chip is then described, and the same evolutionary experiment is performed on the chip, demonstrating circuit synthesis/self-configuration directly in hardware.
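The evolutionary search loop itself is conventional. A minimal genetic-algorithm sketch over switch-configuration bitstrings follows; the fitness function is a placeholder for a SPICE simulation or an on-chip DC measurement, and all sizes and rates are illustrative assumptions.

```python
# Hedged sketch: GA search over PTA-style configuration bitstrings.
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, n_gen = 24, 40, 100

def evaluate(bits):
    # placeholder fitness: agreement with an arbitrary target pattern
    target = (np.arange(n_bits) % 2).astype(float)
    return -float(np.sum((bits - target) ** 2))

pop = rng.integers(0, 2, (pop_size, n_bits))
for _ in range(n_gen):
    fit = np.array([evaluate(ind) for ind in pop])
    parents = pop[np.argsort(fit)][pop_size // 2:]        # keep the better half
    cuts = rng.integers(1, n_bits, pop_size // 2)
    kids = np.array([np.concatenate((parents[i % len(parents)][:c],
                                     parents[(i + 1) % len(parents)][c:]))
                     for i, c in enumerate(cuts)])        # one-point crossover
    flip = rng.random(kids.shape) < 0.02                  # bit-flip mutation
    kids[flip] ^= 1
    pop = np.vstack((parents, kids))
```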
It looks easy! Heuristics for combinatorial optimization problems.
Chronicle, Edward P; MacGregor, James N; Ormerod, Thomas C; Burr, Alistair
2006-04-01
Human performance on instances of computationally intractable optimization problems, such as the travelling salesperson problem (TSP), can be excellent. We have proposed a boundary-following heuristic to account for this finding. We report three experiments with TSPs where the capacity to employ this heuristic was varied. In Experiment 1, participants free to use the heuristic produced solutions significantly closer to optimal than did those prevented from doing so. Experiments 2 and 3 together replicated this finding in larger problems and demonstrated that a potential confound had no effect. In all three experiments, performance was closely matched by a boundary-following model. The results implicate global rather than purely local processes. Humans may have access to simple, perceptually based, heuristics that are suited to some combinatorial optimization tasks.
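An algorithmic analogue of boundary following is to start from the convex hull of the point set (the "boundary") and insert interior points where they lengthen the tour least. The sketch below illustrates that analogue; it is not the authors' model, and cheapest insertion is only one of several possible interior strategies.

```python
# Hedged sketch: convex-hull-based tour construction for the Euclidean TSP.
import numpy as np
from scipy.spatial import ConvexHull

def hull_insertion_tour(pts):
    tour = list(ConvexHull(pts).vertices)           # boundary first
    rest = [i for i in range(len(pts)) if i not in tour]
    while rest:
        best = None
        for p in rest:                              # cheapest insertion
            for k in range(len(tour)):
                a, b = tour[k], tour[(k + 1) % len(tour)]
                cost = (np.linalg.norm(pts[p] - pts[a])
                        + np.linalg.norm(pts[p] - pts[b])
                        - np.linalg.norm(pts[a] - pts[b]))
                if best is None or cost < best[0]:
                    best = (cost, p, k + 1)
        _, p, k = best
        tour.insert(k, p)
        rest.remove(p)
    return tour
```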
A new approach to electrophoresis in space
NASA Technical Reports Server (NTRS)
Snyder, Robert S.; Rhodes, Percy H.
1990-01-01
Previous electrophoresis experiments performed in space are reviewed. Sufficient data are available from the results of these experiments to show that they were designed with incomplete knowledge of the fluid dynamics of the process, including electrohydrodynamics. Redesigning laboratory chambers and operating procedures developed on Earth for space, without understanding both the advantages and disadvantages of the microgravity environment, has yielded poor separations of both cells and proteins. However, electrophoresis is still an important separation tool in the laboratory, and thermal convection does limit its performance. Thus, there is justification for electrophoresis in space, but the emphasis of future space experiments must be directed toward basic research, with model experiments to understand the microgravity environment and fluid analysis to test the basic principles of the process.
pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling
NASA Astrophysics Data System (ADS)
Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.
2016-03-01
We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
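The "experiment as a high-level class" idea can be pictured as an ordered event list whose parameters are perturbed and re-run. The sketch below is schematic only: the event names, fields, and perturbation scheme are illustrative assumptions, and the real forward computation lives in the pynoddy package and the underlying Noddy code.

```python
# Hedged sketch: encapsulating a kinematic experiment as a class.
import random

class KinematicExperiment:
    def __init__(self, events):
        self.events = list(events)   # e.g. [("fold", {"amplitude": 500.0}), ...]

    def perturb(self, sigma=0.1):
        # return a copy with each kinematic parameter perturbed
        noisy = [(name, {k: v * (1 + random.gauss(0.0, sigma))
                         for k, v in params.items()})
                 for name, params in self.events]
        return KinematicExperiment(noisy)

    def run(self):
        # placeholder for the forward kinematic computation
        return list(self.events)

base = KinematicExperiment([("fold", {"amplitude": 500.0}),
                            ("fault", {"dip": 60.0, "slip": 100.0})])
realizations = [base.perturb().run() for _ in range(100)]
```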
Neural signatures of experience-based improvements in deterministic decision-making.
Tremel, Joshua J; Laurent, Patryk A; Wolk, David A; Wheeler, Mark E; Fiez, Julie A
2016-12-15
Feedback about our choices is a crucial part of how we gather information and learn from our environment. It provides key information about decision experiences that can be used to optimize future choices. However, our understanding of the processes through which feedback translates into improved decision-making is lacking. Using neuroimaging (fMRI) and cognitive models of decision-making and learning, we examined the influence of feedback on multiple aspects of decision processes across learning. Subjects learned correct choices to a set of 50 word pairs across eight repetitions of a concurrent discrimination task. Behavioral measures were then analyzed with both a drift-diffusion model and a reinforcement learning model. Parameter values from each were then used as fMRI regressors to identify regions whose activity fluctuates with specific cognitive processes described by the models. The patterns of intersecting neural effects across models support two main inferences about the influence of feedback on decision-making. First, frontal, anterior insular, fusiform, and caudate nucleus regions behave like performance monitors, reflecting errors in performance predictions that signal the need for changes in control over decision-making. Second, temporoparietal, supplementary motor, and putamen regions behave like mnemonic storage sites, reflecting differences in learned item values that inform optimal decision choices. As information about optimal choices is accrued, these neural systems dynamically adjust, likely shifting the burden of decision processing from controlled performance monitoring to bottom-up, stimulus-driven choice selection. Collectively, the results provide a detailed perspective on the fundamental ability to use past experiences to improve future decisions. Copyright © 2016 Elsevier B.V. All rights reserved.
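The reinforcement-learning half of such an analysis can be sketched as a delta-rule learner whose trial-by-trial prediction errors become parametric fMRI regressors. The learning rate and task encoding below are illustrative assumptions, not the fitted model of the study.

```python
# Hedged sketch: prediction-error regressor from a delta-rule learner.
import numpy as np

def prediction_error_regressor(choices, feedback, n_items, lr=0.2):
    """choices: chosen item index per trial; feedback: 1 correct, 0 incorrect."""
    v = np.zeros(n_items)            # learned item values
    pe = np.empty(len(choices))
    for t, (c, r) in enumerate(zip(choices, feedback)):
        pe[t] = r - v[c]             # reward prediction error
        v[c] += lr * pe[t]           # delta-rule value update
    return pe
```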
A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, Laura; Jakob, Christian; Cheung, K.
2013-06-27
Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the prescribed best-estimate large-scale data. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCM and 2 cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations, with limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCM and CRM. Systematic differences are also apparent in the ensemble-mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRM and SCM. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models beyond the more traditional single best-estimate simulation.
Element Material Exposure Experiment by EFFU
NASA Technical Reports Server (NTRS)
Hashimoto, Yoshihiro; Ito, Masaaki; Ishii, Masahiro
1992-01-01
The National Space Development Agency of Japan (NASDA) is planning to perform an 'Element Material Exposure Experiment' using the Exposed Facility Flyer Unit (EFFU). This paper presents an initial design of experiments proposed for this project by our company. The EFFU is installed on the Space Flyer Unit (SFU) as a partial model of the Space Station JEM exposed facility. The SFU is scheduled to be launched by an H-2 rocket in January or February of 1994; various tests will then be performed for three months in a 500-km-altitude orbit, after which the SFU will be retrieved by the U.S. Space Shuttle and returned to the ground. The mission sequence is shown.
NASA Astrophysics Data System (ADS)
Javernick, L.; Bertoldi, W.; Redolfi, M.
2017-12-01
Acquiring high-quality, low-cost topographic data has never been easier, owing to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, capturing millimetre resolution and accuracy, or covering large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate model performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With these new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided-river flume experiments, capturing time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. The numerical model Delft3D was calibrated through newly created metrics that quantified observed and modelled activation, deactivation, and bank erosion rates. This increased temporal coverage, combining high-resolution time series with long-term records, significantly improved the calibration routines and refined the calibration parameterization. Model results show a trade-off between achieving quantitative statistical agreement and qualitative morphological representation: simulations tuned for statistical agreement struggled to represent braided planforms (evolving toward meandering), while parameterizations that preserved braiding produced exaggerated activation and bank erosion rates. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917.
Precision measurements of the RSA method using a phantom model of hip prosthesis.
Mäkinen, Tatu J; Koort, Jyri K; Mattila, Kimmo T; Aro, Hannu T
2004-04-01
Radiostereometric analysis (RSA) has become one of the recommended techniques for pre-market evaluation of new joint implant designs. In this study we evaluated the effect of repositioning the X-ray tubes and the phantom model on the precision of the RSA method. In the precision measurements, we utilized mean error of rigid body fitting (ME) values as an internal control for the examinations. The ME value characterizes relative motion among the markers within each rigid body and is conventionally used to detect loosening of a bone marker. Three experiments, each consisting of 10 double examinations, were performed. In the first experiment, the X-ray tubes and the phantom model were not repositioned between the two examinations of a pair. In experiments two and three, the X-ray tubes were repositioned between the examinations of a pair; in experiment three, the position of the phantom model was also changed. Results showed significant differences in 2 of 12 comparisons when evaluating the translation and rotation of the prosthetic components. Repositioning procedures increased ME values, mimicking deformation of rigid body segments; the ME value thus seemed to be a more sensitive parameter than the migration values in this study design. These results confirm the importance of a standardized radiographic technique and accurate patient positioning for RSA measurements. Standardization and calibration procedures should be performed with phantom models in order to avoid unnecessary radiation doses to patients. The present model provides the means to establish and follow the intra-laboratory precision of the RSA method. The model is easily applicable in any research unit and allows comparison of precision values among the laboratories of multi-center trials.
Comparison of retention models for polymers 1. Poly(ethylene glycol)s.
Bashir, Mubasher A; Radke, Wolfgang
2006-10-27
The suitability of three different retention models for predicting the retention times of poly(ethylene glycol)s (PEGs) in gradient and isocratic chromatography was investigated. The models were the linear solvent strength model (LSSM), the quadratic solvent strength model (QSSM), and a model describing the retention behaviour of polymers (PM), extended here to account for gradient elution. All three models proved able to predict gradient retention volumes accurately, provided the analyte-specific parameters are likewise extracted from gradient experiments. The LSSM and QSSM, however, cannot in principle describe retention behaviour under critical or SEC conditions. Since the PM is designed to cover all three modes of polymer chromatography, it is superior to the other models. However, the determination of the analyte-specific parameters needed to calibrate the retention behaviour depends strongly on a suitable selection of initial experiments. A useful strategy for the purposeful selection of these calibration experiments is proposed.
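As a concrete illustration of the simplest of these models, the sketch below calibrates the LSSM relation ln k = ln k_w - S*phi from two isocratic runs and predicts retention at a new modifier fraction; the numbers are invented, not measured PEG data, and the QSSM would simply add a quadratic term in phi.

    import numpy as np

    # LSSM: ln k = ln k_w - S * phi, with k the retention factor and phi
    # the organic-modifier fraction. Two isocratic calibration runs fix
    # the two analyte-specific parameters; values below are illustrative.
    phi = np.array([0.30, 0.50])   # modifier fractions of calibration runs
    k = np.array([12.0, 2.5])      # observed retention factors
    slope, ln_kw = np.polyfit(phi, np.log(k), 1)
    S = -slope

    def retention_factor(phi_new):
        return np.exp(ln_kw - S * phi_new)

    print(f"predicted k at phi=0.40: {retention_factor(0.40):.2f}")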
Aquifer storage and recovery: recent hydrogeological advances and system performance.
Maliva, Robert G; Guo, Weixing; Missimer, Thomas M
2006-12-01
Aquifer storage and recovery (ASR) is part of the solution to the global problem of managing water resources to meet existing and future freshwater demands. However, the metaphorical "ASR bubble" has burst with the realization that ASR systems are more physically and chemically complex than the general conceptualization suggests. Aquifer heterogeneity and fluid-rock interactions can greatly affect ASR system performance. The results of modeling studies and field experience indicate that more sophisticated data collection and solute-transport modeling are required to predict how stored water will migrate in heterogeneous aquifers and how fluid-rock interactions will affect the quality of stored water. Historical experience has demonstrated that ASR systems can provide very large volumes of storage at lower cost than other options. The challenges moving forward are to improve the success rate of ASR systems, optimize system performance, and set expectations appropriately.
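As a rough illustration of the kind of performance metric at stake, recovery efficiency, the fraction of injected water later recovered at acceptable quality, is a common yardstick for ASR systems; the helper below is a schematic sketch, and site-specific definitions vary.

    def recovery_efficiency(recovered_volume, injected_volume):
        # Fraction of injected water recovered while still meeting
        # quality targets; an illustrative, simplified definition.
        return recovered_volume / injected_volume

    # Example cycle: 0.8 Mm3 recovered of 1.2 Mm3 injected
    print(f"recovery efficiency: {recovery_efficiency(0.8, 1.2):.0%}")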
The value of information in a multi-agent market model. The luck of the uninformed
NASA Astrophysics Data System (ADS)
Tóth, B.; Scalas, E.; Huber, J.; Kirchler, M.
2007-01-01
We present an experimental and simulation-based study of a multi-agent stock market driven by a double-auction order matching mechanism. Studying the effect of cumulative information on trader performance, we find a non-monotonic relationship between traders' net returns and their information levels, both in the experiments and in the simulations. In particular, moderately informed traders perform worse than uninformed ones, and only traders with the highest levels of information (insiders) are able to beat the market. The simulations and the experiments reproduce many stylized facts of tick-by-tick stock-exchange data, such as fast decay of the autocorrelation of returns, volatility clustering, and fat-tailed distributions of returns. These results carry a message for everyday life: they offer a possible explanation of why, on average, professional fund managers perform worse than the market index.
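For readers unfamiliar with the mechanism, the sketch below shows a minimal double-auction order matching loop of the kind that drives such simulations; the tie-breaking and midpoint-pricing rules are simplifying assumptions, not the authors' exact implementation.

    import heapq

    class OrderBook:
        # Limit orders rest in two priority queues; a trade executes
        # whenever the best bid crosses the best ask.
        def __init__(self):
            self.bids = []   # max-heap via negated prices
            self.asks = []   # min-heap

        def submit(self, side, price, trader):
            if side == "buy":
                heapq.heappush(self.bids, (-price, trader))
            else:
                heapq.heappush(self.asks, (price, trader))
            return self._match()

        def _match(self):
            trades = []
            while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
                bid, buyer = heapq.heappop(self.bids)
                ask, seller = heapq.heappop(self.asks)
                trades.append((buyer, seller, (ask - bid) / 2))  # midpoint price
            return trades

    book = OrderBook()
    book.submit("sell", 99.0, "s1")
    print(book.submit("buy", 101.0, "b1"))  # -> [('b1', 's1', 100.0)]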
BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations
NASA Astrophysics Data System (ADS)
Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I.; Strydis, Christos
2017-12-01
Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies: an Intel Xeon Phi CPU, an NVIDIA GPGPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload's performance characteristics. Main results. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and that all three technologies are required to cope with all the model use cases. Significance. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available. Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features for the computational-neuroscience community, paving the way for wider adoption.
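To illustrate what the PyNN integration buys the user, a minimal PyNN script of the kind BrainFrame could transparently retarget might look as follows; it assumes the NEURON back-end is installed and substitutes PyNN's standard Hodgkin-Huxley cell for the paper's extended-HH inferior-olive model, with invented network sizes and densities.

    import pyNN.neuron as sim  # back-end choice; BrainFrame would route
                               # this to its accelerator back-ends instead

    sim.setup(timestep=0.025)  # ms

    # Stand-in network: PyNN's standard HH cell, illustrative sizes
    pop = sim.Population(96, sim.HH_cond_exp())
    proj = sim.Projection(pop, pop,
                          sim.FixedProbabilityConnector(p_connect=0.1),
                          sim.StaticSynapse(weight=0.01, delay=1.0))

    pop.record('v')
    sim.run(100.0)                           # ms
    segment = pop.get_data().segments[0]     # Neo segment with voltages
    sim.end()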
Modelling of LOCA Tests with the BISON Fuel Performance Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, Richard L; Pastore, Giovanni; Novascone, Stephen Rhead
2016-05-01
BISON is a modern finite-element based, multidimensional nuclear fuel performance code under development at Idaho National Laboratory (USA). Recent advances in BISON include extending the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in the separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and the time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out within the framework of the collaboration between Idaho National Laboratory and the Halden Reactor Project, and of the IAEA Coordinated Research Project FUMAC.
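As a flavour of the physics involved, a cladding burst criterion typically compares the thin-wall hoop stress against a temperature-dependent burst stress; the sketch below uses a made-up strength correlation and illustrative geometry, not BISON's implemented models.

    import math

    def hoop_stress(pressure_diff_mpa, radius_mm, thickness_mm):
        # Thin-wall hoop stress: sigma = dP * r / t
        return pressure_diff_mpa * radius_mm / thickness_mm

    def burst_stress_mpa(temp_k):
        # Placeholder exponential decay of cladding strength with
        # temperature -- an illustrative assumption only
        return 500.0 * math.exp(-(temp_k - 600.0) / 300.0)

    sigma = hoop_stress(8.0, 4.75, 0.57)  # illustrative rod geometry
    print(f"hoop stress {sigma:.0f} MPa, "
          f"burst predicted at 1100 K: {sigma > burst_stress_mpa(1100.0)}")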
Human activity discrimination for maritime application
NASA Astrophysics Data System (ADS)
Boettcher, Evelyn; Deaver, Dawne M.; Krapels, Keith
2008-04-01
The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is investigating how motion affects sensor performance estimates from the target acquisition model (NVThermIP). This paper, sponsored by the Office of Naval Research (ONR), looks specifically at estimating sensor performance for the task of discriminating human activities on watercraft. Traditionally, sensor models were calibrated using still images. While that approach is sufficient for static targets, video allows one to use motion cues to discern the type of human activity more quickly and accurately. These effects, in turn, alter estimated sensor performance and are measured here in order to calibrate current target acquisition models for this task. The study employed an eleven-alternative forced-choice (11AFC) human perception experiment to measure the difficulty of discriminating distinct human activities on watercraft. A mid-wave infrared camera was used to collect video at night. A description of the experiment's construction is given, covering data collection, image processing, perception testing, and how contrast was defined for video. The results are applicable to evaluating field sensor performance for Anti-Terrorism and Force Protection (AT/FP) tasks for the U.S. Navy.
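For context on the 11AFC design, chance performance with eleven alternatives is 1/11, and observed proportions correct are often corrected for guessing as sketched below; this is a standard correction shown for illustration, not necessarily NVESD's exact analysis.

    def corrected_probability(p_observed, n_alternatives=11):
        # Correct an n-alternative forced-choice proportion correct for
        # guessing: chance is 1/n, so rescale the interval [1/n, 1] to [0, 1]
        chance = 1.0 / n_alternatives
        return max(0.0, (p_observed - chance) / (1.0 - chance))

    print(f"{corrected_probability(0.62):.2f}")  # 0.62 observed -> ~0.58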