NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model, or suite of models, to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. The Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. The unstructured-mesh analysis code FUN2D evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
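The additive first-order correction at the heart of this model-management scheme can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's setup: a one-dimensional toy objective stands in for the RANS model, a biased quadratic for the Euler model, the trust-region subproblem is solved by brute-force grid search, and the bookkeeping ignores the cost of derivatives (obtained by adjoints in practice):

```python
import numpy as np

# Toy stand-ins for the paper's models (assumptions, not the FUN2D problem):
f_hi = lambda x: (x - 1.0)**2 + 0.05*np.sin(8*x)   # "high-fidelity" objective
f_lo = lambda x: 0.8*(x - 1.1)**2                  # cheaper, biased "low-fidelity" model

def grad(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2*h)

def ammo_minimize(x, delta=1.0, tol=1e-6, max_iter=50):
    """First-order additive correction + trust region, in the AMMO spirit."""
    hi_evals = 0
    for _ in range(max_iter):
        fh, gh = f_hi(x), grad(f_hi, x); hi_evals += 1
        gl = grad(f_lo, x)
        # Corrected surrogate matches f_hi and its slope at x (first-order consistency)
        s = lambda y: f_lo(y) + (fh - f_lo(x)) + (gh - gl)*(y - x)
        # Minimize the surrogate on the trust region by a cheap grid search
        cand = np.linspace(x - delta, x + delta, 201)
        y = cand[np.argmin(s(cand))]
        pred = fh - s(y)                 # predicted reduction (always >= 0)
        if pred < tol:
            break
        rho = (fh - f_hi(y)) / pred; hi_evals += 1
        if rho > 0.1:                    # accept the step
            x = y
            delta = min(2*delta, 4.0) if rho > 0.75 else delta
        else:
            delta *= 0.5                 # reject and shrink the trust region
    return x, hi_evals
```

Because the corrected surrogate matches the high-fidelity value and slope at every iterate, the trust-region logic inherits convergence to a high-fidelity stationary point while most of the work hits only the cheap model.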
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which incurs high computational cost. Variable fidelity approximation-based design optimization approaches can enable efficient exploration and optimization of the design space using approximation models with different levels of fidelity, and they have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called nested designs, is essential. In this article, a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
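The nested-design idea can be sketched as follows: the low-fidelity design is a maximin Latin hypercube, and the high-fidelity points are a space-filling subset of it, so every expensive sample reuses a cheap one. Random-restart maximin and greedy subset selection below are simple stand-ins for the article's successive local enumeration and modified harmony search:

```python
import numpy as np

rng = np.random.default_rng(0)

def maximin_lhs(n, dim, restarts=200):
    """Random-restart maximin Latin hypercube (stand-in for successive local enumeration)."""
    best, best_d = None, -1.0
    for _ in range(restarts):
        # one stratified sample per row/column stratum
        X = (np.argsort(rng.random((n, dim)), axis=0) + rng.random((n, dim))) / n
        d = np.min(np.linalg.norm(X[:, None] - X[None, :], axis=-1)
                   + np.eye(n) * 1e9)   # ignore self-distances
        if d > best_d:
            best, best_d = X, d
    return best

def nested_subset(X_lo, n_hi):
    """Greedy maximin subset: HF points are drawn from the LF design, so the two
    designs are nested (stand-in for the modified global harmony search)."""
    idx = [int(np.argmin(np.linalg.norm(X_lo - 0.5, axis=1)))]  # start near the centre
    while len(idx) < n_hi:
        d = np.min(np.linalg.norm(X_lo[:, None] - X_lo[idx][None, :], axis=-1), axis=1)
        idx.append(int(np.argmax(d)))   # add the point farthest from the chosen set
    return np.array(sorted(idx))

X_lo = maximin_lhs(40, 2)               # 40-point low-fidelity design
X_hi_idx = nested_subset(X_lo, 10)      # 10 nested high-fidelity points
```

The greedy rule is much weaker than the article's optimizers but preserves the two properties that matter: the Latin (stratification) property of the LF design and the nesting of the HF subset.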
First-Order Frameworks for Managing Models in Engineering Optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluations. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved threefold savings in terms of high-fidelity function and derivative evaluations in the case of variable-resolution models and fivefold savings in the case of variable-fidelity physics models. The savings are problem dependent, but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an indication of the scope of applicability of the first-order frameworks.
A multi-fidelity framework for physics based rotor blade simulation and optimization
NASA Astrophysics Data System (ADS)
Collins, Kyle Brian
New helicopter rotor designs are desired that offer increased efficiency, reduced vibration, and reduced noise. Rotor designers in industry need methods that allow them to use the most accurate simulation tools available to search for these optimal designs. Computer-based rotor analysis and optimization have been advanced by the development of industry-standard codes known as "comprehensive" rotorcraft analysis tools. These tools typically use table look-up aerodynamics and simplified inflow models and perform aeroelastic analysis using Computational Structural Dynamics (CSD). Due to the simplified aerodynamics, most design studies vary structure-related design variables such as sectional mass and stiffness. The optimization of shape-related variables in forward flight using these tools is complicated, and results are viewed with skepticism because rotor blade loads are not accurately predicted. The most accurate methods of rotor simulation utilize Computational Fluid Dynamics (CFD) but have historically been considered too computationally intensive to be used in computer-based optimization, where numerous simulations are required. An approach is needed in which high-fidelity CFD rotor analysis can be utilized in a shape-variable optimization problem with multiple objectives, and which works in forward flight as well as hover. An alternative is proposed, founded on the idea that efficient hybrid CFD methods of rotor analysis are ready to be used in preliminary design. In addition, the proposed approach recognizes the usefulness of lower-fidelity physics-based analysis and surrogate modeling. Together, they are used with high-fidelity analysis in an intelligent process of surrogate model building of parameters in the high-fidelity domain. Closing the loop between high- and low-fidelity analysis is a key aspect of the proposed approach.
This is done by using information from higher fidelity analysis to improve predictions made with lower fidelity models. This thesis documents the development of automated low and high fidelity physics based rotor simulation frameworks. The low fidelity framework uses a comprehensive code with simplified aerodynamics. The high fidelity model uses a parallel processor capable CFD/CSD methodology. Both low and high fidelity frameworks include an aeroacoustic simulation for prediction of noise. A synergistic process is developed that uses both the low and high fidelity frameworks together to build approximate models of important high fidelity metrics as functions of certain design variables. To test the process, a 4-bladed hingeless rotor model is used as a baseline. The design variables investigated include tip geometry and spanwise twist distribution. Approximation models are built for metrics related to rotor efficiency and vibration using the results from 60+ high fidelity (CFD/CSD) experiments and 400+ low fidelity experiments. Optimization using the approximation models found the Pareto Frontier anchor points, or the design having maximum rotor efficiency and the design having minimum vibration. Various Pareto generation methods are used to find designs on the frontier between these two anchor designs. When tested in the high fidelity framework, the Pareto anchor designs are shown to be very good designs when compared with other designs from the high fidelity database. This provides evidence that the process proposed has merit. Ultimately, this process can be utilized by industry rotor designers with their existing tools to bring high fidelity analysis into the preliminary design stage of rotors. In conclusion, the methods developed and documented in this thesis have made several novel contributions. First, an automated high fidelity CFD based forward flight simulation framework has been built for use in preliminary design optimization. 
The framework was built around an integrated, parallel processor capable CFD/CSD/AA process. Second, a novel method of building approximate models of high fidelity parameters has been developed. The method uses a combination of low and high fidelity results and combines Design of Experiments, statistical effects analysis, and aspects of approximation model management. And third, the determination of rotor blade shape variables through optimization using CFD based analysis in forward flight has been performed. This was done using the high fidelity CFD/CSD/AA framework and method mentioned above. While the low and high fidelity predictions methods used in the work still have inaccuracies that can affect the absolute levels of the results, a framework has been successfully developed and demonstrated that allows for an efficient process to improve rotor blade designs in terms of a selected choice of objective function(s). Using engineering judgment, this methodology could be applied today to investigate opportunities to improve existing designs. With improvements in the low and high fidelity prediction components that will certainly occur, this framework could become a powerful tool for future rotorcraft design work. (Abstract shortened by UMI.)
Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling.
Perdikaris, P; Raissi, M; Damianou, A; Lawrence, N D; Karniadakis, G E
2017-02-01
Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
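The nonlinear autoregressive idea can be sketched with two small Gaussian processes: the second GP takes the low-fidelity prediction as an extra input, so it can learn a nonlinear, space-dependent cross-correlation rather than a fixed linear scaling. This is a bare-bones illustration with fixed hyperparameters and no uncertainty propagation (the paper's framework handles both); the test pair below resembles one of the paper's synthetic benchmarks:

```python
import numpy as np

def rbf(A, B, ls):
    d2 = ((A[:, None, :] - B[None, :, :]) / ls) ** 2
    return np.exp(-0.5 * d2.sum(-1))

class GP:
    """Minimal zero-mean GP regressor (fixed hyperparameters, for illustration only)."""
    def __init__(self, ls=0.2, noise=1e-8):
        self.ls, self.noise = ls, noise
    def fit(self, X, y):
        self.X, self.y = X, y
        K = rbf(X, X, self.ls) + self.noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, y)
        return self
    def predict(self, Xs):
        return rbf(Xs, self.X, self.ls) @ self.alpha

# Two levels: f_hi(x) ~ g(x, f_lo(x)) with g itself a GP (nonlinear autoregression)
f_lo = lambda x: np.sin(8 * np.pi * x)
f_hi = lambda x: (x - np.sqrt(2)) * f_lo(x) ** 2        # nonlinear map of the LF output

X1 = np.linspace(0, 1, 50)[:, None]                     # plentiful cheap data
X2 = np.linspace(0, 1, 15)[:, None]                     # scarce expensive data

gp1 = GP(ls=0.05).fit(X1, f_lo(X1[:, 0]))               # level 1: learn f_lo
aug = np.hstack([X2, gp1.predict(X2)[:, None]])         # level 2 input: (x, f_lo(x))
gp2 = GP(ls=0.3).fit(aug, f_hi(X2[:, 0]))

def predict_hi(Xs):
    return gp2.predict(np.hstack([Xs, gp1.predict(Xs)[:, None]]))
```

Feeding the low-fidelity prediction in as a coordinate is what lets the scheme recover a quadratic (here) or otherwise nonlinear relationship that a linear autoregressive model would miss.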
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Second, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficients of an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
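The sequential-sampling ingredient can be sketched compactly. For brevity this replaces the article's improved hierarchical kriging with plain zero-mean kriging and reduces the active-learning criterion to "sample where the predictive variance is largest"; the test function and all constants are assumptions:

```python
import numpy as np

def k(a, b, ls=0.15):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls)**2)

def fit_predict(X, y, Xs):
    """Kriging mean and standard deviation on Xs (zero-mean GP, fixed kernel)."""
    K = k(X, X) + 1e-10 * np.eye(len(X))
    ks = k(Xs, X)
    mean = ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(ks * np.linalg.solve(K, ks.T).T, axis=1)
    return mean, np.sqrt(np.clip(var, 0.0, None))

f_hi = lambda x: (6*x - 2)**2 * np.sin(12*x - 4)     # toy "expensive" model

X = np.array([0.0, 0.5, 1.0])                        # initial HF design
cand = np.linspace(0, 1, 201)
for _ in range(8):                                   # sequential refinement loop
    _, std = fit_predict(X, f_hi(X), cand)
    X = np.append(X, cand[np.argmax(std)])           # sample where most uncertain
mean, _ = fit_predict(X, f_hi(X), cand)
```

Each new high-fidelity run is spent where the metamodel is least trusted, which is the essential mechanism behind ASM-IHK's "more accuracy at the same simulation cost" claim, even though the article's criterion is more sophisticated.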
NASA Astrophysics Data System (ADS)
Takemiya, Tetsushi
In modern aerospace engineering, the physics-based computational design method is becoming more important, as it is more efficient than experiments and more suitable for designing new types of aircraft (e.g., unmanned aerial vehicles or supersonic business jets) than the conventional design method, which relies heavily on historical data. To enhance the reliability of the physics-based computational design method, researchers have made tremendous efforts to improve the fidelity of models. However, high-fidelity models require longer computational times, so the advantage of efficiency is partially lost. This problem has been overcome with the development of variable fidelity optimization (VFO). In VFO, models of different fidelity are employed simultaneously in order to improve the speed and accuracy of convergence in an optimization process. Among the various types of VFO methods, one of the most promising is the approximation management framework (AMF). In the AMF, objective and constraint functions of a low-fidelity model are scaled at a design point so that the scaled functions, referred to as "surrogate functions," match those of a high-fidelity model. Since the scaling functions and the low-fidelity model together constitute the surrogate functions, evaluating the surrogate functions is faster than evaluating the high-fidelity model. Therefore, in the optimization process, in which gradient-based optimization is implemented and thus many function calls are required, the surrogate functions are used instead of the high-fidelity model to obtain a new design point. The best feature of the AMF is that it may converge to a local optimum of the high-fidelity model in much less computational time than optimization with the high-fidelity model alone.
However, through literature surveys and implementations of the AMF, the author found that (1) the AMF is very vulnerable when the computational analysis models have numerical noise, which is very common in high-fidelity models, and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization step in the AMF. To solve the first problem, the automatic differentiation (AD) technique, which reads the code of an analysis model and automatically generates new derivative code based on mathematical rules, is applied. Derivatives computed with the generated derivative code are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations, such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires massive memory. The author addressed this deficiency by modifying the AD approach, developing a more efficient implementation for CFD, and successfully applying AD to general CFD software. To solve the second problem, the governing equation of the trust region ratio, which is very strict against constraint violations, is modified so that it accepts violations of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point.
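The principle behind AD is worth a tiny illustration. The thesis applies source-transformation AD to CFD codes; the dual-number sketch below is only the underlying forward-mode idea, not the tool used in the work:

```python
class Dual:
    """Forward-mode AD via dual numbers: each value carries its derivative along."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule applied mechanically at every operation
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).dot           # seed dx/dx = 1
```

Because every elementary operation propagates an exact derivative, the result is analytical (no step-size tuning as in finite differences), which is the property the thesis exploits at much larger scale.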
With these modifications, the AMF is referred to as the "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite-difference (FD) method; then, the Robust AMF is run alongside sequential quadratic programming (SQP) using only high-fidelity models. The proposed AD method computes derivatives more accurately and faster than the FD method, and the Robust AMF successfully optimizes the shapes of the airfoil and the wing in much less time than SQP with only high-fidelity models. These results clearly show the effectiveness of the Robust AMF. Finally, the feasibility of further reducing the computational time for calculating derivatives, and the need for an AMF that keeps the optimum design point in the feasible region, are discussed as future work.
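The "accept mild constraint violations" modification reduces to a small change in the trust-region update rule. The function below is a hedged sketch of that idea with assumed threshold values, not the thesis's actual governing equation:

```python
def update_trust_region(f_old, f_new, pred_red, viol_new, delta,
                        eta1=0.1, eta2=0.75, viol_tol=1e-3):
    """Sketch of a 'robust' trust-region update: steps that only mildly violate
    the constraints (viol_new <= viol_tol) are still judged on their merit
    instead of being rejected outright, which is what stalls the plain AMF.
    Returns (new_delta, step_accepted)."""
    if viol_new > viol_tol:
        return delta * 0.25, False          # clearly infeasible: reject, shrink hard
    rho = (f_old - f_new) / max(pred_red, 1e-16)   # actual vs predicted reduction
    if rho < eta1:
        return delta * 0.5, False           # poor model agreement: reject, shrink
    if rho > eta2:
        return delta * 2.0, True            # excellent agreement: accept, expand
    return delta, True                      # acceptable agreement: accept, keep size
```

All thresholds (eta1, eta2, viol_tol, shrink/expand factors) are conventional trust-region defaults, assumed here for illustration.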
Autonomous Aerobraking: Thermal Analysis and Response Surface Development
NASA Technical Reports Server (NTRS)
Dec, John A.; Thornblom, Mark N.
2011-01-01
A high-fidelity thermal model of the Mars Reconnaissance Orbiter was developed for use in an autonomous aerobraking simulation study. Response surface equations were derived from the high-fidelity thermal model and integrated into the autonomous aerobraking simulation software. The high-fidelity thermal model was developed using the Thermal Desktop software and used in all phases of the analysis. The exclusive use of Thermal Desktop represented a change from previously developed aerobraking thermal analysis methodologies. Comparisons were made between the Thermal Desktop solutions and those developed for the previous aerobraking thermal analyses performed on the Mars Reconnaissance Orbiter during aerobraking operations. A variable sensitivity screening study was performed to reduce the number of variables carried in the response surface equations. Thermal analysis and response surface equation development were performed for autonomous aerobraking missions at Mars and Venus.
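The screen-then-fit pattern described here can be sketched generically. The toy "thermal model," the variable names, and the screening threshold below are all assumptions for illustration; the actual study screened Thermal Desktop outputs, not an algebraic function:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the high-fidelity thermal model: peak temperature as a
# function of scaled flight variables plus two nuisance variables (pure fiction).
def thermal_model(v):
    rho, vel, ang, a, b = v.T
    return 300 + 80*rho + 55*vel + 20*vel**2 + 10*rho*vel + 0.3*ang + 0.01*a - 0.02*b

n, d = 200, 5
X = rng.uniform(-1, 1, (n, d))
y = thermal_model(X)

# Sensitivity screening: drop variables whose linear main effect is negligible
lin, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
keep = np.flatnonzero(np.abs(lin[1:]) > 1.0)         # threshold is an assumption

# Quadratic response surface equation in the retained variables only
Z = X[:, keep]
m = len(keep)
terms = [np.ones(n)] + [Z[:, i] for i in range(m)] \
      + [Z[:, i] * Z[:, j] for i in range(m) for j in range(i, m)]
A = np.column_stack(terms)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Screening first keeps the response surface small enough to embed in a flight-software-style simulation loop, which is the point of replacing the full thermal model with equations.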
NASA Astrophysics Data System (ADS)
Horton, Scott
This research study investigated the effects of high-fidelity graphics on both learning and presence, or the "sense of being there," inside a Virtual Learning Environment (VLE). Four versions of a VLE on the subject of the element mercury were created, each with a different combination of high- and low-fidelity polygon models and high- and low-fidelity shaders. A total of 76 college-age (18+ years of age) participants were randomly assigned to one of the four conditions. The participants interacted with the VLE and then completed several posttest measures on learning, presence, and attitudes towards the VLE experience. Demographic information was also collected, including age, computer gameplay experience, number of virtual environments interacted with, gender, and time spent in this virtual environment. The data were analyzed as a 2 x 2 between-subjects ANOVA. The main effects of shader fidelity and polygon fidelity were both non-significant for learning and for all presence subscales inside the VLE. In addition, there was no significant interaction between shader fidelity and model fidelity. However, there were two significant results on the supplementary variables. First, gender had a significant main effect on all the presence subscales: females reported higher average levels of presence than their male counterparts. Second, gameplay hours, the number of hours a participant played computer games per week, had a significant main effect on participant scores on the learning measure. Participants who reported playing 15+ hours of computer games per week, the highest amount in the variable, had the highest scores as a group on the mercury learning measure, while those who played 1-5 hours per week had the lowest scores.
2008-03-01
…multiplicative corrections as well as space mapping transformations for models defined over a lower-dimensional space. A corrected surrogate model for the … correction functions used in [72]. If the low-fidelity model g(x̃) is defined over a lower-dimensional space, then a space mapping transformation is … required. As defined in [21, 72], space mapping is a method of mapping between models of different dimensionality or fidelity. Let P denote the space …
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into top soil can be described by alternative models with different degrees of fidelity: the Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
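The error trade-off described here reduces to a simple budget calculation: model bias stays fixed, while Monte Carlo sampling error shrinks as sigma/sqrt(n) with the affordable ensemble size n. All numbers below (costs, bias, budget) are illustrative assumptions, not values from the study:

```python
import numpy as np

def total_error(model_bias, sigma, cost_per_run, budget):
    """Total error = model error + Monte Carlo sampling error.
    The sampling error of an n-realization ensemble decays as sigma / sqrt(n)."""
    n = max(int(budget // cost_per_run), 1)      # ensemble size the budget allows
    return model_bias + sigma / np.sqrt(n)

budget = 1000.0
# Richards-equation stand-in: unbiased but expensive; Green-Ampt stand-in: biased but cheap.
err_hi = total_error(model_bias=0.00, sigma=1.0, cost_per_run=100.0, budget=budget)
err_lo = total_error(model_bias=0.05, sigma=1.0, cost_per_run=1.0,  budget=budget)
```

With these assumed numbers the cheap biased model wins (0.05 + 1/sqrt(1000) versus 1/sqrt(10)), illustrating how a reduced-complexity model can have the smaller total error whenever its bias is below the sampling error the expensive model cannot afford to reduce.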
Measuring trainer fidelity in the transfer of suicide prevention training
Cross, Wendi F.; Pisani, Anthony R.; Schmeelk-Cone, Karen; Xia, Yinglin; Tu, Xin; McMahon, Marcie; Munfakh, Jimmie Lou; Gould, Madelyn S.
2014-01-01
Background: Finding effective and efficient models to train large numbers of suicide prevention interventionists, including ‘hotline’ crisis counselors, is a high priority. Train-the-trainer (TTT) models are widely used but understudied. Aims: To assess the extent to which trainers following TTT delivered the Applied Suicide Intervention Skills Training (ASIST) program with fidelity, and to examine fidelity across two trainings and seven training segments. Methods: We recorded and reliably rated trainer fidelity, defined as adherence to program content and competence of program delivery, for 34 newly trained ASIST trainers delivering the program to crisis center staff on two separate occasions. A total of 324 observations were coded. Trainer demographics were also collected. Results: On average, trainers delivered two-thirds of the program. Previous training was associated with lower levels of trainer adherence to the program. 18% of trainers' observations were rated as solidly competent. Trainers did not improve fidelity from their first to second training. Significantly higher fidelity was found for lectures, and lower fidelity for interactive training activities, including asking about suicide and creating a safe plan. Conclusions: We found wide variability in trainer fidelity to the ASIST program following TTT, and few trainers had high levels of both adherence and competence. More research is needed to examine the cost-effectiveness of TTT models. PMID:24901061
Multi-fidelity machine learning models for accurate bandgap predictions of solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab
2016-12-28
Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange-correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.
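Two-level co-kriging in its simplest linear autoregressive form (y_hi = rho * y_lo + delta, with delta a Gaussian process) can be sketched as follows. The one-dimensional "bandgap" curves, sample layout, and kernel are invented stand-ins for the paper's DFT data, and the full framework also carries predictive variances that this sketch omits:

```python
import numpy as np

# Linear autoregressive co-kriging sketch: y_hi(x) = rho * y_lo(x) + delta(x).
f_lo = lambda x: np.sin(2*np.pi*x) + 1.5               # e.g. semi-local functional (toy)
f_hi = lambda x: 1.2*np.sin(2*np.pi*x) + 2.0 + 0.3*x   # e.g. hybrid functional (toy)

X_lo = np.linspace(0, 1, 21)
X_hi = X_lo[::4]                                       # nested: every 4th cheap point

# Scaling factor rho from least squares at the common (nested) sites
rho = np.sum(f_lo(X_hi) * f_hi(X_hi)) / np.sum(f_lo(X_hi)**2)
resid = f_hi(X_hi) - rho * f_lo(X_hi)                  # discrepancy data

def k(a, b, ls=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls)**2)

# GP on the discrepancy captures what the scaled low-fidelity curve misses
alpha = np.linalg.solve(k(X_hi, X_hi) + 1e-10*np.eye(len(X_hi)), resid)
predict = lambda x: rho * f_lo(x) + k(x, X_hi) @ alpha  # cheap f_lo used everywhere
```

The prediction calls the cheap model at every query point and spends the six expensive samples only on learning rho and the smooth discrepancy, which is the economic argument of the paper's framework.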
Integration of Engine, Plume, and CFD Analyses in Conceptual Design of Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Li, Wu; Campbell, Richard; Geiselhart, Karl; Shields, Elwood; Nayani, Sudheer; Shenoy, Rajiv
2009-01-01
This paper documents an integration of engine, plume, and computational fluid dynamics (CFD) analyses in the conceptual design of low-boom supersonic aircraft, using a variable fidelity approach. In particular, the Numerical Propulsion Simulation System (NPSS) is used for propulsion system cycle analysis and nacelle outer mold line definition, and a low-fidelity plume model is developed for plume shape prediction based on NPSS engine data and nacelle geometry. This model provides a capability for the conceptual design of low-boom supersonic aircraft that accounts for plume effects. Then a newly developed process for automated CFD analysis is presented for CFD-based plume and boom analyses of the conceptual geometry. Five test cases are used to demonstrate the integrated engine, plume, and CFD analysis process based on a variable fidelity approach, as well as the feasibility of the automated CFD plume and boom analysis capability.
Diagnosing the impact of alternative calibration strategies on coupled hydrologic models
NASA Astrophysics Data System (ADS)
Smith, T. J.; Perera, C.; Corrigan, C.
2017-12-01
Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and of society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models are imperative. While extensive attention has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity and variability of parameterizations and their impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness and fidelity.
Development of a measure of model fidelity for mental health Crisis Resolution Teams.
Lloyd-Evans, Brynmor; Bond, Gary R; Ruud, Torleif; Ivanecka, Ada; Gray, Richard; Osborn, David; Nolan, Fiona; Henderson, Claire; Mason, Oliver; Goater, Nicky; Kelly, Kathleen; Ambler, Gareth; Morant, Nicola; Onyett, Steve; Lamb, Danielle; Fahmy, Sarah; Brown, Ellie; Paterson, Beth; Sweeney, Angela; Hindle, David; Fullarton, Kate; Frerichs, Johanna; Johnson, Sonia
2016-12-01
Crisis Resolution Teams (CRTs) provide short-term intensive home treatment to people experiencing a mental health crisis. Trial evidence suggests CRTs can be effective at reducing hospital admissions and increasing satisfaction with acute care. When scaled up to the national level, however, CRT implementation and outcomes have been variable. We aimed to develop and test a fidelity scale to assess adherence to a model of best practice for CRTs, based on the best available evidence. A concept mapping process was used to develop a CRT fidelity scale. Participants (n = 68) from a range of stakeholder groups prioritised and grouped statements (n = 72) about important components of the CRT model, generated from a literature review, national survey and qualitative interviews. These data were analysed using Ariadne software, and the resultant cluster solution informed item selection for a CRT fidelity scale. Operational criteria and scoring anchor points were developed for each item. The CORE CRT fidelity scale was then piloted in 75 CRTs in the UK to assess the range of scores achieved and its feasibility for use in a 1-day fidelity review process. Trained reviewers (n = 16) rated CRT service fidelity in a vignette exercise to test the scale's inter-rater reliability. There were high levels of agreement within and between stakeholder groups regarding the most important components of the CRT model. A 39-item measure of CRT model fidelity was developed. Piloting indicated that the scale was feasible for use in assessing CRT model fidelity and had good face validity. The wide range of item scores and total scores across CRT services in the pilot demonstrates that the measure can distinguish lower- and higher-fidelity services. Moderately good inter-rater reliability was found, with an estimated correlation between individual ratings of 0.65 (95% CI: 0.54 to 0.76). The CORE CRT Fidelity Scale has been developed through a rigorous and systematic process.
Promising initial testing indicates its value in assessing adherence to a model of CRT best practice and in supporting service-improvement monitoring and planning. Further research is required to establish its psychometric properties and international applicability.
NASA Astrophysics Data System (ADS)
Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan
2016-04-01
A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-Shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The method optimizes performance while simultaneously maximizing its belief value. A population-based multi-objective optimization (MOO) algorithm using a steepest-descent-like strategy with proper orthogonal decomposition (POD) is used to search for robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust-region-based fidelity management is designed to reduce the computational cost caused by the expensive model: when solutions fall into the trust region, the cheaper analytical model is used. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It is shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
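The evidence-theory ingredient can be illustrated with the basic belief/plausibility computation over interval-valued evidence. The focal elements, masses, and the lifetime threshold below are purely illustrative, not data from the Iridium 33 study:

```python
# Belief and plausibility that a design's post-laser debris lifetime meets a
# requirement, computed from interval-valued evidence (Dempster-Shafer style).
def belief_plausibility(focal, threshold):
    """focal: list of ((lo, hi), mass) intervals for the predicted lifetime.
    Belief sums the masses of intervals certainly below the threshold;
    plausibility sums those that could possibly be below it."""
    bel = sum(m for (lo, hi), m in focal if hi <= threshold)   # certainly satisfied
    pl  = sum(m for (lo, hi), m in focal if lo <= threshold)   # possibly satisfied
    return bel, pl

# Example evidence for one candidate solution (numbers are illustrative only)
focal = [((10, 20), 0.5), ((15, 30), 0.25), ((25, 40), 0.25)]  # lifetime, days
bel, pl = belief_plausibility(focal, threshold=30)
```

Maximizing belief, as the method does alongside performance, means preferring designs whose requirement satisfaction is supported even under the most pessimistic reading of the interval evidence.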
Creation of a Rapid High-Fidelity Aerodynamics Module for a Multidisciplinary Design Environment
NASA Technical Reports Server (NTRS)
Srinivasan, Muktha; Whittecar, William; Edwards, Stephen; Mavris, Dimitri N.
2012-01-01
In the traditional aerospace vehicle design process, each successive design phase is accompanied by an increment in the modeling fidelity of the disciplinary analyses being performed. This trend follows a corresponding shrinking of the design space as more and more design decisions are locked in. The correlated increase in knowledge about the design and decrease in design freedom occurs partly because increases in modeling fidelity are usually accompanied by significant increases in the computational expense of performing the analyses. When running high fidelity analyses, it is not usually feasible to explore a large number of variations, and so design space exploration is reserved for conceptual design, and higher fidelity analyses are run only once a specific point design has been selected to carry forward. The designs produced by this traditional process have been recognized as being limited by the uncertainty that is present early on due to the use of lower fidelity analyses. For example, uncertainty in aerodynamics predictions produces uncertainty in trajectory optimization, which can impact overall vehicle sizing. This effect can become more significant when trajectories are being shaped by active constraints. For example, if an optimal trajectory is running up against a normal load factor constraint, inaccuracies in the aerodynamic coefficient predictions can cause a feasible trajectory to be considered infeasible, or vice versa. For this reason, a trade must always be performed between the desired fidelity and the resources available. Apart from this trade between fidelity and computational expense, it is very desirable to use higher fidelity analyses earlier in the design process. A large body of work has been performed to this end, led by efforts in the area of surrogate modeling. 
In surrogate modeling, an up-front investment is made by running a high fidelity code over a Design of Experiments (DOE); once completed, the DOE data is used to create a surrogate model, which captures the relationships between input variables and responses into regression equations. Depending on the dimensionality of the problem and the fidelity of the code for which a surrogate model is being created, the initial DOE can itself be computationally prohibitive to run. Cokriging, a modeling approach from the field of geostatistics, provides a desirable compromise between computational expense and fidelity. To do this, cokriging leverages a large body of data generated by a low fidelity analysis, combines it with a smaller set of data from a higher fidelity analysis, and creates a kriging surrogate model with prediction fidelity approaching that of the higher fidelity analysis. When integrated into a multidisciplinary environment, a disciplinary analysis module employing cokriging can raise the analysis fidelity without drastically impacting the expense of design iterations. This is demonstrated through the creation of an aerodynamics analysis module in NASA's OpenMDAO framework. Aerodynamic analyses including Missile DATCOM, APAS, and USM3D are leveraged to create high fidelity aerodynamics decks for parametric vehicle geometries, which are created in NASA's Vehicle Sketch Pad (VSP). Several trade studies are performed to examine the achieved level of model fidelity, and the overall impact to vehicle design is quantified.
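The additive-correction idea behind cokriging-style surrogates can be sketched in a few lines: sample a cheap model densely, model the high-minus-low discrepancy with a small Gaussian process, and add the two. The test functions, sample sites, and kernel settings below are illustrative assumptions, not the aerodynamic codes or DOEs named above:

```python
import numpy as np

# Hedged sketch of additive-correction cokriging: a cheap low-fidelity model is
# sampled densely, and a small Gaussian-process model of the high-minus-low
# discrepancy corrects it at sparse high-fidelity sample sites.

def gp_fit_predict(X, y, Xq, length=0.25, noise=1e-6):
    """Minimal RBF-kernel GP regression (zero mean): posterior mean at Xq."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xq, X) @ np.linalg.solve(K, y)

def hi(x):   # "high-fidelity" truth (illustrative stand-in)
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def lo(x):   # "low-fidelity" approximation: shifted/scaled version of hi
    return 0.5 * hi(x) + 10 * (x - 0.5) - 5

X_lo = np.linspace(0, 1, 21)            # dense low-fidelity DOE
X_hi = np.array([0.0, 0.4, 0.6, 1.0])   # sparse high-fidelity DOE

Xq = np.linspace(0, 1, 101)
lf_pred = gp_fit_predict(X_lo, lo(X_lo), Xq)              # low-fidelity trend
disc = gp_fit_predict(X_hi, hi(X_hi) - lo(X_hi), Xq)      # GP of the discrepancy
mf_pred = lf_pred + disc                                  # corrected surrogate

err_mf = np.max(np.abs(mf_pred - hi(Xq)))
err_lo = np.max(np.abs(lf_pred - hi(Xq)))
```

With the four high-fidelity samples correcting the dense low-fidelity trend, the corrected surrogate tracks the truth far better than the low-fidelity trend alone.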
Sustaining Fidelity Following the Nationwide PMTO™ Implementation in Norway
Forgatch, Marion S.; DeGarmo, David S.
2011-01-01
This report describes three studies from the nationwide Norwegian implementation of Parent Management Training – Oregon Model (PMTO™), an empirically supported treatment for families of children with behavior problems (Forgatch and Patterson 2010). Separate stages of the implementation were evaluated using a fidelity measure based on direct observation of intervention sessions. Study 1 assessed growth in fidelity observed early, mid, and late in the training of a group of practitioners. We hypothesized increased fidelity and decreased variability in practice. Study 2 evaluated method fidelity over the course of three generations of practitioners trained in PMTO. Generation 1 (G1) was trained by the PMTO developer/purveyors; Generation 2 (G2) was trained by selected G1 Norwegian trainers; and Generation 3 (G3) was trained by G1 and G2 trainers. We hypothesized a decrease in fidelity with each generation. Study 3 tested the predictive validity of fidelity in a cross-cultural replication, hypothesizing that higher fidelity scores would correlate with improved parenting practices observed in parent-child interactions before and after treatment. In Study 1, trainees' performance improved and became more homogeneous as predicted. In Study 2, a small decline in fidelity followed the transfer from the purveyor trainers to Norwegian trainers in G2, but G3 scores were equivalent to those attained by G1. Thus, the hypothesis was not fully supported. Finally, the FIMP validity model was replicated: PMTO fidelity significantly contributed to improvements in parenting practices from pre- to post-treatment. The data indicate that PMTO was transferred successfully to the Norwegian implementation with sustained fidelity and cross-cultural generalization. PMID:21671090
NASA Astrophysics Data System (ADS)
Baranowski, D.; Waliser, D. E.; Jiang, X.
2016-12-01
One of the key challenges in subseasonal weather forecasting is the fidelity with which models represent the propagation of the Madden-Julian Oscillation (MJO) across the Maritime Continent (MC). In reality both propagating and non-propagating MJO events are observed, but in numerical forecasts the latter group largely dominates. For this study, model performance is evaluated comprehensively using metrics based on the mean precipitation pattern and the amplitude and phase of the diurnal cycle, with a particular focus on the linkage between a model's local MC variability and its fidelity in representing propagation of the MJO and equatorial Kelvin waves across the MC. Subseasonal-to-seasonal variability of mean precipitation and its diurnal cycle in 20-year-long climate simulations from over 20 general circulation models (GCMs) is examined to benchmark model performance. Our results show that many models struggle to represent the precipitation pattern over the complex Maritime Continent terrain. Many models show negative biases in mean precipitation and in the amplitude of its diurnal cycle; these biases are often larger over land than over ocean. Furthermore, only a handful of models realistically represent the spatial variability of the phase of the diurnal cycle of precipitation. Models tend to correctly simulate the timing of the diurnal maximum of precipitation over ocean during the local morning, but fail to capture the influence of the land, with the maximum of precipitation there occurring, unrealistically, at the same time as over ocean. The day-to-day and seasonal variability of the mean precipitation follows observed patterns, but that of the diurnal cycle amplitude is often unrealistic. The intraseasonal variability of the amplitude of the diurnal cycle of precipitation is mainly driven by a model's ability (or inability) to produce an eastward-propagating MJO-like signal.
Our results show that many models tend to underestimate the apparent air-sea contrast in the mean precipitation and diurnal-cycle patterns over the Maritime Continent. As a result, the complexity of those patterns is heavily smoothed, to such an extent in some models that the Maritime Continent's imprint is almost unrecognizable relative to the eastern Indian Ocean or western Pacific.
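On the metrics involved: the amplitude and local-time phase of a diurnal cycle are commonly estimated by projecting the 24-hour composite onto its first harmonic. A minimal sketch on a synthetic composite (illustrative values, not model output):

```python
import numpy as np

# Hedged sketch: estimate diurnal-cycle amplitude and the local hour of maximum
# by projecting a 24-hour composite onto its first harmonic.

def diurnal_harmonic(hourly):
    """hourly: mean value for local hours 0..23. Returns (amplitude, hour of max)."""
    h = np.arange(24)
    w = 2 * np.pi / 24
    a = 2 * np.mean(hourly * np.cos(w * h))
    b = 2 * np.mean(hourly * np.sin(w * h))
    amp = np.hypot(a, b)
    phase_hr = (np.arctan2(b, a) / w) % 24   # local hour of the harmonic maximum
    return amp, phase_hr

# Synthetic composite peaking in the late afternoon over land (illustrative)
hours = np.arange(24)
composite = 5.0 + 2.0 * np.cos(2 * np.pi * (hours - 17) / 24)

amp, peak = diurnal_harmonic(composite)
```

For this synthetic composite the harmonic fit recovers amplitude 2.0 and a maximum at local hour 17, since the 24 equally spaced samples are exactly orthogonal to the constant offset.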
Are revised models better models? A skill score assessment of regional interannual variability
NASA Astrophysics Data System (ADS)
Sperber, Kenneth R.; Participating AMIP Modelling Groups
1999-05-01
Various skill scores are used to assess the performance of revised models relative to their original configurations. The interannual variability of all-India, Sahel and Nordeste rainfall and summer monsoon windshear is examined in integrations performed under the experimental design of the Atmospheric Model Intercomparison Project. For the indices considered, the revised models exhibit greater fidelity at simulating the observed interannual variability. Interannual variability of all-India rainfall is better simulated by models that have a more realistic rainfall climatology in the vicinity of India, indicating the beneficial effect of reducing systematic model error.
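One common form of skill score compares a model's mean-squared error against a climatology baseline, SS = 1 - MSE(model)/MSE(climatology), so positive values beat climatology. A minimal sketch on synthetic "interannual" series (illustrative, not the AMIP indices above):

```python
import numpy as np

# Hedged sketch: a mean-squared-error skill score relative to climatology,
# of the kind used to compare revised and original model configurations.
# The rainfall-index series below are synthetic, not the AMIP data.

def skill_score(model, obs):
    mse_model = np.mean((model - obs) ** 2)
    mse_clim = np.mean((obs - obs.mean()) ** 2)   # climatology = observed mean
    return 1.0 - mse_model / mse_clim

rng = np.random.default_rng(0)
obs = rng.standard_normal(30)                  # 30 "years" of an observed index
revised = obs + 0.3 * rng.standard_normal(30)  # revised model: tracks obs closely
original = rng.standard_normal(30)             # original model: unrelated variability

ss_revised = skill_score(revised, obs)
ss_original = skill_score(original, obs)
```

The revised series scores well above zero while the unrelated one does not, mirroring the kind of comparison reported above.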
Neural correlates of individual differences in manual imitation fidelity
Braadbaart, Lieke; Waiter, Gordon D.; Williams, Justin H. G.
2012-01-01
Imitation is crucial for social learning, so it is important to identify what determines between-subject variability in imitation fidelity. This might help explain what makes some people, such as those with social difficulties as in autism spectrum disorder (ASD), perform significantly worse on these tasks than others. A novel paradigm was developed to provide objective measures of imitation fidelity, in which participants used a touchscreen to imitate videos of a model drawing different shapes. Comparisons between the model's and participants' kinematic data provided three measures of imitative fidelity. We hypothesized that imitative ability would predict variation in BOLD signal whilst performing a simple imitation task in the MRI scanner. In particular, an overall measure of accuracy (correlation between model and imitator) would predict activity in the overarching imitation system, whereas bias would be subject to more general aspects of motor control. Participants lying in the MRI scanner were instructed to imitate different grips on a handle, or to watch a person or a circle moving the handle. Our hypothesis was partly confirmed: correlation between model and imitator was mediated by somatosensory cortex but also ventromedial prefrontal cortex, and bias was mediated mainly by the cerebellum but also by the medial frontal and parietal cortices and insula. We suggest that this variance differentially reflects cognitive functions such as feedback sensitivity and reward-dependent learning, contributing significantly to variability in individuals' imitative abilities as characterized by objective kinematic measures. PMID:23087625
On Multifunctional Collaborative Methods in Engineering Science
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
2001-01-01
Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines, in which each method's strengths are utilized.
Equivalence between entanglement and the optimal fidelity of continuous variable teleportation.
Adesso, Gerardo; Illuminati, Fabrizio
2005-10-07
We devise the optimal form of Gaussian resource states enabling continuous-variable teleportation with maximal fidelity. We show that a nonclassical optimal fidelity of N-user teleportation networks is necessary and sufficient for N-party entangled Gaussian resources, yielding an estimator of multipartite entanglement. The entanglement of teleportation is equivalent to the entanglement of formation in a two-user protocol, and to the localizable entanglement in a multiuser one. Finally, we show that the continuous-variable tangle, quantifying entanglement sharing in three-mode Gaussian states, is defined operationally in terms of the optimal fidelity of a tripartite teleportation network.
Gravity Modeling for Variable Fidelity Environments
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2006-01-01
Aerospace simulations can model worlds, such as the Earth, with differing levels of fidelity. The simulation may represent the world as a plane, a sphere, an ellipsoid, or a high-order closed surface. The world may or may not rotate. The user may select lower-fidelity models based on computational limits, a need for simplified analysis, or comparison to other data. However, the user will also wish to retain a close semblance of real-world behavior. The effects of gravity on objects are an important component of modeling real-world behavior. Engineers generally equate the term gravity with the observed free-fall acceleration. However, free-fall acceleration is not the same for all observers. To observers on the surface of a rotating world, free-fall acceleration is the sum of gravitational attraction and the centrifugal acceleration due to the world's rotation. On the other hand, free-fall acceleration equals gravitational attraction for an observer in inertial space. Surface-observed simulations (e.g. aircraft), which use non-rotating world models, may choose to model observed free-fall acceleration as the gravity term; such a model actually combines gravitational attraction with the centrifugal acceleration due to the Earth's rotation. However, this modeling choice invites confusion as one evolves the simulation to higher-fidelity world models or adds inertial observers. Care must be taken to model gravity in concert with the world model to avoid degrading the fidelity of modeling observed free fall. The paper goes into greater depth on gravity modeling and the physical disparities and synergies that arise when coupling specific gravity models with world models.
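The decomposition described here can be sketched numerically: for a spherical, rotating world, the radial component of observed free-fall acceleration is gravitational attraction minus the centripetal term from rotation. The constants are nominal Earth values, and the point-mass (spherical) gravity model is an assumption of the sketch:

```python
import math

# Hedged sketch: observed free-fall acceleration at the surface of a rotating,
# spherical world = gravitational attraction minus the centripetal term.
# Nominal Earth constants; a point-mass gravity model is assumed.

GM = 3.986004418e14      # m^3/s^2, Earth gravitational parameter
R = 6.371e6              # m, mean Earth radius (spherical model)
OMEGA = 7.2921159e-5     # rad/s, Earth rotation rate

def freefall_radial(lat_deg):
    """Radial component of observed free-fall acceleration (m/s^2) at a latitude."""
    g_grav = GM / R**2                              # gravitational attraction
    lat = math.radians(lat_deg)
    centripetal = OMEGA**2 * R * math.cos(lat)**2   # radial part of omega^2 * r
    return g_grav - centripetal

g_equator = freefall_radial(0.0)    # rotation reduces observed free fall most here
g_pole = freefall_radial(90.0)      # no centrifugal effect at the poles
```

The roughly 0.03 m/s^2 equator-to-pole difference from rotation is exactly the kind of discrepancy that appears when a non-rotating world model is paired with an "observed free fall" gravity term.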
Modeling movement and fidelity of American black ducks
Zimpfer, N.L.; Conroy, M.J.
2006-01-01
Spatial relationships among stocks of breeding waterfowl can be an important component of harvest management. Prediction and optimal harvest management under adaptive harvest management (AHM) requires information on the spatial relationships among breeding populations (fidelity and inter-year exchange), as well as rates of movements from breeding to harvest regions. We used band-recovery data to develop a model to estimate probabilities of movement for American black ducks (Anas rubripes) among 3 Canadian breeding strata and 6 harvest regions (3 in Canada, and 3 in the United States) over the period 1965-1998. Model selection criteria suggested that models containing area-, year-, and age-specific recovery rates with area- and sex-specific movement rates were the best for modeling movement. Movement by males to southern harvest areas was variable depending on the originating area. Males from the western breeding area predominantly moved to the Mississippi Flyway or southern Atlantic Flyway (ψ̂ij = 0.353, SE = 0.0187 and ψ̂ij = 0.473, SE = 0.037, respectively), whereas males that originated in the eastern and central breeding strata moved to the northern Atlantic Flyway (ψ̂ij = 0.842, SE = 0.010 and ψ̂ij = 0.578, SE = 0.0222, respectively). We used combined recoveries and recaptures in Program MARK to estimate fidelity to the 3 Canadian breeding strata. Information criteria identified a model containing sex- and age-specific fidelity for black ducks. Estimates of fidelity were 0.9695 (SE = 0.0249) and 0.9554 (SE = 0.0434) for adult males and females, respectively. Estimates of fidelity for juveniles were slightly lower at 0.9210 (SE = 0.0931) and 0.8870 (SE = 0.0475) for males and females, respectively. These models have application to the development of spatially stratified black duck harvest management models for use in AHM.
NASA Astrophysics Data System (ADS)
Bryson, Dean Edward
A model's level of fidelity may be defined as its accuracy in faithfully reproducing a quantity or behavior of interest of a real system. Increasing the fidelity of a model often goes hand in hand with increasing its cost in terms of time, money, or computing resources. The traditional aircraft design process relies upon low-fidelity models for expedience and resource savings. However, the reduced accuracy and reliability of low-fidelity tools often lead to the discovery of design defects or inadequacies late in the design process. These deficiencies result either in costly changes or the acceptance of a configuration that does not meet expectations. The unknown opportunity cost is the discovery of superior vehicles that leverage phenomena unknown to the designer and not illuminated by low-fidelity tools. Multifidelity methods attempt to blend the increased accuracy and reliability of high-fidelity models with the reduced cost of low-fidelity models. In building surrogate models, where mathematical expressions are used to cheaply approximate the behavior of costly data, low-fidelity models may be sampled extensively to resolve the underlying trend, while high-fidelity data are reserved to correct inaccuracies at key locations. Similarly, in design optimization a low-fidelity model may be queried many times in the search for new, better designs, with a high-fidelity model being exercised only once per iteration to evaluate the candidate design. In this dissertation, a new multifidelity, gradient-based optimization algorithm is proposed. It differs from the standard trust region approach in several ways, stemming from the new method's maintenance of an approximation of the inverse Hessian, that is, the underlying curvature of the design problem.
Whereas the typical trust region approach performs a full sub-optimization using the low-fidelity model at every iteration, the new technique finds a suitable descent direction and focuses the search along it, reducing the number of low-fidelity evaluations required. This narrowing of the search domain also alleviates the burden on the surrogate model corrections between the low- and high-fidelity data. Rather than requiring the surrogate to be accurate in a hyper-volume bounded by the trust region, the model needs only to be accurate along the forward-looking search direction. Maintaining the approximate inverse Hessian also allows the multifidelity algorithm to revert to high-fidelity optimization at any time. In contrast, the standard approach has no memory of the previously-computed high-fidelity data. The primary disadvantage of the proposed algorithm is that it may require modifications to the optimization software, whereas standard optimizers may be used as black-box drivers in the typical trust region method. A multifidelity, multidisciplinary simulation of aeroelastic vehicle performance is developed to demonstrate the optimization method. The numerical physics models include body-fitted Euler computational fluid dynamics; linear, panel aerodynamics; linear, finite-element computational structural mechanics; and reduced, modal structural bases. A central element of the multifidelity, multidisciplinary framework is a shared parametric, attributed geometric representation that ensures the analysis inputs are consistent between disciplines and fidelities. The attributed geometry also enables the transfer of data between disciplines. The new optimization algorithm, a standard trust region approach, and a single-fidelity quasi-Newton method are compared for a series of analytic test functions, using both polynomial chaos expansions and kriging to correct discrepancies between fidelity levels of data. 
In the aggregate, the new method requires fewer high-fidelity evaluations than the trust region approach in 51% of cases, and the same number of evaluations in 18%. The new approach also requires fewer low-fidelity evaluations, by up to an order of magnitude, in almost all cases. The efficacy of both multifidelity methods compared to single-fidelity optimization depends significantly on the behavior of the high-fidelity model and the quality of the low-fidelity approximation, though savings are realized in a large number of cases. The multifidelity algorithm is also compared to the single-fidelity quasi-Newton method for complex aeroelastic simulations. The vehicle design problem includes variables for planform shape, structural sizing, and cruise condition with constraints on trim and structural stresses. Considering the objective function reduction versus computational expenditure, the multifidelity process performs better in three of four cases in early iterations. However, the enforcement of a contracting trust region slows the multifidelity progress. Even so, leveraging the approximate inverse Hessian, the optimization can be seamlessly continued using high-fidelity data alone. Ultimately, the proposed new algorithm produced better designs in all four cases. Investigating the return on investment in terms of design improvement per computational hour confirms that the multifidelity advantage is greatest in early iterations, and managing the transition to high-fidelity optimization is critical.
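The search strategy described in this record, a quasi-Newton descent direction from high-fidelity gradients with the one-dimensional search performed on an additively corrected low-fidelity model, can be sketched for a scalar problem. The models, step grid, and scalar "inverse Hessian" update are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

# Hedged sketch: quasi-Newton direction from high-fidelity gradients; the line
# search along that direction uses a cheap low-fidelity model with an additive
# correction anchored at the current point. All functions are illustrative.

def hi_f(x):  return float((x - 2.0) ** 2 + 0.1 * np.sin(5 * x))  # expensive model
def hi_g(x):  return float(2 * (x - 2.0) + 0.5 * np.cos(5 * x))   # its gradient
def lo_f(x):  return float((x - 1.8) ** 2)                        # cheap model

x, H_inv = 0.0, 1.0            # start point; scalar inverse-Hessian approximation
for _ in range(20):
    g = hi_g(x)
    if abs(g) < 1e-6:
        break
    d = -H_inv * g                                  # quasi-Newton descent direction
    corr = hi_f(x) - lo_f(x)                        # additive low-fidelity correction
    # Line search along d using the corrected low-fidelity model (cheap evals only)
    steps = np.linspace(0.1, 1.5, 15)
    t_step = min(steps, key=lambda t: lo_f(x + t * d) + corr)
    x_new = x + t_step * d                          # one high-fidelity evaluation here
    # Secant (BFGS-like) update of the scalar inverse Hessian
    s, y = x_new - x, hi_g(x_new) - g
    if abs(y) > 1e-12:
        H_inv = s / y
    x = x_new
```

The iterate settles near the high-fidelity minimum even though almost all line-search evaluations use the cheap model, which is the cost structure the dissertation's results describe.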
Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)
2001-01-01
Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines, in which each method's strengths are utilized.
The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are interfaced. This capability rapidly provides the high-fidelity results needed in the early design phase. Moreover, the capability is applicable to the general field of engineering science and mechanics. Hence, it provides a collaborative capability that accounts for interactions among engineering analysis methods.
Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors
NASA Astrophysics Data System (ADS)
Kim, J.; Waliser, Duane E.; Mattmann, Chris A.; Goodale, Cameron E.; Hart, Andrew F.; Zimdars, Paul A.; Crichton, Daniel J.; Jones, Colin; Nikulin, Grigory; Hewitson, Bruce; Jack, Chris; Lennard, Christopher; Favre, Alice
2014-03-01
Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate the basic climatological features of these variables reasonably well, but systematic biases also occur across the models. All RCMs show higher fidelity in simulating precipitation for the western part of Africa than for the eastern part, and for the tropics than for the northern Sahara. Interannual variation in the wet-season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperatures. For all variables, the multi-model ensemble (ENS) generally outperforms the individual models included in it. An overarching conclusion of this study is that some model biases vary systematically across regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by specific analyses and/or assessments. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.
NASA Astrophysics Data System (ADS)
Pang, Guofei; Perdikaris, Paris; Cai, Wei; Karniadakis, George Em
2017-11-01
The fractional advection-dispersion equation (FADE) can describe accurately the solute transport in groundwater, but its fractional order has to be determined a priori. Here, we employ multi-fidelity Bayesian optimization to obtain the fractional order under various conditions, and we obtain more accurate results compared to previously published data. Moreover, the present method is very efficient, as we use different levels of resolution to construct a stochastic surrogate model and quantify its uncertainty. We consider two different problem setups. In the first setup, we obtain variable fractional orders of the one-dimensional FADE, considering both synthetic and field data. In the second setup, we identify constant fractional orders of the two-dimensional FADE using synthetic data. We employ multi-resolution simulations using two-level and three-level Gaussian process regression models to construct the surrogates.
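A two-level recursive Gaussian-process scheme of the kind mentioned can be sketched as f_hi(x) ≈ ρ·f_lo(x) + δ(x), with ρ fitted by least squares at the high-fidelity sites and δ modeled as a GP on the residual. The "simulators" below are synthetic stand-ins, not FADE solvers:

```python
import numpy as np

# Hedged sketch of a two-level recursive GP (Kennedy-O'Hagan style):
# f_hi(x) ~ rho * f_lo(x) + delta(x). The levels here are synthetic stand-ins
# for coarse- and fine-resolution simulations.

def gp_mean(X, y, Xq, length=0.3, jitter=1e-6):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    return k(Xq, X) @ np.linalg.solve(k(X, X) + jitter * np.eye(len(X)), y)

f_hi = lambda x: np.sin(8 * x) + x                 # "fine-resolution" truth
f_lo = lambda x: 0.7 * (np.sin(8 * x) + x) + 0.2   # scaled/shifted coarse level

X_lo = np.linspace(0, 1, 25)          # cheap, dense low-resolution runs
X_hi = np.linspace(0, 1, 9)           # expensive, sparse high-resolution runs
Xq = np.linspace(0, 1, 200)

lo_at_hi = f_lo(X_hi)                 # (in practice: the low-level GP evaluated at X_hi)
rho = np.dot(lo_at_hi, f_hi(X_hi)) / np.dot(lo_at_hi, lo_at_hi)   # LS scale factor
delta = gp_mean(X_hi, f_hi(X_hi) - rho * lo_at_hi, Xq)            # residual GP

lo_trend = gp_mean(X_lo, f_lo(X_lo), Xq)
pred = rho * lo_trend + delta                      # two-level posterior mean

rmse = float(np.sqrt(np.mean((pred - f_hi(Xq)) ** 2)))
rmse_naive = float(np.sqrt(np.mean((lo_trend - f_hi(Xq)) ** 2)))
```

The scaled-plus-residual prediction beats the low-resolution surrogate alone, which is the mechanism that makes the multi-resolution construction efficient.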
Extending the Conceptualization of Listening Fidelity
ERIC Educational Resources Information Center
Fitch-Hauser, Margaret; Powers, William G.; O'Brien, Kelley; Hanson, Scott
2007-01-01
An exploration of variables potentially related to Listening Fidelity (LF) was conducted through two separate studies. Study 1 indicated that when the potential fidelity of the stimulus message was varied as a function of the number of words and time length, the message with lowest potential fidelity produced significantly lower LF than either the…
Procedural Fidelity: An Analysis of Measurement and Reporting Practices
ERIC Educational Resources Information Center
Ledford, Jennifer R.; Wolery, Mark
2013-01-01
A systematic analysis was conducted of measurement and reporting practices related to procedural fidelity in single-case research for the past 30 years. Previous reviews of fidelity primarily reported whether fidelity data were collected by authors; these reviews reported that collection was variable, but low across journals and over time. Results…
A simple, analytical, axisymmetric microburst model for downdraft estimation
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1991-01-01
A simple analytical microburst model was developed for use in estimating vertical winds from horizontal wind measurements. It is an axisymmetric, steady-state model that uses shaping functions to satisfy the mass continuity equation and simulate boundary-layer effects. The model is defined through four model variables: the radius and altitude of the maximum horizontal wind, a shaping function variable, and a scale factor. The model closely agrees with a high-fidelity analytical model and measured data, particularly in the radial direction and at lower altitudes. At higher altitudes, the model tends to overestimate the wind magnitude relative to the measured data.
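The construction principle, shaping functions chosen so the axisymmetric mass continuity equation (1/r)∂(ru)/∂r + ∂w/∂z = 0 holds, can be sketched by picking a downdraft profile and recovering the radial wind by integration. The Gaussian shaping functions below are illustrative, not Vicroy's actual functions:

```python
import numpy as np

# Hedged sketch: choose w(r, z) from shaping functions, then recover u(r, z)
# from incompressible axisymmetric continuity, (1/r) d(ru)/dr + dw/dz = 0,
# so continuity is satisfied by construction. Shaping functions are illustrative.

R_M, Z_M, W0 = 1000.0, 600.0, -8.0    # radius/altitude scales, peak downdraft (m/s)

def _trapz(y, x):
    """Trapezoidal quadrature (kept local for NumPy-version portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def w(r, z):
    """Downdraft: Gaussian in r, vanishing at the ground via a z shaping factor."""
    return W0 * np.exp(-(r / R_M) ** 2) * (1.0 - np.exp(-z / Z_M))

def u(r, z, n=400):
    """Radial outflow from continuity: u = -(1/r) * integral_0^r r' dw/dz dr'."""
    rp = np.linspace(1e-6, r, n)
    dwdz = W0 * np.exp(-(rp / R_M) ** 2) * np.exp(-z / Z_M) / Z_M
    return -_trapz(rp * dwdz, rp) / r
```

By construction the downdraft (w < 0) produces a positive radial outflow, and the continuity residual vanishes to quadrature accuracy.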
Takei, Nobuyuki; Yonezawa, Hidehiro; Aoki, Takao; Furusawa, Akira
2005-06-10
We experimentally demonstrate continuous-variable quantum teleportation beyond the no-cloning limit. We teleport a coherent state and achieve a fidelity of 0.70 +/- 0.02, which surpasses the no-cloning limit of 2/3. Surpassing this limit is necessary to transfer the nonclassicality of an input quantum state. Using our high-fidelity teleporter, we demonstrate entanglement swapping, namely teleportation of quantum entanglement, as an example of the transfer of nonclassicality.
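For context, the benchmarks quoted here follow from the ideal Braunstein-Kimble protocol for coherent states with a symmetric two-mode squeezed resource of squeezing parameter r, for which the fidelity takes a standard closed form (a textbook idealization, not the paper's experimental model):

```latex
F \;=\; \frac{1}{1 + e^{-2r}}, \qquad
F_{\mathrm{classical}} \;=\; \tfrac{1}{2}\ \ (r = 0), \qquad
F \;=\; \tfrac{2}{3} \ \Leftrightarrow\ e^{-2r} = \tfrac{1}{2}
\ \Leftrightarrow\ r = \tfrac{1}{2}\ln 2 \approx 0.35 .
```

So beating the 2/3 no-cloning bound corresponds, in this idealization, to somewhat more than 3 dB of effective two-mode squeezing.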
Van Dongen, Hans P A; Caldwell, John A; Caldwell, J Lynn
2006-05-01
Laboratory research has revealed considerable systematic variability in the degree to which individuals' alertness and performance are affected by sleep deprivation. However, little is known about whether or not different populations exhibit similar levels of individual variability. In the present study, we examined individual variability in performance impairment due to sleep loss in a highly select population of military jet pilots. Ten active-duty F-117 pilots were deprived of sleep for 38 h and studied repeatedly in a high-fidelity flight simulator. Data were analyzed with a mixed-model ANOVA to quantify individual variability. Statistically significant, systematic individual differences in the effects of sleep deprivation were observed, even when baseline differences were accounted for. The findings suggest that highly select populations may exhibit individual differences in vulnerability to performance impairment from sleep loss just as the general population does. Thus, the scientific and operational communities' reliance on group data as opposed to individual data may entail substantial misestimation of the impact of job-related stressors on safety and performance.
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Friedlander, David; Kopasakis, George
2015-01-01
This paper covers the development of an integrated nonlinear dynamic simulation for a variable cycle turbofan engine and nozzle that can be integrated with an overall vehicle Aero-Propulso-Servo-Elastic (APSE) model. A previously developed variable cycle turbofan engine model is used for this study and is enhanced here to include variable guide vanes allowing for operation across the supersonic flight regime. The primary focus of this study is to improve the fidelity of the model's thrust response by replacing the simple choked flow equation convergent-divergent nozzle model with a MacCormack method based quasi-1D model. The dynamic response of the nozzle model using the MacCormack method is verified by comparing it against a model of the nozzle using the conservation element/solution element method. A methodology is also presented for the integration of the MacCormack nozzle model with the variable cycle engine.
Simple proof of the quantum benchmark fidelity for continuous-variable quantum devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namiki, Ryo
2011-04-15
An experimental success criterion for continuous-variable quantum teleportation and memory is to surpass the limit of the average fidelity achieved by classical measure-and-prepare schemes with respect to a Gaussian-distributed set of coherent states. We present an alternative proof of the classical limit based on the familiar notions of state-channel duality and partial transposition. The present method enables us to produce a quantum-domain criterion associated with a given set of measured fidelities.
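As background for the fidelity criterion, the overlap fidelity between two coherent states has a simple closed form, |⟨α|β⟩|² = exp(−|α−β|²); a small supporting calculation:

```python
# Fidelity between two coherent states |alpha> and |beta>:
# |<alpha|beta>|^2 = exp(-|alpha - beta|^2).

import math

def coherent_fidelity(alpha, beta):
    """Overlap fidelity between coherent states (alpha, beta complex)."""
    return math.exp(-abs(alpha - beta) ** 2)

print(coherent_fidelity(0, 0))    # 1.0: identical states
print(coherent_fidelity(0, 1))    # exp(-1), about 0.368
print(coherent_fidelity(1j, 1j))  # 1.0
```

For a flat distribution over coherent states, the classical measure-and-prepare bound on average fidelity is the well-known value 1/2; the abstract's Gaussian-weighted benchmark generalizes this setting.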
How Well Has Global Ocean Heat Content Variability Been Measured?
NASA Astrophysics Data System (ADS)
Nelson, A.; Weiss, J.; Fox-Kemper, B.; Fabienne, G.
2016-12-01
We introduce a new strategy that uses synthetic observations of an ensemble of model simulations to test the fidelity of an observational strategy, quantifying how well it captures the statistics of variability. We apply this test to the 0-700m global ocean heat content anomaly (OHCA) as observed with in-situ measurements by the Coriolis Dataset for Reanalysis (CORA), using the Community Climate System Model (CCSM) version 3.5. One-year running mean OHCAs for the years 2005 onward are found to faithfully capture the variability. During these years, synthetic observations of the model are strongly correlated at 0.94±0.06 with the actual state of the model. Overall, sub-annual variability and data before 2005 are significantly affected by the variability of the observing system. In contrast, the sometimes-used weighted integral of observations is not a good indicator of OHCA as variability in the observing system contaminates dynamical variability.
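The synthetic-observation test can be caricatured as follows: sample a known model field at a sparse set of sites and ask how well the sparse average tracks the true global average. The field, network size, and noise levels below are toy stand-ins, not CCSM or CORA values:

```python
# Toy version of the synthetic-observation strategy: a "global" anomaly
# field is subsampled at sparse "float" locations and the sparse mean is
# correlated against the true mean across years.

import random

random.seed(0)
n_points, n_obs, n_years = 500, 60, 40

true_means, obs_means = [], []
for _ in range(n_years):
    signal = random.gauss(0, 1)                       # common climate signal
    field = [signal + random.gauss(0, 0.5) for _ in range(n_points)]
    sites = random.sample(range(n_points), n_obs)     # sparse network
    true_means.append(sum(field) / n_points)
    obs_means.append(sum(field[i] for i in sites) / n_obs)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

r = pearson(true_means, obs_means)
print(round(r, 2))  # close to 1: the sparse network captures the variability
```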
O'Campo, Patricia; Zerger, Suzanne; Gozdzik, Agnes; Jeyaratnam, Jeyagobi; Stergiopoulos, Vicky
2015-05-01
The importance of program implementation in achieving desired outcomes is well-documented, but there remains a need for concrete guidance on how to achieve fidelity to evidence-based models within dynamic local contexts. Housing First (HF), an evidence-based model for people experiencing homelessness and mental illness, provides an important test-case for such guidance; it targets a uniquely underserved subpopulation with complex needs, and is delivered by practitioners with varying knowledge and skill levels. Scientific evidence affirms HF's effectiveness, but its rapid dissemination has outpaced the ability to monitor not only whether it is being implemented with fidelity, but also how this can be achieved within variable local contexts and challenges. This qualitative study contributes to this need by capturing insights from practitioners on implementation challenges and specific strategies developed to overcome them. Findings reinforce the importance of developing HF-specific implementation guidelines, and of engaging relevant stakeholders throughout all phases of that development.
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
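A much-simplified sketch of the low-fidelity-plus-discrepancy idea behind GFR, with hypothetical models: the GFR method places a Gaussian-process prior on the discrepancy (which also yields predictive uncertainty), whereas this sketch fits only a linear correction from a few "expensive" runs:

```python
# Simplified multi-fidelity correction: predict the high-fidelity output
# as low-fidelity output plus a discrepancy fitted from a handful of
# high-fidelity evaluations. Models here are illustrative only.

import math

def f_high(x):   # "expensive" model (evaluated only at training inputs)
    return math.sin(2 * x) + 0.3 * x

def f_low(x):    # cheap approximation with a systematic error
    return math.sin(2 * x)

def fit_linear_discrepancy(xs):
    """Least-squares line through the high-minus-low residuals."""
    ys = [f_high(x) - f_low(x) for x in xs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return lambda x: my + b * (x - mx)

d = fit_linear_discrepancy([0.0, 1.0, 2.0, 3.0])  # 4 high-fidelity runs
pred = f_low(2.5) + d(2.5)                        # corrected prediction
print(abs(pred - f_high(2.5)))  # tiny: the discrepancy here is exactly linear
```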
High Fidelity Modeling of Field Reversed Configuration (FRC) Thrusters
2015-10-01
Excerpt (presentation slides on the Discontinuous Galerkin method): Riemann problems are solved at each interface to compute fluxes; the source of dissipation and dispersion depends on the Riemann solver; variables are allowed to be discontinuous at the cell interfaces; the method is conservative.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
Pitpitan, Eileen V; Chavarin, Claudia V; Semple, Shirley J; Mendoza, Doroteo; Rodriguez, Carlos Magis; Staines, Hugo; Aarons, Gregory A; Patterson, Thomas L
2017-06-01
Intervention fidelity and participant-level variables, such as negative attitudes towards condoms, are important variables to consider in the successful implementation of evidence-based HIV prevention interventions. Mujer Segura is an intervention that has been shown to be efficacious at reducing condomless sex for female sex workers (FSWs) in Mexico [1]. We examined main effects of fidelity, negative condom attitudes, and their interaction on the effectiveness of the Mujer Segura intervention at reducing condomless sex at intervention follow-up. Of the FSWs recruited from 13 cities across Mexico, 528 participated in the Mujer Segura intervention. We measured negative condom attitudes at baseline (comprising beliefs and outcome evaluations) and condomless sex with clients at baseline and 6-month follow-up. Fidelity was measured by a fidelity checklist completed by independent raters; the sum of potentially 43 total elements completed by the counselor constituted fidelity. Complete fidelity was found in only 15.1% (n = 73) of sessions. There was no significant main effect of intervention fidelity on condomless sex with clients at follow-up. There was a significant and positive main effect of negative condom attitudes and a significant two-way interaction. At lower levels of fidelity, negative condom attitudes predicted greater condomless sex acts, whereas at higher levels of fidelity, the effect of condom attitudes became weaker. The results also indicated that the interaction between negative condom attitudes and fidelity was driven primarily by negative condom beliefs, as opposed to negative condom outcome evaluations. Ensuring treatment fidelity in an HIV prevention intervention is particularly important when participants have negative attitudes towards condoms.
Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turinsky, Paul
This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to utilization of several models of physical phenomena which differ in prediction fidelity. If the highest fidelity model is judged to always provide or exceed the desired fidelity, then if one can determine the difference in a Quantity of Interest (QoI) between the highest fidelity model and lower fidelity models, one could utilize the fidelity model that would just provide the magnitude of the QoI desired. Assuming lower fidelity models require less computational resources, in this manner computational efficiency can be realized provided the QoI value can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convoluting the GPT solution with the residual of the highest fidelity model determined using the solution from lower fidelity models. Specifically, a reactor core neutronics problem and thermal-hydraulics problem were studied to develop and utilize AMoR. The highest fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower fidelity neutronics model was based upon the point kinetics equations along with utilization of a prolongation operator to determine the 3D space-time, two-group flux. The highest fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat generating fuel rod. The Homogenous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations.
The lower fidelity thermal-hydraulic model was based upon the same equations as used for the highest fidelity model but now with coarse spatial meshing, corrected somewhat by employing effective fuel heat conduction values. The effectiveness of switching between the highest fidelity model and lower fidelity model as a function of time was assessed using the neutronics problem. Based upon work completed to date, one concludes that the time switching is effective in annealing out differences between the highest and lower fidelity solutions. The effectiveness of using a lower fidelity GPT solution, along with a prolongation operator, to estimate the QoI was also assessed. The utilization of a lower fidelity GPT solution was done in an attempt to avoid the high computational burden associated with solving for the highest fidelity GPT solution. Based upon work completed to date, one concludes that the lower fidelity adjoint solution is not sufficiently accurate with regard to estimating the QoI; however, a formulation has been revealed that may provide a path for addressing this shortcoming.
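For a linear model, the adjoint-weighted-residual construction used in GPT gives the QoI correction exactly: with A x = b and q = cᵀx, solving the adjoint system Aᵀλ = c turns the residual of any approximate solution into q_hi − q_lo = λᵀ(b − A x_lo). A toy 2×2 sketch with hypothetical numbers (the project applied this to nodal diffusion models):

```python
# Adjoint-weighted residual: convert the residual of a low-fidelity
# solution into the exact QoI correction for a linear model.

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
c = [1.0, 1.0]                            # QoI weights: q = c^T x

x_hi = solve2(A, b)                       # high-fidelity solution
x_lo = [0.2, 0.6]                         # some cheap approximate solution
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
lam = solve2(At, c)                       # adjoint solution: A^T lam = c

residual = [b[i] - sum(A[i][j] * x_lo[j] for j in range(2)) for i in range(2)]
correction = sum(lam[i] * residual[i] for i in range(2))

q_hi = sum(c[i] * x_hi[i] for i in range(2))
q_lo = sum(c[i] * x_lo[i] for i in range(2))
print(abs(q_lo + correction - q_hi))  # ~0: the correction recovers the exact QoI
```

For nonlinear models the same construction gives a first-order estimate rather than an exact correction, which is where the accuracy issues noted above arise.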
ERIC Educational Resources Information Center
O'Keeffe, Breda Victoria
2009-01-01
Improving educational outcomes involves many variables, including identifying effective interventions and ensuring that they are effectively implemented in schools. Within a "response to intervention" model, treatment integrity of academic interventions has become increasingly important. However, recent research has suggested that…
Probabilistic Based Modeling and Simulation Assessment
2010-06-01
different crash and blast scenarios. With the integration of the high fidelity neck and head model, a methodology to calculate the probability of injury... variability, correlation, and multiple (often competing) failure metrics. Important scenarios include vehicular collisions, blast/fragment impact, and... first area of focus is to develop a methodology to integrate probabilistic analysis into finite element analysis of vehicle collisions and blast.
Probabilistic-Based Modeling and Simulation Assessment
2010-06-01
developed to determine the relative importance of structural components of the vehicle under different crash and blast scenarios. With the integration of the high fidelity neck and head model, a methodology to calculate the... parameter variability, correlation, and multiple (often competing) failure metrics. Important scenarios include vehicular collisions, blast/fragment
Adam Wolf; Kanat Akshalov; Nicanor Saliendra; Douglas A. Johnson; Emilio A. Laca
2006-01-01
Canopy fluxes of CO2 and energy can be modeled with high fidelity using a small number of environmental variables and ecosystem parameters. Although these ecosystem parameters are critically important for modeling canopy fluxes, they typically are not measured with the same intensity as ecosystem fluxes. We developed an algorithm to estimate leaf...
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality.
Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
Multi-fidelity stochastic collocation method for computation of statistical moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu
We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems, in the presence of models with different fidelities. The method extends the multi-fidelity approximation method developed previously. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound of the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.
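The efficiency argument (many cheap low-fidelity samples, corrected by a few high-fidelity ones) can be sketched with a simple two-fidelity control-variate estimate of a mean. The models and sample sizes below are illustrative, not from the paper:

```python
# Two-fidelity mean estimate: cheap surrogate sampled heavily, expensive
# model sampled sparsely on a shared subset to cancel the surrogate bias.

import math
import random

random.seed(1)

def f_hi(x):                     # "expensive" model
    return math.exp(x)

def f_lo(x):                     # cheap surrogate: 2nd-order Taylor series
    return 1 + x + x * x / 2

xs_few = [random.random() for _ in range(50)]       # shared high-fidelity set
xs_many = [random.random() for _ in range(50000)]   # cheap large set

def mean(v):
    return sum(v) / len(v)

estimate = (mean([f_hi(x) for x in xs_few])
            + mean([f_lo(x) for x in xs_many])
            - mean([f_lo(x) for x in xs_few]))

true = math.e - 1   # exact mean of exp(x) for x ~ U(0, 1)
print(abs(estimate - true))   # small: the low-fidelity bias cancels
```

Because f_hi − f_lo has much smaller variance than f_hi itself, the 50 expensive samples suffice; this is the intuition behind the fast convergence claimed above.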
Best Design for Multidimensional Computerized Adaptive Testing with the Bifactor Model
ERIC Educational Resources Information Center
Seo, Dong Gi; Weiss, David J.
2015-01-01
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…
Statistical Time Series Models of Pilot Control with Applications to Instrument Discrimination
NASA Technical Reports Server (NTRS)
Altschul, R. E.; Nagel, P. M.; Oliver, F.
1984-01-01
This report contains a general description of the methodology used in obtaining the transfer function models and verifying model fidelity, frequency-domain plots of the modeled transfer functions, numerical results from an analysis of poles and zeros obtained from z-plane to s-plane conversions of the transfer functions, and the results of a study on the sequential introduction of other variables, both exogenous and endogenous, into the loop.
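The z-plane to s-plane conversion mentioned above follows from the sampling relation z = exp(sT), so a discrete-time pole maps back via s = ln(z)/T (principal branch). A minimal sketch with an illustrative sampling period:

```python
# Map a z-plane pole/zero back to the s-plane: s = ln(z) / T.

import cmath

def z_to_s(z, T):
    """Principal-branch inverse of z = exp(s*T) for sampling period T."""
    return cmath.log(z) / T

T = 0.1
s = complex(-2.0, 5.0)      # a stable, oscillatory s-plane pole
z = cmath.exp(s * T)        # its z-plane image
print(z_to_s(z, T))         # recovers approximately (-2+5j)
```

The principal branch is unambiguous only when the pole's frequency is below the Nyquist rate (|Im(s)·T| < π), which holds here.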
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
Validating a Fidelity Scale to Understand Intervention Effects in Classroom-Based Studies
ERIC Educational Resources Information Center
Buckley, Pamela; Moore, Brooke; Boardman, Alison G.; Arya, Diana J.; Maul, Andrew
2017-01-01
K-12 intervention studies often include fidelity of implementation (FOI) as a mediating variable, though most do not report the validity of fidelity measures. This article discusses the critical need for validated FOI scales. To illustrate our point, we describe the development and validation of the Implementation Validity Checklist (IVC-R), an…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Jia-Xing; Hu, Yuan; Jin, Yu
An array of ultracold polar molecules trapped in an external electric field is regarded as a promising carrier of quantum information. Under the action of this field, molecules are compelled to undergo pendular oscillations by the Stark effect. Particular attention has been paid to the influence of intrinsic decoherence on the model of linear polar molecular pendular states, thereby we evaluate the tripartite entanglement with negativity, as well as fidelity of bipartite quantum systems for input and output signals using electric dipole moments of polar molecules as qubits. According to this study, we consider three typical initial states for both systems, respectively, and investigate the temporal evolution with variable values of the external field intensity, the intrinsic decoherence factor, and the dipole-dipole interaction. Thus, we demonstrate the sound selection of these three main parameters to obtain the best entanglement degree and fidelity.
Han, Jia-Xing; Hu, Yuan; Jin, Yu; Zhang, Guo-Feng
2016-04-07
An array of ultracold polar molecules trapped in an external electric field is regarded as a promising carrier of quantum information. Under the action of this field, molecules are compelled to undergo pendular oscillations by the Stark effect. Particular attention has been paid to the influence of intrinsic decoherence on the model of linear polar molecular pendular states, thereby we evaluate the tripartite entanglement with negativity, as well as fidelity of bipartite quantum systems for input and output signals using electric dipole moments of polar molecules as qubits. According to this study, we consider three typical initial states for both systems, respectively, and investigate the temporal evolution with variable values of the external field intensity, the intrinsic decoherence factor, and the dipole-dipole interaction. Thus, we demonstrate the sound selection of these three main parameters to obtain the best entanglement degree and fidelity.
Vidal, Á M; Vieira, L J; Ferreira, C F; Souza, F V D; Souza, A S; Ledo, C A S
2015-07-14
Molecular markers are efficient for assessing the genetic fidelity of various species of plants after in vitro culture. In this study, we evaluated the genetic fidelity and variability of micropropagated cassava plants (Manihot esculenta Crantz) using inter-simple sequence repeat markers. Twenty-two cassava accessions from the Embrapa Cassava & Fruits Germplasm Bank were used. For each accession, DNA was extracted from a plant maintained in the field and from 3 plants grown in vitro. For DNA amplification, 27 inter-simple sequence repeat primers were used, of which 24 generated 175 bands; 100 of those bands were polymorphic and were used to study genetic variability among accessions of cassava plants maintained in the field. Based on the genetic distance matrix calculated using the arithmetic complement of the Jaccard's index, genotypes were clustered using the unweighted pair group method using arithmetic averages. The number of bands per primer was 2-13, with an average of 7.3. For most micropropagated accessions, the fidelity study showed no genetic variation between plants of the same accessions maintained in the field and those maintained in vitro, confirming the high genetic fidelity of the micropropagated plants. However, genetic variability was observed among different accessions grown in the field, and clustering based on the dissimilarity matrix revealed 7 groups. Inter-simple sequence repeat markers were efficient for detecting the genetic homogeneity of cassava plants derived from meristem culture, demonstrating the reliability of this propagation system.
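The clustering distance described above, the arithmetic complement of the Jaccard index on binary band presence/absence profiles, reduces to a few lines; the band profiles below are hypothetical:

```python
# Arithmetic complement of the Jaccard index on binary marker profiles:
# identical profiles give distance 0 (genetic fidelity).

def jaccard_distance(a, b):
    """1 - |intersection| / |union| over binary presence/absence profiles."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return 1 - both / either if either else 0.0

field_plant   = [1, 1, 0, 1, 0, 1]   # hypothetical ISSR band profile
in_vitro_same = [1, 1, 0, 1, 0, 1]   # identical profile: distance 0
other_access  = [0, 1, 1, 0, 0, 1]   # a different accession

print(jaccard_distance(field_plant, in_vitro_same))  # 0.0 -> genetic fidelity
print(jaccard_distance(field_plant, other_access))   # positive -> variability
```

The study then clusters the resulting distance matrix with UPGMA (unweighted pair group method with arithmetic averages).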
ERIC Educational Resources Information Center
Shin, Nana; Vaughn, Brian E.; Kim, Mina; Krzysik, Lisa; Bost, Kelly K.; McBride, Brent; Santos, Antonio J.; Peceguina, Ines; Coppola, Gabrielle
2011-01-01
Achieving consensus on the definition and measurement of social competence (SC) for preschool children has proven difficult in the developmental sciences. We tested a hierarchical model in which SC is assumed to be a second-order latent variable by using longitudinal data (N = 345). We also tested the degree to which peer SC at Time 1 predicted…
Fidelity of the representation of value in decision-making
Dowding, Ben A.
2017-01-01
The ability to make optimal decisions depends on evaluating the expected rewards associated with different potential actions. This process is critically dependent on the fidelity with which reward value information can be maintained in the nervous system. Here we directly probe the fidelity of value representation following a standard reinforcement learning task. The results demonstrate a previously-unrecognized bias in the representation of value: extreme reward values, both low and high, are stored significantly more accurately and precisely than intermediate rewards. The symmetry between low and high rewards pertained despite substantially higher frequency of exposure to high rewards, resulting from preferential exploitation of more rewarding options. The observed variation in fidelity of value representation retrospectively predicted performance on the reinforcement learning task, demonstrating that the bias in representation has an impact on decision-making. A second experiment in which one or other extreme-valued option was omitted from the learning sequence showed that representational fidelity is primarily determined by the relative position of an encoded value on the scale of rewards experienced during learning. Both variability and guessing decreased with the reduction in the number of options, consistent with allocation of a limited representational resource. These findings have implications for existing models of reward-based learning, which typically assume defectless representation of reward value. PMID:28248958
ERIC Educational Resources Information Center
Abry, Tashia D. S.; Rimm-Kaufman, Sara E.; Larsen, Ross A.; Brewer, Alix J.
2011-01-01
The present study examines data collected during the second year of a three-year longitudinal cluster randomized controlled trial, the Responsive Classroom Efficacy Study (RCES). In the context of an RCT, the research questions address naturally occurring variability in the independent variables of interest (i.e., teachers' fidelity of…
Toomey, Elaine; Matthews, James; Hurley, Deirdre A
2017-08-04
Despite an increasing awareness of the importance of fidelity of delivery within complex behaviour change interventions, it is often poorly assessed. This mixed methods study aimed to establish the fidelity of delivery of a complex self-management intervention and explore the reasons for these findings using a convergent/triangulation design. Feasibility trial of the Self-management of Osteoarthritis and Low back pain through Activity and Skills (SOLAS) intervention (ISRCTN49875385), delivered in primary care physiotherapy. 60 SOLAS sessions were delivered across seven sites by nine physiotherapists. Fidelity of delivery of prespecified intervention components was evaluated using (1) audio-recordings (n=60), direct observations (n=24) and self-report checklists (n=60) and (2) individual interviews with physiotherapists (n=9). Quantitatively, fidelity scores were calculated using percentage means and SD of components delivered. Associations between fidelity scores and physiotherapist variables were analysed using Spearman's correlations. Interviews were analysed using thematic analysis to explore potential reasons for fidelity scores. Integration of quantitative and qualitative data occurred at an interpretation level using triangulation. Quantitatively, fidelity scores were high for all assessment methods; with self-report (92.7%) consistently higher than direct observations (82.7%) or audio-recordings (81.7%). There was significant variation between physiotherapists' individual scores (69.8% - 100%). Both qualitative and quantitative data (from physiotherapist variables) found that physiotherapists' knowledge (Spearman's association at p=0.003) and previous experience (p=0.008) were factors that influenced their fidelity. The qualitative data also postulated participant-level (eg, individual needs) and programme-level factors (eg, resources) as additional elements that influenced fidelity. The intervention was delivered with high fidelity. 
This study contributes to the limited evidence regarding fidelity assessment methods within complex behaviour change interventions. The findings suggest a combination of quantitative methods is suitable for the assessment of fidelity of delivery. A mixed methods approach provided a more insightful understanding of fidelity and its influencing factors. ISRCTN49875385; Pre-results.
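Spearman's correlation, used above to relate fidelity scores to physiotherapist variables, is Pearson's correlation computed on ranks. A minimal sketch using the classic no-ties formula, with hypothetical, tie-free data:

```python
# Spearman's rank correlation via the classic formula
# rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), valid when there are no ties.

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

experience = [2, 5, 1, 8, 6, 3, 10, 4, 9]            # years (hypothetical)
fidelity   = [72, 85, 70, 92, 88, 78, 100, 80, 95]   # session score, %
print(spearman(experience, fidelity))  # 1.0: perfectly monotone here
```

Tied ranks need the averaged-rank correction (or Pearson on mid-ranks), which real fidelity data would likely require.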
Toomey, Elaine; Matthews, James; Hurley, Deirdre A
2017-01-01
Objectives and design Despite an increasing awareness of the importance of fidelity of delivery within complex behaviour change interventions, it is often poorly assessed. This mixed methods study aimed to establish the fidelity of delivery of a complex self-management intervention and explore the reasons for these findings using a convergent/triangulation design. Setting Feasibility trial of the Self-management of Osteoarthritis and Low back pain through Activity and Skills (SOLAS) intervention (ISRCTN49875385), delivered in primary care physiotherapy. Methods and outcomes 60 SOLAS sessions were delivered across seven sites by nine physiotherapists. Fidelity of delivery of prespecified intervention components was evaluated using (1) audio-recordings (n=60), direct observations (n=24) and self-report checklists (n=60) and (2) individual interviews with physiotherapists (n=9). Quantitatively, fidelity scores were calculated using percentage means and SD of components delivered. Associations between fidelity scores and physiotherapist variables were analysed using Spearman’s correlations. Interviews were analysed using thematic analysis to explore potential reasons for fidelity scores. Integration of quantitative and qualitative data occurred at an interpretation level using triangulation. Results Quantitatively, fidelity scores were high for all assessment methods; with self-report (92.7%) consistently higher than direct observations (82.7%) or audio-recordings (81.7%). There was significant variation between physiotherapists’ individual scores (69.8% - 100%). Both qualitative and quantitative data (from physiotherapist variables) found that physiotherapists’ knowledge (Spearman’s association at p=0.003) and previous experience (p=0.008) were factors that influenced their fidelity. The qualitative data also postulated participant-level (eg, individual needs) and programme-level factors (eg, resources) as additional elements that influenced fidelity. 
Conclusion The intervention was delivered with high fidelity. This study contributes to the limited evidence regarding fidelity assessment methods within complex behaviour change interventions. The findings suggest a combination of quantitative methods is suitable for the assessment of fidelity of delivery. A mixed methods approach provided a more insightful understanding of fidelity and its influencing factors. Trial registration number ISRCTN49875385; Pre-results. PMID:28780544
2014-07-01
Excerpt (presentation slides): models for variable conditions – use implicit models to eliminate the constraint of the sequence of fast time scales (c, ve); price to pay: lack... Collisions: elastic (Braginskii terms) and inelastic – warning: rates depend on both T and relative velocity. Multi-fluid CR model from... merge/split for particle management, efficient sampling, inelastic collisions... Level grouping schemes of electronic states, for dynamical coarse
Population coding in sparsely connected networks of noisy neurons.
Tripp, Bryan P; Orchard, Jeff
2012-01-01
This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.
Dunne, John P.; John, Jasmin G.; Adcroft, Alistair J.; Griffies, Stephen M.; Hallberg, Robert W.; Shevliakova, Elena; Stouffer, Ronald J.; Cooke, William; Dunne, Krista A.; Harrison, Matthew J.; Krasting, John P.; Malyshev, Sergey L.; Milly, P.C.D.; Phillipps, Peter J.; Sentman, Lori A.; Samuels, Bonita L.; Spelman, Michael J.; Winton, Michael; Wittenberg, Andrew T.; Zadeh, Niki
2012-01-01
We describe the physical climate formulation and simulation characteristics of two new global coupled carbon-climate Earth System Models, ESM2M and ESM2G. These models demonstrate climate fidelity similar to that of the Geophysical Fluid Dynamics Laboratory's previous CM2.1 climate model while incorporating explicit and consistent carbon dynamics. The two models differ exclusively in the physical ocean component; ESM2M uses the Modular Ocean Model version 4.1 with vertical pressure layers, while ESM2G uses Generalized Ocean Layer Dynamics with a bulk mixed layer and interior isopycnal layers. Differences in the ocean mean state include the thermocline depth, which is relatively deep in ESM2M and relatively shallow in ESM2G compared to observations. The crucial role of ocean dynamics in climate variability is highlighted by the El Niño-Southern Oscillation, which is overly strong in ESM2M and overly weak in ESM2G relative to observations. Thus, while ESM2G might better represent climate changes relating to total heat content variability given its lack of long-term drift, gyre circulation and ventilation in the North Pacific, tropical Atlantic and Indian Oceans, and depth structure in the overturning and abyssal flows, ESM2M might better represent climate changes relating to surface circulation given its superior surface temperature, salinity and height patterns, tropical Pacific circulation and variability, and Southern Ocean dynamics. Our overall assessment is that neither model is fundamentally superior to the other, and that both models achieve sufficient fidelity to allow meaningful climate and earth system modeling applications. This affords us the ability to assess the role of ocean configuration on earth system interactions in the context of two state-of-the-art coupled carbon-climate models.
Gonzalez-Cota, Alan; Chiravuri, Srinivas; Stansfield, R Brent; Brummett, Chad M; Hamstra, Stanley J
2013-01-01
The purpose of this study was to determine whether high-fidelity simulators provide greater benefit than low-fidelity models in training fluoroscopy-guided transforaminal epidural injection. This educational study was a single-center, prospective, randomized 3-arm pretest-posttest design with a control arm. Eighteen anesthesia and physical medicine and rehabilitation residents were instructed how to perform a fluoroscopy-guided transforaminal epidural injection and assessed by experts on a reusable injectable phantom cadaver. The high- and low-fidelity groups received 30 minutes of supervised hands-on practice according to group assignment, and the control group received 30 minutes of didactic instruction from an expert. We found no differences at posttest between the high- and low-fidelity groups on global ratings of performance (P = 0.17) or checklist scores (P = 0.81). Participants who received either form of hands-on training significantly outperformed the control group on both the global rating of performance (control vs low-fidelity, P = 0.0048; control vs high-fidelity, P = 0.0047) and the checklist (control vs low-fidelity, P = 0.0047; control vs high-fidelity, P = 0.0047). Training an epidural procedure using a low-fidelity model may be equally effective as training on a high-fidelity model. These results are consistent with previous research on a variety of interventional procedures and further demonstrate the potential impact of simple, low-fidelity training models.
NASA Astrophysics Data System (ADS)
Kenway, Gaetan K. W.
This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine-accurate derivative techniques and is verified to yield fully consistent derivatives by comparing against the complex-step method. The fully coupled, large-scale adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300,000 DOF. Two design optimization problems are solved: one where takeoff gross weight is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors.
The TOGW minimization results in a 4.2% reduction in TOGW with a 6.6% fuel burn reduction, while the fuel burn minimization results in an 11.2% fuel burn reduction with no change to the takeoff gross weight.
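The nonlinear block Gauss-Seidel coupling used as the baseline in the thesis can be sketched with a toy two-discipline system. Both scalar models below are invented for illustration (not the thesis's CFD/FEM solvers): an "aerodynamic" load that depends on deflection, and a linear "structural" response.

```python
# Toy sketch of nonlinear block Gauss-Seidel coupling between an
# aerodynamic block and a structural block (hypothetical scalar models).
# Each pass solves one discipline with the other's latest output, and
# the loop repeats until the coupled state stops changing.

def aero_load(deflection):
    # Load decreases slightly as the structure deflects (made-up model).
    return 100.0 / (1.0 + 0.1 * deflection)

def structural_deflection(load, stiffness=50.0):
    # Linear spring model: deflection proportional to load.
    return load / stiffness

def gauss_seidel(tol=1e-10, max_iter=100):
    d = 0.0
    for it in range(max_iter):
        load = aero_load(d)                   # aero solve at current shape
        d_new = structural_deflection(load)   # structural solve at new load
        if abs(d_new - d) < tol:
            return d_new, it + 1
        d = d_new
    return d, max_iter

d_star, iters = gauss_seidel()
```

Because the toy map is a contraction, the iteration converges to the coupled fixed point; a Newton-Krylov method instead solves the aero and structural residuals simultaneously, which is what yields the speedup reported in the thesis.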
Methodology for Variable Fidelity Multistage Optimization under Uncertainty
2011-03-31
problem selected for the application of the new optimization methodology is a Single Stage To Orbit (SSTO) expendable launch vehicle (ELV). Three...the primary exercise of the variable fidelity optimization portion of the code. SSTO vehicles have been discussed almost exclusively in the context...of reusable launch vehicles (RLV). There is very little discussion in recent literature of SSTO designs which are expendable. In the light of the
21st Century Power Partnership: September 2016 Fellowship Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reber, Timothy J.; Rambau, Prudence; Mdhluli, Sipho
This report details the 21st Century Power Partnership fellowship from September 2016. The fellowship is a follow-up to the Technical Audit of Eskom's Medium- and Long-term Modelling Capabilities, conducted by the U.S. National Renewable Energy Laboratory (NREL) in April 2016. The prospect and role of variable renewable energy (vRE) in South Africa pose new modelling-related challenges that Eskom is actively working to address by improving the fidelity of its PLEXOS LT and ST models.
Hays, G C
2000-09-21
Sea turtles nest on sandy beaches and tend to show high fidelity to specific nesting areas, but, despite this fidelity, the inter-annual variation in nesting numbers may be large. This variation may reflect the fact that turtles do not usually nest in consecutive years. Here, theoretical models are developed in which the interval between successive nesting years (the remigration interval) reflects conditions encountered on the feeding grounds, with good feeding years leading to a reduction in the remigration interval and vice versa. These simple models produce high levels of inter-annual variation in nesting numbers with, on occasion, almost no turtles nesting in some years even when the population is large and stable. The implications for assessing the size of sea turtle populations are considered. Copyright 2000 Academic Press.
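The remigration-interval mechanism described above can be sketched as a minimal simulation. All parameters, intervals, and probabilities below are hypothetical, not those of the published models:

```python
import random

# Minimal sketch of the remigration-interval idea: each female waits a
# number of years between nesting seasons, and a good feeding year
# shortens the next interval for turtles that have just nested.
# Parameter values are invented for illustration.

def simulate(n_turtles=1000, n_years=60, seed=1):
    rng = random.Random(seed)
    # Years until each turtle next nests.
    wait = [rng.randint(1, 4) for _ in range(n_turtles)]
    counts = []
    for _ in range(n_years):
        good = rng.random() < 0.5            # feeding conditions this year
        counts.append(sum(1 for w in wait if w <= 1))
        for i in range(n_turtles):
            if wait[i] <= 1:
                wait[i] = 2 if good else 4   # shorter interval after a good year
            else:
                wait[i] -= 1
    return counts

counts = simulate()
```

Even though the simulated population is constant, the annual nesting counts fluctuate strongly from year to year, which is the qualitative point of the abstract: census counts at the beach are a noisy proxy for population size.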
2014-03-27
fidelity. This pairing is accomplished through the use of a space mapping technique, which is a process where the design space of a lower fidelity model...is aligned with that of a higher fidelity model. The intent of applying space mapping techniques to the field of surrogate construction is to leverage the
Horton, Douglas; Rotondo, Emma; Paz Ybarnegaray, Rodrigo; Hareau, Guy; Devaux, André; Thiele, Graham
2013-08-01
Participatory approaches are frequently recommended for international development programs, but few have been evaluated. From 2007 to 2010 the Andean Change Alliance evaluated an agricultural research and development approach known as the "Participatory Market Chain Approach" (PMCA). Based on a study of four cases, this paper examines the fidelity of implementation, the factors that influenced implementation and results, and the PMCA change model. We identify three types of deviation from the intervention protocol (lapses, creative adaptations, and true infidelities) and five groups of variables that influenced PMCA implementation and results (attributes of the macro context, the market chain, the key actors, rules in use, and the capacity development strategy). There was insufficient information to test the validity of the PMCA change model, but results were greatest where the PMCA was implemented with highest fidelity. Our analysis suggests that the single most critical component of the PMCA is engagement of market agents - not just farmers - throughout the exercise. We present four lessons for planning and evaluating participatory approaches related to the use of action and change models, the importance of monitoring implementation fidelity, the limits of baseline survey data for outcome evaluation, and the importance of capacity development for implementers. Copyright © 2013 Elsevier Ltd. All rights reserved.
Liquefied Bleed for Stability and Efficiency of High Speed Inlets
NASA Technical Reports Server (NTRS)
Saunders, J. David; Davis, David; Barsi, Stephen J.; Deans, Matthew C.; Weir, Lois J.; Sanders, Bobby W.
2014-01-01
A mission analysis code was developed to perform a trade study on the effectiveness of liquefying bleed for the inlet of the first stage of a TSTO vehicle. By liquefying bleed, the vehicle weight (TOGW) could be reduced by 7 to 23%. Numerous simplifying assumptions were made and lessons were learned. Increased accuracy in future analyses can be achieved by: including a higher fidelity model to capture the effect of rescaling (variable vehicle TOGW); refining the specific thrust and specific impulse models (T/ṁa and Isp) to preserve fuel-to-air ratio; implementing LH2 for T/ṁa and Isp; correlating the baseline design to other mission analyses and correcting vehicle design elements; implementing angle-of-attack effects on inlet characteristics; refining aerodynamic performance (to improve the L/D ratio at higher Mach numbers); examining the benefit of partial cooling or densification of the bleed air stream; incorporating higher fidelity weight estimates for the liquefied bleed system (heat exchange and liquid storage versus bleed duct weights) when more fully developed; adding trim drag or 6-degree-of-freedom trajectory analysis for higher fidelity; and investigating vehicle optimization for each of the bleed configurations.
Continuous-variable controlled-Z gate using an atomic ensemble
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Mingfeng; Jiang Nianquan; Jin Qingli
2011-06-15
The continuous-variable controlled-Z gate is a canonical two-mode gate for universal continuous-variable quantum computation and is considered one of the most fundamental continuous-variable quantum gates. Here we present a scheme for realizing a continuous-variable controlled-Z gate between two optical beams using an atomic ensemble. The gate is performed by simply sending the two beams, propagating in two orthogonal directions, twice through a spin-squeezed atomic medium. Its fidelity approaches unity if the input atomic state is infinitely squeezed. Considering the noise effects due to atomic decoherence and light losses, we show that the achievable fidelities of the scheme are still quite high within presently available techniques.
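For context, the ideal continuous-variable controlled-Z gate acts on the quadratures as a symplectic transformation. The sketch below uses only the standard textbook relations (p1 → p1 + g·x2, p2 → p2 + g·x1, with x1 and x2 unchanged), not the paper's atomic-ensemble dynamics:

```python
# Sketch of the ideal CV controlled-Z gate on quadratures
# (standard relations, not the atomic-ensemble implementation):
# x1, x2 unchanged; p1 -> p1 + g*x2; p2 -> p2 + g*x1.
# The transformation is symplectic, so canonical commutation
# relations are preserved.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

g = 1.0  # gate gain

# Quadrature ordering: (x1, p1, x2, p2)
S = [[1, 0, 0, 0],
     [0, 1, g, 0],
     [0, 0, 1, 0],
     [g, 0, 0, 1]]

# Symplectic form for the same ordering.
Omega = [[0, 1, 0, 0],
         [-1, 0, 0, 0],
         [0, 0, 0, 1],
         [0, 0, -1, 0]]

# Symplectic condition: S^T Omega S == Omega.
check = matmul(matmul(transpose(S), Omega), S)

# Example action on input quadratures (x1, p1, x2, p2) = (1, 0, 2, 0).
out = matvec(S, [1, 0, 2, 0])  # -> (1, 2, 2, 1) for g = 1
```

Verifying the symplectic condition is a quick sanity check that the gate is a valid Gaussian unitary; in the paper, imperfections (finite squeezing, decoherence, losses) then degrade the fidelity of this ideal map.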
A conceptual framework for implementation fidelity
Carroll, Christopher; Patterson, Malcolm; Wood, Stephen; Booth, Andrew; Rick, Jo; Balain, Shashi
2007-01-01
Background Implementation fidelity refers to the degree to which an intervention or programme is delivered as intended. Only by understanding and measuring whether an intervention has been implemented with fidelity can researchers and practitioners gain a better understanding of how and why an intervention works, and the extent to which outcomes can be improved. Discussion The authors undertook a critical review of existing conceptualisations of implementation fidelity and developed a new conceptual framework for understanding and measuring the process. The resulting theoretical framework requires testing by empirical research. Summary Implementation fidelity is an important source of variation affecting the credibility and utility of research. The conceptual framework presented here offers a means for measuring this variable and understanding its place in the process of intervention implementation. PMID:18053122
Assessing fidelity of delivery of smoking cessation behavioural support in practice.
Lorencatto, Fabiana; West, Robert; Christopherson, Charlotte; Michie, Susan
2013-04-04
Effectiveness of evidence-based behaviour change interventions is likely to be undermined by failure to deliver interventions as planned. Behavioural support for smoking cessation can be a highly cost-effective, life-saving intervention. However, in practice, outcomes are highly variable. Part of this may be due to variability in fidelity of intervention implementation. To date, there have been no published studies on this. The present study aimed to: evaluate a method for assessing fidelity of behavioural support; assess fidelity of delivery in two English Stop-Smoking Services; and compare the extent of fidelity according to session types, duration, individual practitioners, and component behaviour change techniques (BCTs). Treatment manuals and transcripts of 34 audio-recorded behavioural support sessions were obtained from two Stop-Smoking Services and coded into component BCTs using a taxonomy of 43 BCTs. Inter-rater reliability was assessed using percentage agreement. Fidelity was assessed by examining the proportion of BCTs specified in the manuals that were delivered in individual sessions. This was assessed by session type (i.e., pre-quit, quit, post-quit), duration, individual practitioner, and BCT. Inter-coder reliability was high (87.1%). On average, 66% of manual-specified BCTs were delivered per session (SD 15.3, range: 35% to 90%). In Service 1, average fidelity was highest for post-quit sessions (69%) and lowest for pre-quit (58%). In Service 2, fidelity was highest for quit-day (81%) and lowest for post-quit sessions (56%). Session duration was not significantly correlated with fidelity. Individual practitioner fidelity ranged from 55% to 78%. Individual manual-specified BCTs were delivered on average 63% of the time (SD 28.5, range: 0 to 100%). The extent to which smoking cessation behavioural support is delivered as specified in treatment manuals can be reliably assessed using transcripts of audiotaped sessions. 
This allows the investigation of the implementation of evidence-based practice in relation to smoking cessation, a first step in designing interventions to improve it. There are grounds for believing that fidelity in the English Stop-Smoking Services may be low and that routine monitoring is warranted.
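The fidelity and inter-coder reliability measures used in the study reduce to simple set operations: fidelity is the proportion of manual-specified BCTs actually delivered, and percentage agreement is the fraction of taxonomy items on which two coders make the same present/absent call. A sketch with hypothetical BCT labels (not the content of the actual 43-item taxonomy):

```python
# Illustrative sketch (hypothetical BCT labels): per-session fidelity as
# the proportion of manual-specified behaviour change techniques (BCTs)
# delivered, and inter-coder percentage agreement over a 43-item taxonomy.

TAXONOMY = [f"BCT{i}" for i in range(1, 44)]  # placeholder 43-item taxonomy

def session_fidelity(delivered, manual_specified):
    """Percentage of manual-specified BCTs coded as delivered."""
    return 100.0 * len(set(delivered) & set(manual_specified)) / len(manual_specified)

def percent_agreement(coder_a, coder_b, taxonomy=TAXONOMY):
    """Agreement = same present/absent call for each BCT in the taxonomy."""
    a, b = set(coder_a), set(coder_b)
    agree = sum(1 for bct in taxonomy if (bct in a) == (bct in b))
    return 100.0 * agree / len(taxonomy)

manual = ["BCT1", "BCT2", "BCT3", "BCT4", "BCT5"]   # specified in the manual
coder_a = ["BCT1", "BCT2", "BCT4"]                  # coded from transcript
coder_b = ["BCT1", "BCT2", "BCT4", "BCT9"]          # second coder
print(session_fidelity(coder_a, manual))            # 60.0
print(round(percent_agreement(coder_a, coder_b), 1))  # disagree on 1 of 43
```

Averaging `session_fidelity` across sessions, practitioners, or session types reproduces the kinds of summary figures reported in the abstract.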
Yielding physically-interpretable emulators - A Sparse PCA approach
NASA Astrophysics Data System (ADS)
Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.
2015-12-01
Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogate high-fidelity process-based models with lower-order dynamic emulators. With POD, dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model, to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally complex and can hardly be given a physically meaningful interpretation, as each basis function is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable, for the same level of dimensionality reduction, while yielding better insight into the main process dynamics.
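A toy illustration of the sparsity idea: the leading principal component can be sparsified by zeroing small loadings during power iteration. This is a simplified stand-in for SPCA (the data below are invented, not DYRESM-CAEDYM output), but it shows how sparsity aids interpretation: the weakly informative third variable drops out of the basis.

```python
# Toy sketch of a sparse leading principal component via power iteration
# with thresholding of small loadings (a simple stand-in for SPCA).
# Data and threshold are hypothetical.

def covariance(X):
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    return [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) for b in range(p)]
            for a in range(p)]

def sparse_pc(X, threshold=0.1, iters=50):
    C = covariance(X)
    p = len(C)
    v = [1.0 / p ** 0.5] * p
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        v = [x if abs(x) >= threshold else 0.0 for x in v]  # sparsify
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

# Variables 0 and 1 carry the same strong signal; variable 2 is weak noise.
signal = [1, 2, 3, 4, 5, 6, 7, 8]
noise = [0.1 if i % 2 == 0 else -0.1 for i in range(8)]
X = [[signal[i], signal[i], noise[i]] for i in range(8)]
pc = sparse_pc(X)
# The sparse loading keeps the two correlated variables and zeroes out
# the weak third one, which is what makes the basis interpretable.
```

Production SPCA solvers (e.g., penalized matrix decompositions) are more sophisticated, but the interpretive payoff is the same: each basis function touches only a few state variables.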
Simulation of South-Asian Summer Monsoon in a GCM
NASA Astrophysics Data System (ADS)
Ajayamohan, R. S.
2007-10-01
Major characteristics of Indian summer monsoon climate are analyzed using simulations from the upgraded version of Florida State University Global Spectral Model (FSUGSM). The Indian monsoon has been studied in terms of mean precipitation and low-level and upper-level circulation patterns and compared with observations. In addition, the model's fidelity in simulating observed monsoon intraseasonal variability, interannual variability and teleconnection patterns is examined. The model is successful in simulating the major rainbelts over the Indian monsoon region. However, the model exhibits bias in simulating the precipitation bands over the South China Sea and the West Pacific region. Seasonal mean circulation patterns of low-level and upper-level winds are consistent with the model's precipitation pattern. Basic features like onset and peak phase of monsoon are realistically simulated. However, model simulation indicates an early withdrawal of monsoon. Northward propagation of rainbelts over the Indian continent is simulated fairly well, but the propagation is weak over the ocean. The model simulates the meridional dipole structure associated with the monsoon intraseasonal variability realistically. The model is unable to capture the observed interannual variability of monsoon and its teleconnection patterns. Estimate of potential predictability of the model reveals the dominating influence of internal variability over the Indian monsoon region.
Mode Selection Techniques in Variable Mass Flexible Body Modeling
NASA Technical Reports Server (NTRS)
Quiocho, Leslie J.; Ghosh, Tushar K.; Frenkel, David; Huynh, An
2010-01-01
In developing a flexible body spacecraft simulation for the Launch Abort System of the Orion vehicle, when a rapid mass depletion takes place, the dynamics problem with time varying eigenmodes had to be addressed. Three different techniques were implemented, with different trade-offs made between performance and fidelity. A number of technical issues had to be solved in the process. This paper covers the background of the variable mass flexibility problem, the three approaches to simulating it, and the technical issues that were solved in formulating and implementing them.
NASA Technical Reports Server (NTRS)
Davis, Anthony B.; Frankenberg, Christian
2012-01-01
Success in three aspects of the OCO-2 mission is threatened by unaccounted-for spatial variability effects, all involving atmospheric scattering: 1. Low/moderately opaque clouds can escape the prescreening by mimicking a brighter surface. 2. Prescreening does not account for the long-range radiative impact (adjacency effect) of nearby clouds. Need for extended cloud masking? 3. Oblique looks in target mode are highly exposed to surface adjacency and aerosol variability effects. We'll be covering all three bases!
Denadai, Rafael; Oshiiwa, Marie; Saad-Hossne, Rogério
2014-03-01
The search for alternative and effective forms of training simulation is needed due to ethical and medico-legal aspects involved in training surgical skills on living patients, human cadavers and living animals. To evaluate if the bench model fidelity interferes in the acquisition of elliptical excision skills by novice medical students. Forty novice medical students were randomly assigned to 5 practice conditions with instructor-directed elliptical excision skills' training (n = 8): didactic materials (control); organic bench model (low-fidelity); ethylene-vinyl acetate bench model (low-fidelity); chicken legs' skin bench model (high-fidelity); or pig foot skin bench model (high-fidelity). Pre- and post-tests were applied. Global rating scale, effect size, and self-perceived confidence based on Likert scale were used to evaluate all elliptical excision performances. The analysis showed that after training, the students practicing on bench models had better performance based on Global rating scale (all P < 0.0000) and felt more confident to perform elliptical excision skills (all P < 0.0000) when compared to the control. There was no significant difference (all P > 0.05) between the groups that trained on bench models. The magnitude of the effect (basic cutaneous surgery skills' training) was considered large (>0.80) in all measurements. The acquisition of elliptical excision skills after instructor-directed training on low-fidelity bench models was similar to the training on high-fidelity bench models; and there was a more substantial increase in elliptical excision performances of students that trained on all simulators compared to the learning on didactic materials.
NASA Astrophysics Data System (ADS)
Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph
2018-05-01
This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
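As a greatly simplified stand-in for the paper's Bayesian machinery, parameter variability can be propagated to ignition delay by Monte Carlo sampling over an Arrhenius-style correlation. All parameter values below are hypothetical, and the paper itself uses Bayesian calibration with an embedded model-error term and non-intrusive spectral projection rather than plain Monte Carlo:

```python
import math
import random

# Simplified sketch: propagate variability in an Arrhenius-style
# ignition-delay correlation tau = A * exp(Ea / (R*T)) by Monte Carlo.
# Parameter values are invented for illustration.

R = 8.314  # J/(mol K)

def ignition_delay(T, A, Ea):
    return A * math.exp(Ea / (R * T))

def propagate(T, n=20000, seed=0):
    rng = random.Random(seed)
    A, Ea_mean, Ea_sd = 1e-9, 1.5e5, 5e3   # hypothetical values
    samples = [ignition_delay(T, A, rng.gauss(Ea_mean, Ea_sd))
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var ** 0.5

mean_tau, sd_tau = propagate(T=900.0)
# sd_tau is the predictive "error bar" induced by parameter variability.
```

The spread in the output plays the role of the predictive error bars in the abstract; allowing extra variability in selected parameters is how the model-error contribution is folded into those bars.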
Eastern Renewable Generation Integration Study: Redefining What’s Possible for Renewable Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloom, Aaron
NREL project manager Aaron Bloom introduces NREL’s Eastern Renewable Generation Integration Study (ERGIS), along with the high-performance computing capabilities and new methodologies that allowed NREL to model operations of the Eastern Interconnection at unprecedented fidelity. ERGIS shows that the Eastern Interconnection can balance the variability and uncertainty of wind and solar photovoltaics at a 5-minute level for one simulated year.
Flight simulator fidelity assessment in a rotorcraft lateral translation maneuver
NASA Technical Reports Server (NTRS)
Hess, R. A.; Malsbury, T.; Atencio, A., Jr.
1992-01-01
A model-based methodology for assessing flight simulator fidelity in closed-loop fashion is exercised in analyzing a rotorcraft low-altitude maneuver for which flight test and simulation results were available. The addition of a handling qualities sensitivity function to a previously developed model-based assessment criteria allows an analytical comparison of both performance and handling qualities between simulation and flight test. Model predictions regarding the existence of simulator fidelity problems are corroborated by experiment. The modeling approach is used to assess analytically the effects of modifying simulator characteristics on simulator fidelity.
NASA Technical Reports Server (NTRS)
Persing, T. Ray; Bellish, Christine A.; Brandon, Jay; Kenney, P. Sean; Carzoo, Susan; Buttrill, Catherine; Guenther, Arlene
2005-01-01
Several aircraft airframe modeling approaches are currently being used in the DoD community for acquisition, threat evaluation, training, and other purposes. To date there has been no clear empirical study of the impact of airframe simulation fidelity on piloted real-time aircraft simulation study results, or when use of a particular level of fidelity is indicated. This paper documents a series of piloted simulation studies using three different levels of airframe model fidelity. This study was conducted using the NASA Langley Differential Maneuvering Simulator. Evaluations were conducted with three pilots for scenarios requiring extensive maneuvering of the airplanes during air combat. In many cases, a low-fidelity modified point-mass model may be sufficient to evaluate the combat effectiveness of the aircraft. However, in cases where high angle-of-attack flying qualities and aerodynamic performance are a factor or when precision tracking ability of the aircraft must be represented, use of high-fidelity models is indicated.
Brady, Teresa J; Murphy, Louise B; O'Colmain, Benita J; Hobson, Reeti Desai
2017-09-01
To evaluate whether implementation factors or fidelity moderate chronic disease self-management education program outcomes. Meta-analysis of 34 Arthritis Self-Management Program and Chronic Disease Self-Management Program studies. Community. N = 10 792. Twelve implementation factors: program delivery fidelity and setting and leader and participant characteristics. Eighteen program outcomes: self-reported health behaviors, physical health status, psychological health status, and health-care utilization. Meta-analysis using pooled effect sizes. Modest to moderate statistically significant differences for 4 of 6 implementation factors; these findings were counterintuitive with better outcomes when leaders and participants were unpaid, leaders had less than minimum training, and implementation did not meet fidelity requirements. Exploratory study findings suggest that these interventions tolerate some variability in implementation factors. Further work is needed to identify key elements where fidelity is essential for intervention effectiveness.
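The pooled-effect-size computation underlying such a meta-analysis is a short calculation. Here is a sketch of fixed-effect inverse-variance pooling with hypothetical study data (not the effect sizes from the 34 studies):

```python
# Sketch of inverse-variance (fixed-effect) pooling of per-study effect
# sizes, the basic operation behind pooled-effect meta-analysis.
# Effect sizes and variances below are hypothetical.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect and its standard error."""
    weights = [1.0 / v for v in variances]
    est = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return est, se

effects = [0.30, 0.45, 0.10, 0.25]     # per-study standardized effects
variances = [0.02, 0.05, 0.01, 0.03]   # per-study sampling variances
est, se = pooled_effect(effects, variances)
ci = (est - 1.96 * se, est + 1.96 * se)  # approximate 95% CI
```

Moderator analyses like those in the abstract then compare pooled estimates across subgroups (e.g., studies meeting vs. not meeting fidelity requirements), often with a random-effects variant that adds a between-study variance term.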
Efficient Numerical Simulation of Aerothermoelastic Hypersonic Vehicles
NASA Astrophysics Data System (ADS)
Klock, Ryan J.
Hypersonic vehicles operate in a high-energy flight environment characterized by high dynamic pressures, high thermal loads, and non-equilibrium flow dynamics. This environment induces strong fluid, thermal, and structural dynamics interactions that are unique to this flight regime. If these vehicles are to be effectively designed and controlled, then a robust and intuitive understanding of each of these disciplines must be developed not only in isolation, but also when coupled. Limitations on scaling and the availability of adequate test facilities mean that physical investigation is infeasible. Ever growing computational power offers the ability to perform elaborate numerical simulations, but also has its own limitations. The state of the art in numerical simulation is either to create ever more high-fidelity physics models that do not couple well and require too much processing power to consider more than a few seconds of flight, or to use low-fidelity analytical models that can be tightly coupled and processed quickly, but do not represent realistic systems due to their simplifying assumptions. Reduced-order models offer a middle ground by distilling the dominant trends of high-fidelity training solutions into a form that can be quickly processed and more tightly coupled. This thesis presents a variably coupled, variable-fidelity, aerothermoelastic framework for the simulation and analysis of high-speed vehicle systems using analytical, reduced-order, and surrogate modeling techniques. Full launch-to-landing flights of complete vehicles are considered and used to define flight envelopes with aeroelastic, aerothermal, and thermoelastic limits, tune in-the-loop flight controllers, and inform future design considerations. 
A partitioned approach to vehicle simulation is considered in which regions dominated by particular combinations of processes are made separate from the overall solution and simulated by a specialized set of models to improve overall processing speed and overall solution fidelity. A number of enhancements to this framework are made through 1. the implementation of a publish-subscribe code architecture for rapid prototyping of physics and process models. 2. the implementation of a selection of linearization and model identification methods including high-order pseudo-time forward difference, complex-step, and direct identification from ordinary differential equation inspection. 3. improvements to the aeroheating and thermal models with non-equilibrium gas dynamics and generalized temperature dependent material thermal properties. A variety of model reduction and surrogate model techniques are applied to a representative hypersonic vehicle on a terminal trajectory to enable complete aerothermoelastic flight simulations. Multiple terminal trajectories of various starting altitudes and Mach numbers are optimized to maximize final kinetic energy of the vehicle upon reaching the surface. Surrogate models are compared to represent the variation of material thermal properties with temperature. A new method is developed and shown to be both accurate and computationally efficient. While the numerically efficient simulation of high-speed vehicles is developed within the presented framework, the goal of real time simulation is hampered by the necessity of multiple nested convergence loops. An alternative all-in-one surrogate model method is developed based on singular-value decomposition and regression that is near real time. Finally, the aeroelastic stability of pressurized cylindrical shells is investigated in the context of a maneuvering axisymmetric high-speed vehicle. 
Moderate internal pressurization is numerically shown to decrease stability, a result demonstrated experimentally in the literature but not well reproduced analytically. Insights drawn from the time-simulation results are used to inform approaches for future vehicle model development.
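The all-in-one surrogate idea described above (singular-value decomposition plus regression) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the field, flight parameters, and functional forms are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each column is a "snapshot" of some field
# (e.g. surface temperature) at one flight condition (Mach, altitude).
n_points, n_snapshots = 200, 40
params = rng.uniform([4.0, 20.0], [8.0, 40.0], size=(n_snapshots, 2))
x = np.linspace(0.0, 1.0, n_points)
snapshots = np.stack(
    [np.sin(np.pi * x) * m + 0.05 * a * x for m, a in params], axis=1
)

# 1) SVD of the snapshot matrix yields an orthonormal modal basis.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :3]                 # keep the dominant modes
coeffs = basis.T @ snapshots     # modal coefficients of each snapshot

# 2) Regress the modal coefficients on the flight parameters (least squares).
A = np.hstack([params, np.ones((n_snapshots, 1))])
W, *_ = np.linalg.lstsq(A, coeffs.T, rcond=None)

# 3) Near-real-time surrogate evaluation at a new flight condition:
#    no convergence loops, just two small matrix products.
def predict(mach, alt):
    return basis @ (np.array([mach, alt, 1.0]) @ W)
```

The offline cost (SVD and regression) is paid once; each online evaluation is a pair of small matrix multiplies, which is what makes near-real-time use plausible.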
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low-fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions on the width of a free plane jet. We classify a condition as a failure when the corresponding jet width is below a small threshold, such that failure is a rare event (the failure probability is smaller than 0.001). We estimate the failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low-fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high-fidelity model. In the presence of multiple low-fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities and thus can have a large impact on fluid flow applications. This work was funded by DARPA.
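The multi-fidelity importance sampling step can be sketched with toy stand-ins for both models. The "jet width" functions, threshold, and Gaussian biasing fit below are illustrative assumptions, not the paper's models; only the structure (cheap exploration, biasing density, reweighted high-fidelity estimate) reflects the method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: "jet width" as a function of one uncertain inlet
# parameter x ~ N(0, 1). Both functions are invented for illustration.
def width_hi(x):            # expensive high-fidelity model (stand-in)
    return 1.0 + 0.3 * x

def width_lo(x):            # cheap, slightly biased low-fidelity model
    return 1.05 + 0.3 * x

threshold = 0.2             # failure: jet width below this value

# Step 1: explore cheaply to locate the failure region.
x_explore = rng.normal(0.0, 1.0, 100_000)
fail_lo = x_explore[width_lo(x_explore) < threshold]

# Step 2: fit a Gaussian biasing density q to the low-fidelity failures,
# slightly widened so q covers the true failure region.
mu_q, sd_q = fail_lo.mean(), fail_lo.std() + 0.1

# Step 3: a small number of high-fidelity runs, reweighted by p/q so the
# estimator stays unbiased under the nominal density p.
x_q = rng.normal(mu_q, sd_q, 2_000)
p = np.exp(-0.5 * x_q**2) / np.sqrt(2 * np.pi)
q = np.exp(-0.5 * ((x_q - mu_q) / sd_q) ** 2) / (sd_q * np.sqrt(2 * np.pi))
p_fail = np.mean((width_hi(x_q) < threshold) * (p / q))
```

The low-fidelity model is only used to shape the biasing density; any bias it carries affects the estimator's variance, not its mean.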
Denadai, Rafael; Oshiiwa, Marie; Saad-Hossne, Rogério
2014-01-01
Background: The search for alternative and effective forms of training simulation is needed due to ethical and medico-legal aspects involved in training surgical skills on living patients, human cadavers and living animals. Aims: To evaluate whether bench model fidelity interferes with the acquisition of elliptical excision skills by novice medical students. Materials and Methods: Forty novice medical students were randomly assigned to 5 practice conditions with instructor-directed elliptical excision skills’ training (n = 8): didactic materials (control); organic bench model (low-fidelity); ethylene-vinyl acetate bench model (low-fidelity); chicken legs’ skin bench model (high-fidelity); or pig foot skin bench model (high-fidelity). Pre- and post-tests were applied. Global rating scale, effect size, and self-perceived confidence based on a Likert scale were used to evaluate all elliptical excision performances. Results: The analysis showed that after training, the students practicing on bench models had better performance based on the Global rating scale (all P < 0.0001) and felt more confident to perform elliptical excision skills (all P < 0.0001) when compared to the control. There was no significant difference (all P > 0.05) between the groups that trained on bench models. The magnitude of the effect (basic cutaneous surgery skills’ training) was considered large (>0.80) in all measurements. Conclusion: The acquisition of elliptical excision skills after instructor-directed training on low-fidelity bench models was similar to the training on high-fidelity bench models; and there was a more substantial increase in elliptical excision performances of students that trained on all simulators compared to the learning on didactic materials. PMID:24700937
Adaptive control of a jet turboshaft engine driving a variable pitch propeller using multiple models
NASA Astrophysics Data System (ADS)
Ahmadian, Narjes; Khosravi, Alireza; Sarhadi, Pouria
2017-08-01
In this paper, a multiple model adaptive control (MMAC) method is proposed for a gas turbine engine. The model of a twin-spool turboshaft engine driving a variable pitch propeller includes various operating points. Variations in fuel flow and propeller pitch inputs produce different operating conditions, which force the controller to adapt rapidly. The three key operating points for the entire flight envelope are idle, cruise, and full thrust. A multi-input multi-output (MIMO) version of second-level adaptation using multiple models is developed, and a stability analysis using the Lyapunov method is presented. The proposed method is compared with two conventional techniques: first-level adaptation and model reference adaptive control. Simulation results for the JetCat SPT5 turboshaft engine demonstrate the performance and fidelity of the proposed method.
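The multiple-model idea can be illustrated with the simpler first-level (switching) scheme that the paper uses as a baseline: each operating point carries its own linearized model, and the supervisor activates the model whose prediction best matches the measured response. The scalar plant and all numbers below are hypothetical; the paper's second-level adaptation blends model estimates rather than switching.

```python
# Each operating point gets a linearized scalar model y' = a*y + b*u
# together with (implicitly) its own controller gains. Values are made up.
models = {"idle": (-0.5, 0.8), "cruise": (-1.0, 1.2), "full thrust": (-2.0, 2.0)}
a_true, b_true = -1.1, 1.25        # plant currently operating near cruise

dt, y, u = 0.01, 1.0, 0.5
y_next = y + dt * (a_true * y + b_true * u)      # measured response

# One-step prediction error of every candidate model.
errors = {
    name: abs(y + dt * (a * y + b * u) - y_next)
    for name, (a, b) in models.items()
}
active = min(errors, key=errors.get)   # model/controller to switch in
```

In a real implementation the errors would be filtered over a window before switching, to avoid chattering between models near regime boundaries.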
Climate drift of AMOC, North Atlantic salinity and arctic sea ice in CFSv2 decadal predictions
NASA Astrophysics Data System (ADS)
Huang, Bohua; Zhu, Jieshun; Marx, Lawrence; Wu, Xingren; Kumar, Arun; Hu, Zeng-Zhen; Balmaseda, Magdalena A.; Zhang, Shaoqing; Lu, Jian; Schneider, Edwin K.; Kinter, James L., III
2015-01-01
There are potential advantages to extending operational seasonal forecast models to predict decadal variability but major efforts are required to assess the model fidelity for this task. In this study, we examine the North Atlantic climate simulated by the NCEP Climate Forecast System, version 2 (CFSv2), using a set of ensemble decadal hindcasts and several 30-year simulations initialized from realistic ocean-atmosphere states. It is found that a substantial climate drift occurs in the first few years of the CFSv2 hindcasts, which represents a major systematic bias and may seriously affect the model's fidelity for decadal prediction. In particular, it is noted that a major reduction of the upper ocean salinity in the northern North Atlantic weakens the Atlantic meridional overturning circulation (AMOC) significantly. This freshening is likely caused by the excessive freshwater transport from the Arctic Ocean and weakened subtropical water transport by the North Atlantic Current. A potential source of the excessive freshwater is the quick melting of sea ice, which also causes unrealistically thin ice cover in the Arctic Ocean. Our sensitivity experiments with adjusted sea ice albedo parameters produce a sustainable ice cover with realistic thickness distribution. It also leads to a moderate increase of the AMOC strength. This study suggests that a realistic freshwater balance, including a proper sea ice feedback, is crucial for simulating the North Atlantic climate and its variability.
Gilmer, Todd P; Stefancic, Ana; Katz, Marian L; Sklar, Marisa; Tsemberis, Sam; Palinkas, Lawrence A
2014-11-01
Permanent supported housing programs are being implemented throughout the United States. This study examined the relationship between fidelity to the Housing First model and residential outcomes among clients of full service partnerships (FSPs) in California. This study had a mixed-methods design. Quantitative administrative and survey data were used to describe FSP practices and to examine the association between fidelity to Housing First and residential outcomes in the year before and after enrollment of 6,584 FSP clients in 86 programs. Focus groups at 20 FSPs provided qualitative data to enhance the understanding of these findings with actual accounts of housing-related experiences in high- and low-fidelity programs. Prior to enrollment, the mean days of homelessness were greater at high-fidelity than at low-fidelity FSPs (101 versus 46 days). After adjustment for individual characteristics, the analysis found that days spent homeless after enrollment declined by 87 at high-fidelity programs and by 34 at low-fidelity programs. After adjustment for days spent homeless before enrollment, days spent homeless after enrollment declined by 63 at high-fidelity programs and by 53 at low-fidelity programs. After enrollment, clients at high-fidelity programs spent more than 60 additional days in apartments than clients at low-fidelity programs. Differences were found between high- and low-fidelity FSPs in client choice in housing and how much clients' goals were considered in housing placement. Programs with greater fidelity to the Housing First model enrolled clients with longer histories of homelessness and placed most of them in apartments.
A Novel Multiscale Physics Based Progressive Failure Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.; Yarrington, Phillip W.
2008-01-01
A variable fidelity, multiscale, physics based finite element procedure for predicting progressive damage and failure of laminated continuous fiber reinforced composites is introduced. At every integration point in a finite element model, progressive damage is accounted for at the lamina-level using thermodynamically based Schapery Theory. Separate failure criteria are applied at either the global-scale or the microscale in two different FEM models. A micromechanics model, the Generalized Method of Cells, is used to evaluate failure criteria at the micro-level. The stress-strain behavior and observed failure mechanisms are compared with experimental results for both models.
The Impact of Different Sources of Fluctuations on Mutual Information in Biochemical Networks
Chevalier, Michael; Venturelli, Ophelia; El-Samad, Hana
2015-01-01
Stochastic fluctuations in signaling and gene expression limit the ability of cells to sense the state of their environment, transfer this information along cellular pathways, and respond to it with high precision. Mutual information is now often used to quantify the fidelity with which information is transmitted along a cellular pathway. Mutual information calculations from experimental data have mostly generated low values, suggesting that cells might have relatively low signal transmission fidelity. In this work, we demonstrate that mutual information calculations might be artificially lowered by cell-to-cell variability in both initial conditions and slowly fluctuating global factors across the population. We carry out our analysis computationally using a simple signaling pathway and demonstrate that in the presence of slow global fluctuations, every cell might have its own high information transmission capacity but that population averaging underestimates this value. We also construct a simple synthetic transcriptional network and demonstrate using experimental measurements coupled to computational modeling that its operation is dominated by slow global variability, and hence that its mutual information is underestimated by a population averaged calculation. PMID:26484538
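The population-averaging effect described above can be illustrated with a crude Gaussian-channel caricature. All numbers are made up, and folding the cell-to-cell gain spread into the noise term is an approximation for illustration only, not the paper's computational analysis.

```python
import numpy as np

# Model the pathway as a Gaussian channel r = g*s + noise. Within one cell
# the gain g is fixed; across the population it varies slowly, and in a
# pooled calculation that spread acts (approximately) like extra noise.
sig2, noise2, gain_var = 1.0, 0.01, 0.25   # illustrative variances

def mi_bits(effective_noise):
    # Shannon capacity of a Gaussian channel: 0.5 * log2(1 + SNR).
    return 0.5 * np.log2(1.0 + sig2 / effective_noise)

mi_per_cell = mi_bits(noise2)                    # gain known within a cell
mi_pooled = mi_bits(noise2 + gain_var * sig2)    # gain spread folded into noise
```

The per-cell capacity exceeds the pooled estimate, mirroring the paper's point that population averaging over slow global variability understates each cell's transmission capacity.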
Eastern Renewable Generation Integration Study: Redefining What's Possible for Renewable Energy
Bloom, Aaron
2018-01-16
NREL project manager Aaron Bloom introduces NREL's Eastern Renewable Generation Integration Study (ERGIS) and the high-performance computing capabilities and new methodologies that allowed NREL to model operations of the Eastern Interconnection at unprecedented fidelity. ERGIS shows that the Eastern Interconnection can balance the variability and uncertainty of wind and solar photovoltaics at a 5-minute level for one simulated year.
NASA Astrophysics Data System (ADS)
Pierson, Kyle D.; Hochhalter, Jacob D.; Spear, Ashley D.
2018-05-01
Systematic correlation analysis was performed between simulated micromechanical fields in an uncracked polycrystal and the known path of an eventual fatigue-crack surface based on experimental observation. Concurrent multiscale finite-element simulation of cyclic loading was performed using a high-fidelity representation of grain structure obtained from near-field high-energy x-ray diffraction microscopy measurements. An algorithm was developed to parameterize and systematically correlate the three-dimensional (3D) micromechanical fields from simulation with the 3D fatigue-failure surface from experiment. For comparison, correlation coefficients were also computed between the micromechanical fields and hypothetical, alternative surfaces. The correlation of the fields with hypothetical surfaces was found to be consistently weaker than that with the known crack surface, suggesting that the micromechanical fields of the cyclically loaded, uncracked microstructure might provide some degree of predictiveness for microstructurally small fatigue-crack paths, although the extent of such predictiveness remains to be tested. In general, gradients of the field variables exhibit stronger correlations with crack path than the field variables themselves. Results from the data-driven approach implemented here can be leveraged in future model development for prediction of fatigue-failure surfaces (for example, to facilitate univariate feature selection required by convolution-based models).
NASA Astrophysics Data System (ADS)
Gochis, D. J.; Dugger, A. L.; Karsten, L. R.; Barlage, M. J.; Sampson, K. M.; Yu, W.; Pan, L.; McCreight, J. L.; Howard, K.; Busto, J.; Deems, J. S.
2017-12-01
Hydrometeorological processes vary over comparatively short length scales in regions of complex terrain such as the southern Rocky Mountains. Temperature, precipitation, wind, and solar radiation can vary significantly across elevation gradients, terrain landforms, and land cover conditions throughout the region. Capturing such variability in hydrologic models can necessitate so-called 'hyper-resolution' spatial meshes with effective element spacings of less than 100 m. However, it is often difficult to obtain high-quality meteorological forcings at those resolutions in such regions, which can result in significant uncertainty in fundamental hydrologic model inputs. In this study we examine the comparative influences of meteorological forcing data fidelity and spatial resolution on seasonal simulations of snowpack evolution, runoff, and streamflow in a set of high mountain watersheds in southern Colorado. We use the operational NOAA National Water Model configuration of the community WRF-Hydro system as a baseline and compare against it additional model scenarios with differing specifications of meteorological forcing data: with and without topographic downscaling adjustments applied, with and without experimental high-resolution radar-derived precipitation estimates, and with WRF-Hydro configurations of progressively finer spatial resolution. The results suggest that meteorological downscaling techniques exert a significant influence on the spatial distributions of meltout and runoff timing. The use of radar-derived precipitation clearly affects hydrologic simulation skill compared with the use of coarser-resolution background precipitation analyses. Advantages and disadvantages of progressively higher-resolution model configurations, both in terms of computational requirements and model fidelity, are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, J.; Lacava, W.; Austin, J.
2015-02-01
This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity, including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time but may not provide a commensurate increase in the value of the results. Essential design parameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for minimum model fidelities are provided.
Visual long-term memory has the same limit on fidelity as visual working memory.
Brady, Timothy F; Konkle, Talia; Gill, Jonathan; Oliva, Aude; Alvarez, George A
2013-06-01
Visual long-term memory can store thousands of objects with surprising visual detail, but just how detailed are these representations, and how can one quantify this fidelity? Using the property of color as a case study, we estimated the precision of visual information in long-term memory, and compared this with the precision of the same information in working memory. Observers were shown real-world objects in random colors and were asked to recall the colors after a delay. We quantified two parameters of performance: the variability of internal representations of color (fidelity) and the probability of forgetting an object's color altogether. Surprisingly, the fidelity of color information in long-term memory was comparable to the asymptotic precision of working memory. These results suggest that long-term memory and working memory may be constrained by a common limit, such as a bound on the fidelity required to retrieve a memory representation.
NPSS Multidisciplinary Integration and Analysis
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Rasche, Joseph; Simons, Todd A.; Hoyniak, Daniel
2006-01-01
The objective of this task was to enhance the capability of the Numerical Propulsion System Simulation (NPSS) by expanding its reach into the high-fidelity multidisciplinary analysis area. This task investigated numerical techniques to convert compressor blades from cold static to hot running geometry. Blade deformations were calculated iteratively, coupling high-fidelity flow simulations with high-fidelity structural analysis of the compressor blade. The flow simulations were performed with the Advanced Ducted Propfan Analysis (ADPAC) code, while structural analyses were performed with the ANSYS code. High-fidelity analyses were used to evaluate the effects on performance of variations in tip clearance, uncertainty in manufacturing tolerance, and variable inlet guide vane scheduling, as well as the effects of rotational speed on the hot running geometry of the compressor blades.
High Fidelity System Simulation of Multiple Components in Support of the UEET Program
NASA Technical Reports Server (NTRS)
Plybon, Ronald C.; VanDeWall, Allan; Sampath, Rajiv; Balasubramaniam, Mahadevan; Mallina, Ramakrishna; Irani, Rohinton
2006-01-01
The High Fidelity System Simulation effort has addressed various important objectives to enable additional capability within the NPSS framework. The scope emphasized the High Pressure Turbine and High Pressure Compressor components. Initial effort was directed at developing and validating an intermediate-fidelity NPSS model using PD geometry, then extended to a high-fidelity NPSS model by overlaying detailed geometry to validate CFD against rig data. Both "feed-forward" and "feedback" approaches to analysis zooming were employed to enable system simulation capability in NPSS, and each has its own benefits and applicability. "Feedback" zooming allows information from a high-fidelity analysis to flow up and update the NPSS model results by forcing the NPSS solver to converge to the high-fidelity predictions. This approach is effective in improving the accuracy of the NPSS model; however, it can only be used where there is a clear physics-based strategy for flowing the high-fidelity results up to the NPSS system model. The "feed-forward" zooming approach is more broadly useful: it enables detailed analysis at early stages of design for a specified set of critical operating points and uses those analysis results to drive design decisions early in the development process.
How to teach emergency procedural skills in an outdoor environment using low-fidelity simulation.
Saxon, Kathleen D; Kapadia, Alison P R; Juneja, Nadia S; Bassin, Benjamin S
2014-03-01
Teaching emergency procedural skills in a wilderness setting can be logistically challenging. To teach these skills as part of a wilderness medicine elective for medical students, we designed an outdoor simulation session with low-fidelity models. The session involved 6 stations in which procedural skills were taught using homemade low-fidelity simulators. At each station, the students encountered a "victim" who required an emergency procedure that was performed using the low-fidelity model. The models are easy and inexpensive to construct, and their design and implementation in the session is described here. Using low-fidelity simulation models in an outdoor setting is an effective teaching tool for emergency wilderness medicine procedures and can easily be reproduced in future wilderness medicine courses.
Secure Continuous Variable Teleportation and Einstein-Podolsky-Rosen Steering
NASA Astrophysics Data System (ADS)
He, Qiongyi; Rosales-Zárate, Laura; Adesso, Gerardo; Reid, Margaret D.
2015-10-01
We investigate the resources needed for secure teleportation of coherent states. We extend continuous variable teleportation to include quantum teleamplification protocols that allow nonunity classical gains and a preamplification or postattenuation of the coherent state. We show that, for arbitrary Gaussian protocols and a significant class of Gaussian resources, two-way steering is required to achieve a teleportation fidelity beyond the no-cloning threshold. This provides an operational connection between Gaussian steerability and secure teleportation. We present practical recipes suggesting that heralded noiseless preamplification may enable high-fidelity heralded teleportation, using minimally entangled yet steerable resources.
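For reference, the two fidelity benchmarks invoked in this abstract are standard in the continuous-variable teleportation literature (stated here from general knowledge, not taken from the paper itself):

```latex
% Average teleportation fidelity benchmarks for input coherent states:
\bar{F}_{\text{classical}} = \frac{1}{2}
\qquad \text{(best measure-and-prepare strategy)},
\qquad
\bar{F}_{\text{no-cloning}} = \frac{2}{3}
\qquad \text{(above this, the receiver holds the best existing copy)}.
```

Exceeding 1/2 certifies genuine quantum teleportation; exceeding 2/3 is the "secure" threshold the abstract ties to two-way steering.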
Edmunds, Julie; Ditty, Matthew; Watkins, Jessica; Walsh, Lucia; Marcus, Steven; Kendall, Philip
2013-01-01
Among the challenges facing the mental health field are the dissemination and implementation of evidence-based practices. The present study investigated the relationships between inner context variables (i.e., adopter characteristics and individual perceptions of intra-organizational factors) and two implementation outcomes – independently rated therapist fidelity on a performance-based role-play (i.e., adherence and skill) and self-reported penetration of cognitive behavioral therapy for youth anxiety following training. A significant relationship was found between inner context variables and fidelity. Specifically, adopter characteristics were associated with adherence and skill; individual perceptions of intra-organizational factors were associated with adherence. Inner context variables were not associated with penetration. Future directions are discussed. PMID:24202067
State resolved vibrational relaxation modeling for strongly nonequilibrium flows
NASA Astrophysics Data System (ADS)
Boyd, Iain D.; Josyula, Eswar
2011-05-01
Vibrational relaxation is an important physical process in hypersonic flows. Activation of the vibrational mode affects the fundamental thermodynamic properties, and finite-rate relaxation can reduce the degree of dissociation of a gas. Low-fidelity models of vibrational activation employ a relaxation time to capture the process at a macroscopic level. High-fidelity, state-resolved models have been developed for use in continuum gas dynamics simulations based on computational fluid dynamics (CFD). By comparison, such models are not as common for use with the direct simulation Monte Carlo (DSMC) method. In this study, a high-fidelity, state-resolved vibrational relaxation model is developed for the DSMC technique. The model is based on the forced harmonic oscillator approach, in which multi-quantum transitions may become dominant at high temperature. Results obtained for integrated rate coefficients from the DSMC model are consistent with the corresponding CFD model. Relaxation results obtained with the high-fidelity DSMC model show significantly less excitation of upper vibrational levels than the standard, lower-fidelity DSMC vibrational relaxation model. Application of the new DSMC model to a Mach 7 normal shock wave in carbon monoxide provides better agreement with experimental measurements than the standard DSMC relaxation model.
Stimulated Brillouin scattering continuous wave phase conjugation in step-index fiber optics.
Massey, Steven M; Spring, Justin B; Russell, Timothy H
2008-07-21
Continuous wave (CW) stimulated Brillouin scattering (SBS) phase conjugation in step-index optical fibers was studied experimentally and modeled as a function of fiber length. A phase conjugate fidelity of over 80% was measured from SBS in a 40 m fiber using a pinhole technique. Fidelity decreases with fiber length, and a fiber with a numerical aperture (NA) of 0.06 was found to produce good phase conjugation fidelity over longer lengths than a fiber with 0.13 NA. Modeling and experiment support previous work showing that the maximum interaction length yielding a high-fidelity phase conjugate beam is inversely proportional to the square of the fiber NA, but they find that fidelity remains high over much longer fiber lengths than previous models calculated. Conditions for SBS beam cleanup in step-index fibers are discussed.
A multi-fidelity analysis selection method using a constrained discrete optimization formulation
NASA Astrophysics Data System (ADS)
Stults, Ian C.
The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. 
These experiments found that model uncertainty present in analyses with 4 or fewer input variables could be effectively quantified using a strategic distribution creation method; if more than 4 input variables exist, a Frontier Finding Particle Swarm Optimization should instead be used. Once model uncertainty in contributing analysis code choices has been quantified, a selection method is required to determine which of these choices should be used in simulations. Because much of the selection done for engineering problems is driven by the physics of the problem, such problems are poor candidates for testing the true fitness of a candidate selection method: the variability of moderate- and high-dimensional problems can often be reduced to only a few dimensions, and scalability often cannot be easily addressed. For these reasons a simple academic function was created for the uncertainty quantification, and a canonical form of the Fidelity Selection Problem (FSP) was created. Fifteen best- and worst-case scenarios were identified in an effort to challenge the candidate selection methods both with respect to the characteristics of the tradeoff between time cost and model uncertainty and with respect to the stringency of the constraints and problem dimensionality. The results from this experiment show that a Genetic Algorithm (GA) was able to consistently find the correct answer, but under certain circumstances, a discrete form of Particle Swarm Optimization (PSO) was able to find the correct answer more quickly. To better illustrate how the uncertainty quantification and discrete optimization might be conducted for a "real world" problem, an illustrative example was conducted using gas turbine engines.
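A toy instance of the Fidelity Selection Problem, solved with a small genetic algorithm, can be sketched as follows. The costs, uncertainties, budget, and GA settings are all invented for illustration; the dissertation's canonical FSP and GA are more elaborate.

```python
import itertools
import random

random.seed(0)

# Hypothetical FSP instance: five contributing analyses, three fidelity
# levels each. Higher fidelity costs more time but less model uncertainty.
time_cost = [(1, 4, 9)] * 5
uncert = [(0.30, 0.10, 0.02)] * 5
budget = 25

def fitness(choice):
    t = sum(time_cost[i][c] for i, c in enumerate(choice))
    u = sum(uncert[i][c] for i, c in enumerate(choice))
    return float("inf") if t > budget else u   # minimize uncertainty in budget

# Tiny genetic algorithm over the discrete fidelity choices.
pop = [[random.randrange(3) for _ in range(5)] for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness)
    elite = pop[:10]                           # elitism: keep the best
    children = []
    while len(children) < 30:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, 5)
        child = a[:cut] + b[cut:]              # one-point crossover
        if random.random() < 0.3:              # mutation
            child[random.randrange(5)] = random.randrange(3)
        children.append(child)
    pop = elite + children
best = min(pop, key=fitness)

# The instance is small enough (3^5 states) to check against brute force.
exact = min(itertools.product(range(3), repeat=5), key=lambda c: fitness(c))
```

For realistic numbers of analyses and fidelity levels the brute-force check becomes infeasible, which is why the discrete GA/PSO search matters.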
A domain-decomposed multi-model plasma simulation of collisionless magnetic reconnection
NASA Astrophysics Data System (ADS)
Datta, I. A. M.; Shumlak, U.; Ho, A.; Miller, S. T.
2017-10-01
Collisionless magnetic reconnection is a process relevant to many areas of plasma physics in which energy stored in magnetic fields within highly conductive plasmas is rapidly converted into kinetic and thermal energy. Both in natural phenomena such as solar flares and terrestrial aurora as well as in magnetic confinement fusion experiments, the reconnection process is observed on timescales much shorter than those predicted by a resistive MHD model. As a result, this topic is an active area of research in which plasma models with varying fidelity have been tested in order to understand the proper physics explaining the reconnection process. In this research, a hybrid multi-model simulation employing the Hall-MHD and two-fluid plasma models on a decomposed domain is used to study this problem. The simulation is set up using the WARPXM code developed at the University of Washington, which uses a discontinuous Galerkin Runge-Kutta finite element algorithm and implements boundary conditions between models in the domain to couple their variable sets. The goal of the current work is to determine the parameter regimes most appropriate for each model to maintain sufficient physical fidelity over the whole domain while minimizing computational expense. This work is supported by a Grant from US AFOSR.
Proofreading of DNA polymerase: a new kinetic model with higher-order terminal effects
NASA Astrophysics Data System (ADS)
Song, Yong-Shun; Shu, Yao-Gen; Zhou, Xin; Ou-Yang, Zhong-Can; Li, Ming
2017-01-01
The fidelity of DNA replication by DNA polymerase (DNAP) has long been an important issue in biology. While numerous experiments have revealed details of the molecular structure and working mechanism of DNAP, which consists of both a polymerase site and an exonuclease (proofreading) site, there have been only a few theoretical studies of the fidelity issue. The first model which explicitly considered both sites was proposed in the 1970s, and its basic idea was widely accepted by later models. However, none of these models systematically investigated the dominant factor in DNAP fidelity, i.e. the higher-order terminal effects through which the polymerization pathway and the proofreading pathway coordinate to achieve high fidelity. In this paper, we propose a new and comprehensive kinetic model of DNAP based on some recent experimental observations, which includes previous models as special cases. We present a rigorous and unified treatment of the corresponding steady-state kinetic equations for terminal effects of any order, and derive analytical expressions for fidelity in terms of kinetic parameters under biologically relevant conditions. These expressions offer new insights into how the higher-order terminal effects contribute substantially to fidelity in an order-by-order way, and also show that the polymerization-and-proofreading mechanism is dominated by only a very few key parameters. We then apply these results to calculate the fidelity of some real DNAPs, and the results are in good agreement with previous intuitive estimates given by experimentalists.
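The first-order polymerase-plus-exonuclease picture, in the spirit of the 1970s-style models this abstract contrasts itself with, can be sketched numerically. The rate constants below are illustrative orders of magnitude, not measured values; the paper's point is precisely that higher-order terminal effects go beyond this one-terminal-base treatment.

```python
# Illustrative first-order kinetics: the polymerase site discriminates
# right vs wrong incorporation, extension past a mismatched terminus is
# slow, and the exonuclease site preferentially excises wrong bases.
k_pol_right, k_pol_wrong = 100.0, 1e-3    # incorporation rates (s^-1)
k_ext_right, k_ext_wrong = 50.0, 0.05     # extension rates past the new base
k_exo_right, k_exo_wrong = 0.01, 10.0     # excision rates of the new base

# Probability a just-added base is extended (kept) before it is excised.
p_keep_right = k_ext_right / (k_ext_right + k_exo_right)
p_keep_wrong = k_ext_wrong / (k_ext_wrong + k_exo_wrong)

insertion_selectivity = k_pol_right / k_pol_wrong     # polymerase site alone
proofreading_gain = p_keep_right / p_keep_wrong       # exonuclease contribution
fidelity = insertion_selectivity * proofreading_gain  # overall right:wrong ratio
```

With these numbers the exonuclease multiplies the insertion selectivity by a factor of a few hundred; the paper's higher-order analysis asks how terminal bases beyond the last one modify this product.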
NASA Astrophysics Data System (ADS)
Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro
2018-06-01
A multi-fidelity optimization technique using an efficient global optimization process with a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide on additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to the aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations. Of the three methods compared, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
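The expected-improvement criterion used to select additional samples can be sketched as follows; the Gaussian predictor here stands in for the hybrid kriging/RBF surrogate, and the numbers are illustrative.

```python
# Expected improvement (EI) for minimization, as used in efficient
# global optimization: given the surrogate's mean and standard
# deviation at a candidate point, EI balances exploitation (low mean)
# against exploration (high uncertainty).
import math

def expected_improvement(mu, sigma, f_best):
    # mu, sigma: surrogate prediction at a candidate point;
    # f_best: best (lowest) high-fidelity value observed so far.
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # normal cdf
    return (f_best - mu) * Phi + sigma * phi
```

The next sample is placed where EI is maximized, which is how the method decides between refining near the current optimum and probing poorly modeled regions.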
Simple uncertainty propagation for early design phase aircraft sizing
NASA Astrophysics Data System (ADS)
Lenz, Annelise
Many designers and systems analysts are aware of the uncertainty inherent in their aircraft sizing studies; however, few incorporate methods to address and quantify this uncertainty. Many aircraft design studies use semi-empirical predictors that are based on a historical database and contain uncertainty -- a portion of which can be measured and quantified. In cases where historical information is not available, surrogate models built from higher-fidelity analyses often provide predictors for design studies where the computational cost of directly using the high-fidelity analyses is prohibitive. These surrogate models contain uncertainty, some of which is quantifiable. However, rather than quantifying this uncertainty, many designers merely include a safety factor or design margin in the constraints to account for the variability between the predicted and actual results. This can become problematic if a designer does not estimate the amount of variability correctly, which can then result in either an "over-designed" or "under-designed" aircraft. "Under-designed" and some "over-designed" aircraft will likely require design changes late in the process and will ultimately require more time and money to create; other "over-designed" aircraft concepts may not require design changes, but could end up being more costly than necessary. Including and propagating uncertainty early in the design phase, so designers can quantify some of the errors in the predictors, could help mitigate the extent of this additional cost. The method proposed here seeks to provide a systematic approach for characterizing a portion of the uncertainties that designers are aware of and propagating them throughout the design process in a procedure that is easy to understand and implement. 
Using Monte Carlo simulations that sample from quantified distributions will allow a systems analyst to use a carpet plot-like approach to make statements like: "The aircraft is 'P'% likely to weigh 'X' lbs or less, given the uncertainties quantified" without requiring the systems analyst to have substantial knowledge of probabilistic methods. A semi-empirical sizing study of a small single-engine aircraft serves as an example of an initial version of this simple uncertainty propagation. The same approach is also applied to a variable-fidelity concept study using a NASA-developed transonic Hybrid Wing Body aircraft.
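A minimal sketch of the proposed propagation: sample the quantified input distributions, push each sample through a sizing predictor, and read percentile statements off the empirical distribution. The toy empty-weight-fraction relation and the distributions below are hypothetical stand-ins, not the predictors used in the study.

```python
# Monte Carlo propagation through a toy sizing relation: gross weight
# from payload, fuel, and an empty-weight fraction. Distributions are
# illustrative, assumed quantified from historical/surrogate error.
import random

def gross_weight(we_frac, payload, fuel):
    # Toy closure: W = (payload + fuel) / (1 - empty-weight fraction).
    return (payload + fuel) / (1.0 - we_frac)

random.seed(1)
samples = []
for _ in range(10000):
    we = random.gauss(0.62, 0.02)      # empty-weight fraction (toy)
    fuel = random.gauss(400.0, 30.0)   # fuel weight, lb (toy)
    samples.append(gross_weight(we, 600.0, fuel))

samples.sort()
p90 = samples[int(0.9 * len(samples))]
# Statement of the carpet-plot kind: "The aircraft is 90% likely to
# weigh p90 lb or less, given the uncertainties quantified."
```

No probabilistic expertise is needed to read the result: the sorted sample array is the empirical CDF, and any percentile is a direct lookup.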
Joseph M. Wunderle, Jr.; Patricia K. Lebow; Jennifer D. White; Dave Currie; David N. Ewert
2014-01-01
Distribution of nonbreeding migrant birds in relation to variation in food availability has been hypothesized to result from the interaction of dominance hierarchies and variable movement responses, which together may have sex- and age-specific consequences. We predicted that site fidelity, movements, and abundance of Kirtland's Warblers (Setophaga kirtlandii...
NASA Astrophysics Data System (ADS)
Mohrfeld-Halterman, J. A.; Uddin, M.
2016-07-01
We describe in this paper the development of a high-fidelity vehicle aerodynamic model that fits wind tunnel test data over a wide range of vehicle orientations. We also present a comparison between the effects of this proposed model and those of a conventional quasi-steady-state aerodynamic model on race vehicle simulation results. This is done by implementing both models independently in multi-body quasi-steady-state simulations to determine the effects of the high-fidelity aerodynamic model on race vehicle performance metrics. The quasi-steady-state vehicle simulation is developed with a multi-body NASCAR Truck vehicle model, and simulations are conducted for three different types of NASCAR race track: a short track, a one-and-a-half-mile intermediate track, and a higher-speed, two-mile intermediate track. For each track simulation, the effects of the aerodynamic model on handling, maximum corner speed, and drive force metrics are analysed. The high-fidelity model is shown to reduce the aerodynamic model error relative to the conventional model, and its increased accuracy is found to have realisable effects on the performance metric predictions for the intermediate tracks resulting from the quasi-steady-state simulation.
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render feasible the use of high-fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, as measured by upper bounds on the reduction errors over the envisaged range of ROM application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases in which conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models, such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the predictions of the original model and the ROM over the full range of model application conditions are upper-bounded with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that surrogate model predictions at points not included in the training will be bounded by the error estimated from the fitting residual. 
Dimensionality reduction techniques, however, employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower-dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshot variations. Once determined, ROM application involves constraining the variables to the active subspaces. In doing so, the contribution from the discarded components of the variables can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends the theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high-fidelity model or with other models of lower fidelity, can be assessed. This provides insight to the analyst on the type of snapshots required to reach a reduction that satisfies user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus is on reducing the effective dimensionality of the various data streams, such as the cross-section data and the neutron flux. The developed methods are applied to representative assembly-level calculations, where the sizes of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.)
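The snapshot-projection idea can be sketched with a plain SVD: the leading left singular vectors form the active subspace, and the discarded singular values bound the projection error on the snapshot set. The dimensions, effective rank, and noise level below are illustrative, not taken from the thesis.

```python
# Active-subspace sketch via SVD of a snapshot matrix. We build a
# hypothetical 200-variable field with effective dimension 3 plus
# small noise, then keep enough singular vectors to capture 99.99%
# of the snapshot energy.
import numpy as np

rng = np.random.default_rng(0)
basis = rng.standard_normal((200, 3))        # hidden low-dim structure
coeffs = rng.standard_normal((3, 50))
snapshots = basis @ coeffs + 1e-6 * rng.standard_normal((200, 50))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999) + 1)  # retained dimension
Ur = U[:, :r]                                 # active subspace basis

# Projection residual on the snapshot set equals the tail of the
# singular-value spectrum (Frobenius norm).
err = np.linalg.norm(snapshots - Ur @ (Ur.T @ snapshots))
bound = np.sqrt(np.sum(s[r:] ** 2))
```

On the training snapshots the bound is exact; the probabilistic upper bounds discussed in the thesis extend this kind of estimate to points not in the snapshot set.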
Continuous-variable quantum teleportation in bosonic structured environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
He Guangqiang; Zhang Jingtao; Zhu Jun
2011-09-15
The effects of the dynamics of continuous-variable entanglement in various kinds of environments on quantum teleportation are quantitatively investigated. Under the sole assumption of weak system-reservoir interaction, the evolution of the teleportation fidelity is derived analytically and plotted numerically in terms of environment parameters, including the reservoir temperature and its spectral density, without Markovian or rotating-wave approximations. We find that the teleportation fidelity is a monotonically decreasing function of time for Markovian interactions in Ohmic-like environments, while it oscillates for non-Markovian ones. According to these dynamical laws, teleportation with better performance can be implemented by selecting the appropriate time.
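For orientation, the standard lossless benchmark (not a result of this paper) is the Braunstein-Kimble fidelity for teleporting a coherent state with a two-mode squeezed vacuum resource of squeezing parameter $r$:

```latex
% Ideal Braunstein--Kimble teleportation of a coherent state with a
% two-mode squeezed vacuum resource (squeezing parameter r):
F = \frac{1}{1 + e^{-2r}}
```

Here $F = 1/2$ at $r = 0$ recovers the classical limit and $F \to 1$ as $r \to \infty$, so any environment-induced degradation of the effective squeezing maps directly onto a fidelity loss, whether the decay is monotonic (Markovian) or oscillatory (non-Markovian).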
Uncertainty quantification for PZT bimorph actuators
NASA Astrophysics Data System (ADS)
Bravo, Nikolas; Smith, Ralph C.; Crews, John
2018-03-01
In this paper, we discuss the development of a high-fidelity model for a PZT bimorph actuator used for micro-air vehicles, including the Robobee. We developed the model using the homogenized energy model (HEM) framework, which quantifies the nonlinear, hysteretic, and rate-dependent behavior inherent to PZT in dynamic operating regimes. We then discuss an inverse problem on the model and include local and global sensitivity analyses of the parameters in the high-fidelity model. Finally, we discuss the results of Bayesian inference and uncertainty quantification on the HEM.
Numerical exploration of dissimilar supersonic coaxial jets mixing
NASA Astrophysics Data System (ADS)
Dharavath, Malsur; Manna, P.; Chakraborty, Debasis
2015-06-01
The mixing of two coaxial supersonic dissimilar gases in a free jet environment is numerically explored. Three-dimensional RANS equations with a k-ε turbulence model are solved using commercial CFD software. Two important experimental cases (the RELIEF experiments), representing compressible mixing under scramjet operating conditions and for which detailed profiles of thermochemical variables are available, are taken as validation cases. Two convective Mach numbers, 0.16 and 0.70, are considered for the simulations. The computed growth rate, pitot pressure, and mass fraction profiles for both cases match the experimental values and other high-fidelity numerical results extremely well in both the near-field and far-field regions. For the higher convective Mach number, the predicted growth rate matches the empirical Dimotakis curve well, whereas for the lower convective Mach number the predicted growth rate is higher. It is shown that a well-resolved RANS calculation can capture the mixing of two supersonic dissimilar gases better than high-fidelity LES calculations.
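The convective Mach numbers quoted above follow the conventional two-stream mixing-layer definition (assuming, for this simple form, equal specific-heat ratios in the two streams):

```latex
% Convective Mach number of a two-stream compressible mixing layer:
% U_1, U_2 are the stream velocities and a_1, a_2 the sound speeds.
M_c = \frac{U_1 - U_2}{a_1 + a_2}
```

Low $M_c$ (here 0.16) behaves nearly incompressibly, while higher $M_c$ (0.70) exhibits the compressibility-induced suppression of growth rate captured by Dimotakis-type correlations.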
The Role of Model Fidelity in Understanding the Food-Energy-Water Nexus at the Asset Level
NASA Astrophysics Data System (ADS)
Tidwell, V. C.; Lowry, T. S.; Behery, S.; Macknick, J.; Yang, Y. C. E.
2017-12-01
An improved understanding of the food-energy-water nexus at the asset level (e.g., power plant, irrigation ditch, water utility) is necessary for the efficient management and operation of connected infrastructure systems. Interdependencies potentially influencing the operations of a particular asset can be numerous. For example, operations of energy and agricultural assets depend on the delivery of water, which in turn depends on the physical hydrology, river/reservoir operations, water rights, the networked water infrastructure, and other factors. A critical challenge becomes the identification of those linkages central to the analysis of the system. Toward this need, a case study was conducted centered on the San Juan River basin, a major tributary to the Colorado River. A unique opportunity was afforded by the availability of two sets of coupled models built on the same simulation platform but formulated at distinctly different fidelities. Comparative analysis was driven by statistically downscaled climate data from three global climate models (emission scenario RCP 8.5) and planned growth in regional water demand. Precipitation was partitioned between evaporation, runoff, and recharge using the Variable Infiltration Capacity (VIC) hydrologic model. Priority administration of small-scale water use of upland tributary flows was simulated using Colorado's StateMod model. Mainstem operations of the San Juan River, including releases from Navajo Reservoir, were subsequently modeled using RiverWare to estimate impacts on water deliveries, environmental flows, and interbasin transfers out to the year 2100. The models differ in spatial resolution, disaggregation of water use, infrastructure operations, and representation of system dynamics. Comparisons drawn between this suite of coupled models provide insight into the value of model fidelity relative to assessing asset vulnerability to a range of uncertain growth and climate futures. 
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.
ERIC Educational Resources Information Center
Patton, Michael Quinn
2016-01-01
Fidelity concerns the extent to which a specific evaluation sufficiently incorporates the core characteristics of the overall approach to justify labeling that evaluation by its designated name. Fidelity has traditionally meant implementing a model in exactly the same way each time following the prescribed steps and procedures. The essential…
Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blyth, Taylor S.; Avramova, Maria
The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named the Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects introduced by the presence of spacer grids in light water reactor (LWR) cores are dissected into four corresponding basic processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.
Challenges of Applying a Comprehensive Model of Intervention Fidelity
Bosak, Kelly; Pozehl, Bunny; Yates, Bernice
2014-01-01
Applying a comprehensive model of fidelity to interventions delivered by Information and Communication Technology (ICT) presents multiple challenges. Fidelity must be considered in the design, implementation, evaluation, and reporting of the intervention. The fidelity strategies must address the unique aspects of the technology, including training providers to instruct participants in using the technology and to provide standardized feedback, rather than delivering the intervention in person. Other challenges include the nonspecific effects resulting from participants accessing unintended content in interventions delivered over the Internet. ICT allows participant receipt and enactment of intervention skills to be assessed from electronic evidence rather than by in-person observation. Interventions using ICT, such as the Internet, are unique, and there is less control over participant interaction with the various electronic components. Monitoring participant use and providing standardized feedback on receipt and enactment of intervention skills are key to ensuring intervention fidelity. The final challenges involve evaluating and reporting fidelity. PMID:21474676
An Automatic Medium to High Fidelity Low-Thrust Global Trajectory Toolchain; EMTG-GMAT
NASA Technical Reports Server (NTRS)
Beeson, Ryne T.; Englander, Jacob A.; Hughes, Steven P.; Schadegg, Maximillian
2015-01-01
Solving the global optimization, low-thrust, multiple-flyby interplanetary trajectory problem with high-fidelity dynamical models requires an unreasonable amount of computational resources. A better approach, and one that is demonstrated in this paper, is a multi-step process whereby the aforementioned problem is solved at lower fidelity and this solution is used as an initial guess for a higher-fidelity solver. The framework presented in this work uses two tools developed by NASA Goddard Space Flight Center: the Evolutionary Mission Trajectory Generator (EMTG) and the General Mission Analysis Tool (GMAT). EMTG is a medium to medium-high fidelity low-thrust interplanetary global optimization solver, which now has the capability to automatically generate GMAT script files for seeding a high-fidelity solution using GMAT's local optimization capabilities. A discussion of the dynamical models as well as the thruster and power modeling for both EMTG and GMAT is given in this paper. Current capabilities are demonstrated with examples that highlight the toolchain's ability to efficiently solve the difficult low-thrust global optimization problem with little human intervention.
Surrogate based wind farm layout optimization using manifold mapping
NASA Astrophysics Data System (ADS)
Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester
2016-09-01
The high computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck to performing direct high-fidelity wind farm layout optimization (WFLO) using accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As a verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate-based methodology, and its performance was compared with that of direct optimization using the high-fidelity model. A significant reduction in computational cost was achieved using MM: a maximum reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the responses of the models and the number and position of the mapping points strongly influence the computational efficiency of the proposed method. As a proof of concept, realistic WFLO of a small 7-turbine wind farm was performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with a much smaller number of fine-model simulations.
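The fine/coarse pairing can be illustrated with a drastically simplified output-space correction in the spirit of manifold mapping: a cheap coarse model is corrected toward sparse fine-model evaluations, and the corrected model is then optimized. The two wake-deficit curves below are illustrative stand-ins for the Jensen variants with different decay coefficients, not the paper's models or the full MM algorithm.

```python
# Toy multi-fidelity correction: two Jensen-like wake-deficit curves
# (velocity deficit vs. normalized turbine spacing x) that differ
# only in an assumed decay constant.

def coarse(x):
    # Cheap model with a deliberately wrong decay constant.
    return 1.0 / (1.0 + 0.3 * x) ** 2

def fine(x):
    # Expensive model; plays the role of the "truth" here.
    return 1.0 / (1.0 + 0.5 * x) ** 2

def corrected(x, x_ref):
    # Local multiplicative correction anchored at a single fine-model
    # evaluation at x_ref; full manifold mapping updates such a
    # correction iteratively from several mapping points.
    return coarse(x) * (fine(x_ref) / coarse(x_ref))
```

One fine-model call anchors the correction; near the anchor, the corrected coarse model tracks the fine model far better than the raw coarse model, which is the mechanism behind the reported reduction in fine-model simulations.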
Multi-level emulation of complex climate model responses to boundary forcing data
NASA Astrophysics Data System (ADS)
Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter
2018-04-01
Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling, or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of the output through the use of low-complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower-complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
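The emulation-plus-dimensionality-reduction combination can be sketched as follows: project the high-dimensional output fields onto leading principal components, then emulate the component scores from the fast model's variables. Plain linear regression stands in for the paper's emulators, and the synthetic data dimensions are illustrative.

```python
# Emulator sketch: reduce 500-point output fields to a few principal
# components, regress the scores on 4 fast-model predictors, and
# reconstruct full fields for new inputs. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 4))                     # fast-model inputs
W = rng.standard_normal((4, 500))
Y = X @ W + 0.01 * rng.standard_normal((60, 500))    # slow-model fields

Xmean, Ymean = X.mean(axis=0), Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Ymean, full_matrices=False)
k = 4                                                # retained components
scores = U[:, :k] * s[:k]                            # reduced outputs
beta, *_ = np.linalg.lstsq(X - Xmean, scores, rcond=None)

def emulate(x_new):
    # Predict component scores, then reconstruct the spatial field.
    return (x_new - Xmean) @ beta @ Vt[:k] + Ymean
```

The reduction makes the emulation problem tractable: instead of 500 separate emulators, only `k` score emulators are trained, and the basis `Vt` restores the spatial pattern.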
Costello, John P; Olivieri, Laura J; Krieger, Axel; Thabit, Omar; Marshall, M Blair; Yoo, Shi-Joon; Kim, Peter C; Jonas, Richard A; Nath, Dilip S
2014-07-01
The current educational approach for teaching congenital heart disease (CHD) anatomy to students involves instructional tools and techniques that have significant limitations. This study sought to assess the feasibility of utilizing present-day three-dimensional (3D) printing technology to create high-fidelity synthetic heart models with ventricular septal defect (VSD) lesions and applying these models to a novel, simulation-based educational curriculum for premedical and medical students. Archived, de-identified magnetic resonance images of five common VSD subtypes were obtained. These cardiac images were then segmented and built into 3D computer-aided design models using Mimics Innovation Suite software. An Objet500 Connex 3D printer was subsequently utilized to print a high-fidelity heart model for each VSD subtype. Next, a simulation-based educational curriculum using these heart models was developed and implemented in the instruction of 29 premedical and medical students. Assessment of this curriculum was undertaken with Likert-type questionnaires. High-fidelity VSD models were successfully created utilizing magnetic resonance imaging data and 3D printing. Following instruction with these high-fidelity models, all students reported significant improvement in knowledge acquisition (P < .0001), knowledge reporting (P < .0001), and structural conceptualization (P < .0001) of VSDs. It is feasible to use present-day 3D printing technology to create high-fidelity heart models with complex intracardiac defects. Furthermore, this tool forms the foundation for an innovative, simulation-based educational approach to teach students about CHD and creates a novel opportunity to stimulate their interest in this field.
Surrogate-based Analysis and Optimization
NASA Technical Reports Server (NTRS)
Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin
2005-01-01
A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, and techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
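The core SBAO loop can be sketched in a few lines: evaluate the expensive model at a small design of experiments, fit a cheap surrogate, and optimize the surrogate instead. The toy "high-fidelity" function and quadratic response surface below are illustrative, not the rocket-injector models of the paper.

```python
# Minimal surrogate-based optimization sketch: a quadratic response
# surface fitted by least squares to a handful of expensive samples,
# then minimized analytically. The expensive function is a toy.
import numpy as np

def expensive(x):
    # Stand-in for a high-fidelity analysis (e.g., a CFD run).
    return (x - 1.7) ** 2 + 0.5

xs = np.linspace(0.0, 3.0, 7)        # design of experiments
ys = expensive(xs)                   # 7 "expensive" evaluations
a, b, c = np.polyfit(xs, ys, 2)      # quadratic surrogate fit
x_opt = -b / (2 * a)                 # minimizer of the surrogate
```

Every subsequent sensitivity or trade study then queries the fitted polynomial at negligible cost; in practice the surrogate would be validated and refined with additional high-fidelity points near `x_opt`.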
Report of NPSAT1 Battery Thermal Contact Resistance Testing, Modeling and Simulation
2012-10-01
The lithium-ion battery is the spacecraft component with the smallest temperature range, 0 °C to 45 °C, during operation. Thermal analysis, however, can only provide adequate results if there is sufficient fidelity in the thermal model. Arguably, the values used in defining thermal coupling between components are the most difficult to estimate because of the many variables that define them. This document describes the work performed by the authors starting in the 2012 winter quarter as part of the SS3900 directed study course. The objectives of the study were to
NASA Technical Reports Server (NTRS)
Whiffen, Gregory J.
2006-01-01
Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories, and Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.
Entanglement of coherent superposition of photon-subtraction squeezed vacuum
NASA Astrophysics Data System (ADS)
Liu, Cun-Jin; Ye, Wei; Zhou, Wei-Dong; Zhang, Hao-Liang; Huang, Jie-Hui; Hu, Li-Yun
2017-10-01
A new kind of non-Gaussian quantum state is introduced by applying a nonlocal coherent superposition (τa + sb)^m of photon subtraction to two single-mode squeezed vacuum states, and its entanglement properties are investigated in terms of the degree of entanglement and the average fidelity of quantum teleportation. The state can be seen as a single-variable Hermite-polynomial excited squeezed vacuum state, and its normalization factor is related to the Legendre polynomial. It is shown that, for τ = s, the maximum fidelity can be achieved, even beyond the classical limit (1/2), only for even-order operations m and equivalent squeezing parameters in a certain region. However, the maximum entanglement is achieved for squeezing parameters with a π phase difference. These results indicate that the optimal realizations of fidelity and entanglement can differ from one another. In addition, the parameter τ/s has an obvious effect on entanglement and fidelity.
NASA Astrophysics Data System (ADS)
Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem
2017-11-01
Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
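The surrogate-plus-optimizer coupling can be sketched with two toy response surfaces and a penalty on the water-quality constraint; simple random search stands in for the paper's genetic algorithm, and both functions below are hypothetical stand-ins for the trained artificial neural network, not CE-QUAL-W2 outputs.

```python
# Constrained generation-scheduling sketch: maximize power subject to
# a dissolved-oxygen (DO) limit, handled as a penalty. All response
# surfaces and limits are illustrative.
import random

def power(q):
    # Toy power output (MW) as a function of turbine release q.
    return 10.0 * q - 0.5 * q * q

def dissolved_oxygen(q):
    # Toy DO response (mg/L): larger releases degrade downstream DO.
    return 8.0 - 0.4 * q

def penalized(q, do_limit=5.0):
    # Penalty steers the search away from DO-violating releases.
    violation = max(0.0, do_limit - dissolved_oxygen(q))
    return power(q) - 1e3 * violation

random.seed(3)
best_q = max((random.uniform(0.0, 12.0) for _ in range(5000)),
             key=penalized)
```

In this toy, the unconstrained power optimum sits at q = 10, but the DO limit binds at q = 7.5, so the search settles at the constraint boundary; tightening the DO limit lowers the achievable generation, mirroring the trade-off reported in the study.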
Athanasiou, Thanos; Long, Susannah J; Beveridge, Iain; Sevdalis, Nick
2017-01-01
Objectives Frontline insights into care delivery correlate with patients’ clinical outcomes. These outcomes might be improved through near-real time identification and mitigation of staff concerns. We evaluated the effects of a prospective frontline surveillance system on patient and team outcomes. Design Prospective, stepped wedge, non-randomised, cluster controlled trial; prespecified per protocol analysis for high-fidelity intervention delivery. Participants Seven interdisciplinary medical ward teams from two hospitals in the UK. Intervention Prospective clinical team surveillance (PCTS): structured daily interdisciplinary briefings to capture staff concerns, with organisational facilitation and feedback. Main measures The primary outcome was excess length of stay (eLOS): an admission more than 24 hours above the local average for comparable patients. Secondary outcomes included safety and teamwork climates, and incident reporting. Mixed-effects models adjusted for time effects, age, comorbidity, palliation status and ward admissions. Safety and teamwork climates were measured with the Safety Attitudes Questionnaire. High-fidelity PCTS delivery comprised high engagement and high briefing frequency. Results Implementation fidelity was variable, both in briefing frequency (median 80% working days/month, IQR 65%–90%) and engagement (median 70 issues/ward/month, IQR 34–113). 1714/6518 (26.3%) intervention admissions had eLOS versus 1279/4927 (26.0%) control admissions, an absolute risk increase of 0.3%. PCTS increased eLOS in the adjusted intention-to-treat model (OR 1.32, 95% CI 1.10 to 1.58, p=0.003). Conversely, high-fidelity PCTS reduced eLOS (OR 0.79, 95% CI 0.67 to 0.94, p=0.006). High-fidelity PCTS also increased total, high-yield and non-nurse incident reports (incidence rate ratios 1.28–1.79, all p<0.002). Sustained PCTS significantly improved safety and teamwork climates over time. 
Conclusions This study highlighted the potential benefits and pitfalls of ward-level interdisciplinary interventions. While these interventions can improve care delivery in complex, fluid environments, the manner of their implementation is paramount. Suboptimal implementation may have an unexpectedly negative impact on performance. Trial registration number ISRCTN 34806867 (http://www.isrctn.com/ISRCTN34806867). PMID:28720612
High fidelity quantum gates with vibrational qubits.
Berrios, Eduardo; Gruebele, Martin; Shyshlov, Dmytro; Wang, Lei; Babikov, Dmitri
2012-11-26
Physical implementation of quantum gates acting on qubits does not achieve a perfect fidelity of 1. The actual output qubit may not match the targeted output of the desired gate. According to theoretical estimates, intrinsic gate fidelities >99.99% are necessary so that error correction codes can be used to achieve perfect fidelity. Here we test what fidelity can be accomplished for a CNOT gate executed by a shaped ultrafast laser pulse interacting with vibrational states of the molecule SCCl2. This molecule has been used as a test system for low-fidelity calculations before. To make our test more stringent, we include vibrational levels that do not encode the desired qubits but are close enough in energy to interfere with population transfer by the laser pulse. We use two complementary approaches: optimal control theory determines what the best possible pulse can do; a more constrained physical model calculates what an experiment likely can do. Optimal control theory finds pulses with fidelity >0.9999, in excess of the quantum error correction threshold, within 8 × 10^4 iterations. On the other hand, the physical model achieves only 0.9992 after 8 × 10^4 iterations. Both calculations converge as an inverse power law toward unit fidelity after >10^2 iterations/generations. In principle, the fidelities necessary for quantum error correction are reachable with qubits encoded by molecular vibrations. In practice, it will be challenging with current laboratory instrumentation because of slow convergence past fidelities of 0.99.
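The inverse-power-law convergence noted above can be made concrete: if the infidelity scales as 1 − F ≈ c·n^(−p) in the iteration count n, a log-log least-squares fit recovers the exponent. A small sketch with synthetic data (not the paper's actual optimization runs):

```python
import math

def fit_power_law(iterations, fidelities):
    """Least-squares slope/intercept of log(1 - F) vs log(n); returns (c, p)."""
    xs = [math.log(n) for n in iterations]
    ys = [math.log(1.0 - f) for f in fidelities]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope   # 1 - F ~ c * n**(-p)

# Synthetic convergence history with c = 0.5, p = 0.8:
ns = [100, 1000, 10000, 80000]
fs = [1 - 0.5 * n ** -0.8 for n in ns]
c, p = fit_power_law(ns, fs)
print(round(p, 3))  # 0.8
```

A small p is exactly the "slow convergence past fidelities of 0.99" problem: each extra decimal digit of fidelity costs a multiplicative factor of ~10^(1/p) in iterations.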
NASA Technical Reports Server (NTRS)
Arnold, Steven M. (Technical Monitor); Bansal, Yogesh; Pindera, Marek-Jerzy
2004-01-01
The High-Fidelity Generalized Method of Cells is a new micromechanics model for unidirectionally reinforced periodic multiphase materials that was developed to overcome the original model's shortcomings. The high-fidelity version predicts the local stress and strain fields with dramatically greater accuracy relative to the original model through the use of a better displacement field representation. Herein, we test the high-fidelity model's predictive capability in estimating the elastic moduli of periodic composites characterized by repeating unit cells obtained by rotation of an infinite square fiber array through an angle about the fiber axis. Such repeating unit cells may contain a few or many fibers, depending on the rotation angle. In order to analyze such multi-inclusion repeating unit cells efficiently, the high-fidelity micromechanics model's framework is reformulated using the local/global stiffness matrix approach. The excellent agreement with the corresponding results obtained from the standard transformation equations confirms the new model's predictive capability for periodic composites characterized by multi-inclusion repeating unit cells lacking planes of material symmetry. Comparison of the effective moduli and local stress fields with the corresponding results obtained from the original Generalized Method of Cells dramatically highlights the original model's shortcomings for certain classes of unidirectional composites.
Orbit Stability of OSIRIS-REx in the Vicinity of Bennu Using a High-Fidelity Solar Radiation Model
NASA Technical Reports Server (NTRS)
Williams, Trevor; Hughes, Kyle; Mashiku, Alinda; Longuski, James
2015-01-01
The OSIRIS-REx mission (Origins Spectral Interpretation Resource Identification Security Regolith EXPlorer) is an asteroid sample return mission to Bennu (RQ36) that is scheduled to launch in 2016. The planned science operations preceding the sample retrieval involve operations in terminator orbits (the orbit plane is perpendicular to the Sun direction). Over longer durations, solar radiation pressure (SRP) perturbs the orbit, causing it to precess. Our work involves developing a high-fidelity SRP model to capture the perturbations during attitude changes, and designing a stable orbit from the high-fidelity model to analyze its stability over time.
NASA Astrophysics Data System (ADS)
Lee, H.
2016-12-01
Precipitation is one of the most important climate variables considered in studying regional climate. Nevertheless, how precipitation will respond to a changing climate, and even its mean state in the current climate, are not well represented in regional climate models (RCMs). Hence, comprehensive and mathematically rigorous methodologies to evaluate precipitation and related variables in multiple RCMs are required. The main objective of the current study is to evaluate the joint variability of climate variables related to model performance in simulating precipitation and condense multiple evaluation metrics into a single summary score. We use multi-objective optimization, a mathematical process that provides a set of optimal tradeoff solutions based on a range of evaluation metrics, to characterize the joint representation of precipitation, cloudiness and insolation in RCMs participating in the North American Regional Climate Change Assessment Program (NARCCAP) and the Coordinated Regional Climate Downscaling Experiment-North America (CORDEX-NA). We also leverage ground observations, NASA satellite data and the Regional Climate Model Evaluation System (RCMES). Overall, the quantitative comparison of joint probability density functions of the three variables indicates that the performance of each model differs markedly between sub-regions and also shows strong seasonal dependence. Because of the large variability across the models, it is important to evaluate models systematically and make future projections using only models showing relatively good performance. Our results indicate that the optimized multi-model ensemble always shows better performance than the arithmetic ensemble mean and may guide reliable future projections.
High-Fidelity Simulations of Electromagnetic Propagation and RF Communication Systems
2017-05-01
...addition to high-fidelity RF propagation modeling, lower-fidelity models, which are less computationally burdensome, are available via a C++ API... expensive to perform, requiring roughly one hour of computer time with 36 available cores and ray tracing performed by a single high-end GPU... ERDC TR-17-2, Military Engineering Applied Research: High-Fidelity Simulations of Electromagnetic Propagation and RF Communication
DDDAMS-based Urban Surveillance and Crowd Control via UAVs and UGVs
2015-12-04
...for crowd dynamics modeling by incorporating multi-resolution data, where a grid-based method is used to model crowd motion with UAVs' low-resolution... information and more computationally intensive (and time-consuming). Given that the deployment of fidelity selection results in simulation faces computational... [Table 1: Parameters for UAV and UGV for their detection.]
Variables affecting learning in a simulation experience: a mixed methods study.
Beischel, Kelly P
2013-02-01
The primary purpose of this study was to test a hypothesized model describing the direct effects of learning variables on anxiety and cognitive learning outcomes in a high-fidelity simulation (HFS) experience. The secondary purpose was to explain and explore student perceptions concerning the qualities and context of HFS affecting anxiety and learning. This study used a mixed methods quantitative-dominant explanatory design with concurrent qualitative data collection to examine variables affecting learning in undergraduate, beginning nursing students (N = 124). Being ready to learn, having a strong auditory-verbal learning style, and being prepared for simulation directly affected anxiety, whereas learning outcomes were directly affected by having strong auditory-verbal and hands-on learning styles. Anxiety did not quantitatively mediate cognitive learning outcomes as theorized, although students qualitatively reported debilitating levels of anxiety. This study advances nursing education science by providing evidence concerning variables affecting learning outcomes in HFS.
NASA Astrophysics Data System (ADS)
Liu, Jiechao; Jayakumar, Paramsothy; Stein, Jeffrey L.; Ersal, Tulga
2016-11-01
This paper investigates the level of model fidelity needed in order for a model predictive control (MPC)-based obstacle avoidance algorithm to be able to safely and quickly avoid obstacles even when the vehicle is close to its dynamic limits. The context of this work is large autonomous ground vehicles that manoeuvre at high speed within unknown, unstructured, flat environments and have significant vehicle dynamics-related constraints. Five different representations of vehicle dynamics models are considered: four variations of the two degrees-of-freedom (DoF) representation as lower fidelity models and a fourteen DoF representation with combined-slip Magic Formula tyre model as a higher fidelity model. It is concluded that the two DoF representation that accounts for tyre nonlinearities and longitudinal load transfer is necessary for the MPC-based obstacle avoidance algorithm in order to operate the vehicle at its limits within an environment that includes large obstacles. For less challenging environments, however, the two DoF representation with linear tyre model and constant axle loads is sufficient.
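As a rough illustration of the lowest-fidelity end of the spectrum discussed above, a linear two-DoF (lateral/yaw) bicycle model with constant axle loads and linear tyres can be stepped forward in time as follows. The parameter values are illustrative placeholders, not taken from the paper, and the paper's recommended model additionally includes tyre nonlinearities and longitudinal load transfer.

```python
def bicycle_2dof_step(v, r, delta, dt,
                      u=20.0,           # forward speed, m/s (held constant)
                      m=2000.0,         # vehicle mass, kg
                      iz=4000.0,        # yaw inertia, kg m^2
                      a=1.4, b=1.6,     # CG to front/rear axle, m
                      cf=8e4, cr=9e4):  # cornering stiffnesses, N/rad
    """One explicit-Euler step of lateral velocity v and yaw rate r
    under front steer angle delta (linear tyre model)."""
    alpha_f = delta - (v + a * r) / u       # front slip angle
    alpha_r = -(v - b * r) / u              # rear slip angle
    fyf, fyr = cf * alpha_f, cr * alpha_r   # linear lateral tyre forces
    v_dot = (fyf + fyr) / m - u * r
    r_dot = (a * fyf - b * fyr) / iz
    return v + dt * v_dot, r + dt * r_dot

# Straight-line driving (zero steer) leaves the lateral states at rest:
print(bicycle_2dof_step(0.0, 0.0, 0.0, 0.01))  # (0.0, 0.0)
```

The appeal for MPC is clear from the sketch: two states and algebraic tyre forces make each prediction-horizon rollout cheap, whereas a fourteen-DoF model with a combined-slip Magic Formula tyre is far costlier per step.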
Monogamy relation in multipartite continuous-variable quantum teleportation
NASA Astrophysics Data System (ADS)
Lee, Jaehak; Ji, Se-Wan; Park, Jiyong; Nha, Hyunchul
2016-12-01
Quantum teleportation (QT) is a fundamentally remarkable communication protocol that also finds many important applications for quantum informatics. Given a quantum entangled resource, it is crucial to know to what extent one can accomplish the QT. This is usually assessed in terms of output fidelity, which can also be regarded as an operational measure of entanglement. In the case of multipartite communication when each communicator possesses a part of an N -partite entangled state, not all pairs of communicators can achieve a high fidelity due to the monogamy property of quantum entanglement. We here investigate how such a monogamy relation arises in multipartite continuous-variable (CV) teleportation, particularly when using a Gaussian entangled state. We show a strict monogamy relation, i.e., a sender cannot achieve a fidelity higher than optimal cloning limit with more than one receiver. While this seems rather natural owing to the no-cloning theorem, a strict monogamy relation still holds even if the sender is allowed to individually manipulate the reduced state in collaboration with each receiver to improve fidelity. The local operations are further extended to non-Gaussian operations such as photon subtraction and addition, and we demonstrate that the Gaussian cloning bound cannot be beaten by more than one pair of communicators. Furthermore, we investigate a quantitative form of monogamy relation in terms of teleportation capability, for which we show that a faithful monogamy inequality does not exist.
Replication stress affects the fidelity of nucleosome-mediated epigenetic inheritance
Li, Wenzhu; Yi, Jia; Agbu, Pamela; Zhou, Zheng; Kelley, Richard L.; Jia, Songtao
2017-01-01
The fidelity of epigenetic inheritance, or the precision with which epigenetic information is passed along, is an essential parameter for measuring the effectiveness of the process. How the precision of the process is achieved or modulated, however, remains largely elusive. We have performed quantitative measurement of epigenetic fidelity, using position effect variegation (PEV) in Schizosaccharomyces pombe as a readout, to explore whether replication perturbation affects nucleosome-mediated epigenetic inheritance. We show that replication stresses, due to either hydroxyurea treatment or various forms of genetic lesions of the replication machinery, reduce the inheritance accuracy of CENP-A/Cnp1 nucleosome positioning within the centromere. Mechanistically, we demonstrate that excessive formation of single-stranded DNA, a common molecular abnormality under these conditions, may correlate with the reduction in fidelity of centromeric chromatin duplication. Furthermore, we show that replication stress broadly changes chromatin structure at various loci in the genome, such as telomeric heterochromatin expanding and mating-type locus heterochromatin spreading beyond its boundaries. Interestingly, the levels of heritable expansion at sub-telomeric heterochromatin regions are highly variable among independent cell populations. Finally, we show that HU treatment of the multicellular organisms C. elegans and D. melanogaster affects epigenetically programmed development and PEV, illustrating the evolutionary conservation of the phenomenon. Replication stress, in addition to its demonstrated role in genetic instability, promotes variable epigenetic instability throughout the epigenome. PMID:28749973
Physiological Based Simulator Fidelity Design Guidance
NASA Technical Reports Server (NTRS)
Schnell, Thomas; Hamel, Nancy; Postnikov, Alex; Hoke, Jaclyn; McLean, Angus L. M. Thom, III
2012-01-01
The evolution of the role of flight simulation has reinforced assumptions in aviation that the degree of realism in a simulation system directly correlates to the training benefit, i.e., more fidelity is always better. The construct of fidelity has several dimensions, including physical fidelity, functional fidelity, and cognitive fidelity. Interaction of different fidelity dimensions has an impact on trainee immersion, presence, and transfer of training. This paper discusses research results of a recent study that investigated if physiological-based methods could be used to determine the required level of simulator fidelity. Pilots performed a relatively complex flight task consisting of mission task elements of various levels of difficulty in a fixed base flight simulator and a real fighter jet trainer aircraft. Flight runs were performed using one forward visual channel of 40 deg. field of view for the lowest level of fidelity, 120 deg. field of view for the middle level of fidelity, and unrestricted field of view and full dynamic acceleration in the real airplane. Neuro-cognitive and physiological measures were collected under these conditions using the Cognitive Avionics Tool Set (CATS) and nonlinear closed form models for workload prediction were generated based on these data for the various mission task elements. One finding of the work described herein is that simple heart rate is a relatively good predictor of cognitive workload, even for short tasks with dynamic changes in cognitive loading. Additionally, we found that models that used a wide range of physiological and neuro-cognitive measures can further boost the accuracy of the workload prediction.
Augustsson, Hanna; von Thiele Schwarz, Ulrica; Stenfors-Hayes, Terese; Hasson, Henna
2015-06-01
The workplace has been suggested as an important arena for health promotion, but little is known about how the organizational setting influences the implementation of interventions. The aims of this study are to evaluate implementation fidelity in an organizational-level occupational health intervention and to investigate possible explanations for variations in fidelity between intervention units. The intervention consisted of an integration of health promotion, occupational health and safety, and a system for continuous improvements (Kaizen) and was conducted in a quasi-experimental design at a Swedish hospital. Implementation fidelity was evaluated with the Conceptual Framework for Implementation Fidelity and implementation factors used to investigate variations in fidelity with the Framework for Evaluating Organizational-level Interventions. A multi-method approach including interviews, Kaizen notes, and questionnaires was applied. Implementation fidelity differed between units even though the intervention was introduced and supported in the same way. Important differences in all elements proposed in the model for evaluating organizational-level interventions, i.e., context, intervention, and mental models, were found to explain the differences in fidelity. Implementation strategies may need to be adapted depending on the local context. Implementation fidelity, as well as pre-intervention implementation elements, is likely to affect the implementation success and needs to be assessed in intervention research. The high variation in fidelity across the units indicates the need for adjustments to the type of designs used to assess the effects of interventions. Thus, rather than using designs that aim to control variation, it may be necessary to use those that aim at exploring and explaining variation, such as adapted study designs.
System and method for the adaptive mapping of matrix data to sets of polygons
NASA Technical Reports Server (NTRS)
Burdon, David (Inventor)
2003-01-01
A system and method for converting bitmapped data, for example, weather data or thermal imaging data, to polygons is disclosed. The conversion of the data into polygons creates smaller data files. The invention is adaptive in that it allows for a variable degree of fidelity of the polygons. Matrix data is obtained. A color value is obtained. The color value is a variable used in the creation of the polygons. A list of cells to check is determined based on the color value. The list of cells to check is examined in order to determine a boundary list. The boundary list is then examined to determine vertices. The determination of the vertices is based on a prescribed maximum distance. When drawn, the ordered list of vertices create polygons which depict the cell data. The data files which include the vertices for the polygons are much smaller than the corresponding cell data files. The fidelity of the polygon representation can be adjusted by repeating the logic with varying fidelity values to achieve a given maximum file size or a maximum number of vertices per polygon.
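The adaptive-fidelity idea in the patent abstract can be sketched as follows. This is a hypothetical simplification, not the patented method: cells are thresholded by a colour value, boundary cells are collected, and vertices closer than a prescribed maximum distance to the chord joining their neighbours are dropped, trading polygon fidelity for file size.

```python
def boundary_cells(matrix, color):
    """Cells of the given colour with at least one 4-neighbour of a different
    colour (or lying on the matrix edge)."""
    rows, cols = len(matrix), len(matrix[0])
    cells = []
    for r in range(rows):
        for c in range(cols):
            if matrix[r][c] != color:
                continue
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(not (0 <= nr < rows and 0 <= nc < cols) or matrix[nr][nc] != color
                   for nr, nc in nbrs):
                cells.append((r, c))
    return cells

def _point_seg_dist(p, a, b):
    """Distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def simplify(vertices, max_dist):
    """Drop interior vertices within max_dist of the chord from the previous kept
    vertex to the next vertex; a larger max_dist means fewer vertices (lower fidelity)."""
    kept = [vertices[0]]
    for i in range(1, len(vertices) - 1):
        if _point_seg_dist(vertices[i], kept[-1], vertices[i + 1]) > max_dist:
            kept.append(vertices[i])
    kept.append(vertices[-1])
    return kept

# A 3x3 block of colour 1: every cell except the centre touches the outside.
grid = [[0] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        grid[r][c] = 1
print(len(boundary_cells(grid, 1)))                # 8
print(simplify([(0, 0), (1, 0.05), (2, 0)], 0.1))  # [(0, 0), (2, 0)]
```

Rerunning `simplify` with progressively larger `max_dist` values until the vertex count (and hence file size) falls under a target mirrors the abstract's "repeating the logic with varying fidelity values".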
A Comparative Study of High and Low Fidelity Fan Models for Turbofan Engine System Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1991-01-01
In this paper, a heterogeneous propulsion system simulation method is presented. The method is based on the formulation of a cycle model of a gas turbine engine. The model includes the nonlinear characteristics of the engine components via use of empirical data. The potential to simulate the entire engine operation on a computer without the aid of data is demonstrated by numerically generating "performance maps" for a fan component using two flow models of varying fidelity. The suitability of the fan models was evaluated by comparing the computed performance with experimental data. A discussion of the potential benefits and/or difficulties in connecting simulation solutions of differing fidelity is given.
Fidelity and outcomes in six integrated dual disorders treatment programs.
Chandler, Daniel W
2011-02-01
Fidelity scores and outcomes were measured in six outpatient programs in California which implemented Integrated Dual Disorders Treatment (IDDT). Outcomes were measured for 1 year in four sites and 2 years in two sites; fidelity was assessed at 6 month intervals. Three of the six sites achieved high fidelity (at least a 4 on a 5 point fidelity scale) and three moderate fidelity (at least a 3). Retention in treatment, mental health functioning, stage of substance abuse treatment, abstinence, and psychiatric hospitalization were measured. Outcomes for individual programs were generally positive but not consistent within programs or across programs. Using pooled data in a longitudinal regression model with random effects at person level and adjustment of standard errors for clustering by site, change over time was not statistically significant for the primary outcomes. Fidelity scores had limited association with positive outcomes.
Gate sequence for continuous variable one-way quantum computation
Su, Xiaolong; Hao, Shuhong; Deng, Xiaowei; Ma, Lingyu; Wang, Meihong; Jia, Xiaojun; Xie, Changde; Peng, Kunchi
2013-01-01
Measurement-based one-way quantum computation using cluster states as resources provides an efficient model to perform computation and information processing of quantum codes. Arbitrary Gaussian quantum computation can be implemented by sufficiently long sequences of single-mode and two-mode gates. However, continuous-variable gate sequences have not been realized so far due to the absence of cluster states larger than four submodes. Here we present the first continuous-variable gate sequence, consisting of a single-mode squeezing gate and a two-mode controlled-phase gate, based on a six-mode cluster state. The quantum property of this gate sequence is confirmed by the fidelities and the quantum entanglement of the two output modes, which depend on both the squeezing and controlled-phase gates. The experiment demonstrates the feasibility of implementing Gaussian quantum computation by means of accessible gate sequences.
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
2002-01-01
A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong Yuli; Zou Xubo; Guo Guangcan
We investigate the economical Gaussian cloning of coherent states with known phase, which produces M copies from N input replicas and can be implemented with degenerate parametric amplifiers and beam splitters. The achievable fidelity of a single copy is given by 2M√N/[√N(M-1) + √((1+N)(M^2+N))], which is larger than the optimal fidelity of universal Gaussian cloning. The cloning machine presented here works without ancillary optical modes and can be regarded as the continuous-variable generalization of the economical cloning machine for qudits.
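The quoted fidelity expression is easy to check numerically. The sketch below evaluates F = 2M√N/[√N(M-1) + √((1+N)(M^2+N))] and verifies two sanity limits: trivial 1 → 1 "cloning" is perfect, and 1 → 2 cloning with known phase beats the 2/3 fidelity usually cited for optimal universal Gaussian 1 → 2 cloning of coherent states.

```python
from math import sqrt

def cloning_fidelity(n_inputs, m_copies):
    """Single-copy fidelity of economical N -> M Gaussian cloning of coherent
    states with known phase, per the formula quoted in the abstract."""
    num = 2 * m_copies * sqrt(n_inputs)
    den = (sqrt(n_inputs) * (m_copies - 1)
           + sqrt((1 + n_inputs) * (m_copies ** 2 + n_inputs)))
    return num / den

print(cloning_fidelity(1, 1))  # 1.0: the identity channel
print(cloning_fidelity(1, 2))  # ~0.961, above the universal Gaussian 2/3 limit
```

More input replicas also help: increasing N for fixed M pushes the fidelity toward 1, consistent with the intuition that many copies of the input carry more phase-space information.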
Evaluating intervention fidelity: an example from a high-intensity interval training study.
Taylor, Kathryn L; Weston, Matthew; Batterham, Alan M
2015-01-01
Intervention fidelity refers to the degree to which an experimental manipulation has been implemented as intended, but simple, robust methods for quantifying fidelity have not been well documented. Therefore, we aim to illustrate a rigorous quantitative evaluation of intervention fidelity, using data collected during a high-intensity interval training intervention. Single-group measurement study. Seventeen adolescents (mean age ± standard deviation [SD] 14.0 ± 0.3 years) attended a 10-week high-intensity interval training intervention, comprising two exercise sessions per week. Sessions consisted of 4-7 45-s maximal effort repetitions, interspersed with 90-s rest. We collected heart rate data at 5-s intervals and recorded the peak heart rate for each repetition. The high-intensity exercise criterion was ≥ 90% of individual maximal heart rate. For each participant, we calculated the proportion of total exercise repetitions exceeding this threshold. A linear mixed model was applied to properly separate the variability in peak heart rate between- and within-subjects. Results are presented both as intention to treat (including missed sessions) and per protocol (only participants with 100% attendance; n=8). For intention to treat, the median (interquartile range) proportion of repetitions meeting the high-intensity criterion was 58% (42% to 68%). The mean peak heart rate was 85% of maximal, with a between-subject SD of 7.8 (95% confidence interval 5.4 to 11.3) percentage points and a within-subject SD of 15.1 (14.6 to 15.6) percentage points. For the per protocol analysis, the median proportion of high-intensity repetitions was 68% (47% to 86%). The mean peak heart rate was 91% of maximal, with between- and within-subject SDs of 3.1 (-1.3 to 4.6) and 3.4 (3.2 to 3.6) percentage points, respectively. Synthesising information on exercise session attendance and compliance (exercise intensity) quantifies the intervention dose and informs evaluations of treatment fidelity.
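The compliance part of the fidelity calculation above reduces, per participant, to a simple proportion: the share of repetitions in which peak heart rate reaches at least 90% of the individual's maximal heart rate. A minimal sketch with invented heart-rate data:

```python
def compliance(peak_hrs, hr_max, threshold=0.90):
    """Proportion of repetitions whose peak heart rate is at or above
    threshold * HRmax (the study's high-intensity criterion)."""
    hits = sum(1 for hr in peak_hrs if hr >= threshold * hr_max)
    return hits / len(peak_hrs)

# One hypothetical participant, HRmax 200 bpm, six repetitions:
print(compliance([185, 178, 181, 190, 175, 179], 200))  # 0.5
```

Pooling these per-repetition indicators across participants, and separating between-subject from within-subject variability in peak heart rate, is what the linear mixed model in the study provides beyond this raw proportion.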
Economical Unsteady High-Fidelity Aerodynamics for Structural Optimization with a Flutter Constraint
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Stanford, Bret K.
2017-01-01
Structural optimization with a flutter constraint for a vehicle designed to fly in the transonic regime is a particularly difficult task. In this speed range, the flutter boundary is very sensitive to aerodynamic nonlinearities, typically requiring high-fidelity Navier-Stokes simulations. However, the repeated application of unsteady computational fluid dynamics to guide an aeroelastic optimization process is very computationally expensive. This expense has motivated the development of methods that incorporate aspects of the aerodynamic nonlinearity, classical tools of flutter analysis, and more recent methods of optimization. While it is possible to use doublet lattice method aerodynamics, this paper focuses on the use of an unsteady high-fidelity aerodynamic reduced order model combined with successive transformations that allows for an economical way of utilizing high-fidelity aerodynamics in the optimization process. This approach is applied to the common research model wing structural design. As might be expected, the high-fidelity aerodynamics produces a heavier wing than that optimized with doublet lattice aerodynamics. It is found that the optimized lower skin of the wing using high-fidelity aerodynamics differs significantly from that using doublet lattice aerodynamics.
Android application for determining surgical variables in brain-tumor resection procedures
Vijayan, Rohan C.; Thompson, Reid C.; Chambless, Lola B.; Morone, Peter J.; He, Le; Clements, Logan W.; Griesenauer, Rebekah H.; Kang, Hakmook; Miga, Michael I.
2017-01-01
The fidelity of image-guided neurosurgical procedures is often compromised due to the mechanical deformations that occur during surgery. In recent work, a framework was developed to predict the extent of this brain shift in brain-tumor resection procedures. The approach uses preoperatively determined surgical variables to predict brain shift and then subsequently corrects the patient’s preoperative image volume to more closely match the intraoperative state of the patient’s brain. However, a clinical workflow difficulty with the execution of this framework is the preoperative acquisition of surgical variables. To simplify and expedite this process, an Android, Java-based application was developed for tablets to provide neurosurgeons with the ability to manipulate three-dimensional models of the patient’s neuroanatomy and determine an expected head orientation, craniotomy size and location, and trajectory to be taken into the tumor. These variables can then be exported for use as inputs to the biomechanical model associated with the correction framework. A multisurgeon, multicase mock trial was conducted to compare the accuracy of the virtual plan to that of a mock physical surgery. It was concluded that the Android application was an accurate, efficient, and timely method for planning surgical variables. PMID:28331887
Further Validation of the Pathways Housing First Fidelity Scale.
Goering, Paula; Veldhuizen, Scott; Nelson, Geoffrey B; Stefancic, Ana; Tsemberis, Sam; Adair, Carol E; Distasio, Jino; Aubry, Tim; Stergiopoulos, Vicky; Streiner, David L
2016-01-01
This study examined whether Housing First fidelity ratings correspond to program operation descriptions from administrative data and predict client outcomes. A multisite, randomized controlled trial (At Home/Chez Soi) in five Canadian cities included two assessments of 12 programs over two years. Outcomes for 1,158 clients were measured every six months. Associations between fidelity ratings and administrative data (Spearman correlations) and participant outcomes (mixed-effects modeling) were examined. Fidelity ratings were generally good (mean ± SD=136.6 ± 10.3 out of a possible range of 38-152; 87% of maximum value). Fidelity was significantly associated with three of four measures of program operation, with correlations between .55 and .60. Greater program fidelity was associated with improvement in housing stability, community functioning, and quality of life. Variation in program fidelity was associated with operations and outcomes, supporting scale validity and intervention effectiveness. These findings reinforced the value of using fidelity monitoring to conduct quality assurance and technical assistance activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, Lee; Gowardhan, Akshay; Lennox, Kristin
In the interest of promoting the international exchange of technical expertise, the US Department of Energy’s Office of Emergency Operations (NA-40) and the French Commissariat à l'Energie Atomique et aux énergies alternatives (CEA) requested that the National Atmospheric Release Advisory Center (NARAC) of Lawrence Livermore National Laboratory (LLNL) in Livermore, California host a joint table top exercise with experts in emergency management and atmospheric transport modeling. In this table top exercise, LLNL and CEA compared each other’s flow and dispersion models. The goal of the comparison was to facilitate the exchange of knowledge, capabilities, and practices, and to demonstrate the utility of modeling dispersal at different levels of computational fidelity. Two modeling approaches were examined: a regional-scale modeling approach, appropriate for simple terrain and/or very large releases, and an urban-scale modeling approach, appropriate for small releases in a city environment. This report is a summary of LLNL and CEA modeling efforts from this exercise. Two different types of LLNL and CEA models were employed in the analysis: urban-scale models (Aeolus CFD at LLNL/NARAC and Parallel-Micro-SWIFT-SPRAY, PMSS, at CEA) for analysis of a 5,000 Ci radiological release and Lagrangian particle dispersion models (LODI at LLNL/NARAC and PSPRAY at CEA) for analysis of a much larger (500,000 Ci) regional radiological release. Two densely populated urban locations were chosen: Chicago, with its high-rise skyline and gridded street network, and Paris, with its more consistent, lower building height and complex unaligned street network. Each location was considered under early summer daytime and nighttime conditions.
Different levels of fidelity were chosen for each scale: (1) a lower-fidelity mass-consistent diagnostic model, intermediate-fidelity Navier-Stokes RANS models, and higher-fidelity Navier-Stokes LES for urban-scale analysis, and (2) lower-fidelity single-profile meteorology versus a higher-fidelity three-dimensional gridded weather forecast for regional-scale analysis. Tradeoffs between computation time and the fidelity of the results are discussed for both scales. LES, for example, requires nearly 100 times more processor time than the mass-consistent diagnostic model or the RANS model, and seems better able to capture flow entrainment behind tall buildings. As anticipated, results obtained by LLNL and CEA at regional scale around Chicago and Paris look very similar in terms of both atmospheric dispersion of the radiological release and total effective dose, as both LLNL and CEA used the same meteorological data, Lagrangian particle dispersion models, and dose coefficients. LLNL and CEA urban-scale modeling results show consistent phenomenological behavior and predict similar impacted areas even though the detailed 3D flow patterns differ, particularly for the Chicago cases, where differences in vertical entrainment behind tall buildings are particularly notable. Although RANS and LES (LLNL) models incorporate more detailed physics than do mass-consistent diagnostic flow models (CEA), it is not possible to reach definite conclusions about the prediction fidelity of the various models, as experimental measurements were not available for comparison. Stronger conclusions about the relative performance of the models and the tradeoffs involved in model simplification could be drawn from a systematic benchmarking of urban-scale modeling; this could be the purpose of a future US/French collaborative exercise.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
NASA Astrophysics Data System (ADS)
Butchart, Neal; Anstey, James A.; Hamilton, Kevin; Osprey, Scott; McLandress, Charles; Bushell, Andrew C.; Kawatani, Yoshio; Kim, Young-Ha; Lott, Francois; Scinocca, John; Stockdale, Timothy N.; Andrews, Martin; Bellprat, Omar; Braesicke, Peter; Cagnazzo, Chiara; Chen, Chih-Chieh; Chun, Hye-Yeong; Dobrynin, Mikhail; Garcia, Rolando R.; Garcia-Serrano, Javier; Gray, Lesley J.; Holt, Laura; Kerzenmacher, Tobias; Naoe, Hiroaki; Pohlmann, Holger; Richter, Jadwiga H.; Scaife, Adam A.; Schenzinger, Verena; Serva, Federico; Versick, Stefan; Watanabe, Shingo; Yoshida, Kohei; Yukimoto, Seiji
2018-03-01
The Stratosphere-troposphere Processes And their Role in Climate (SPARC) Quasi-Biennial Oscillation initiative (QBOi) aims to improve the fidelity of tropical stratospheric variability in general circulation and Earth system models by conducting coordinated numerical experiments and analysis. In the equatorial stratosphere, the QBO is the most conspicuous mode of variability. Five coordinated experiments have therefore been designed to (i) evaluate and compare the verisimilitude of modelled QBOs under present-day conditions, (ii) identify robustness (or alternatively the spread and uncertainty) in the simulated QBO response to commonly imposed changes in model climate forcings (e.g. a doubling of CO2 amounts), and (iii) examine model dependence of QBO predictability. This paper documents these experiments and the recommended output diagnostics. The rationale behind the experimental design and choice of diagnostics is presented. To facilitate scientific interpretation of the results in other planned QBOi studies, consistent descriptions of the models performing each experiment set are given, with those aspects particularly relevant for simulating the QBO tabulated for easy comparison.
Robb, Sheri L; Burns, Debra S; Docherty, Sharron L; Haase, Joan E
2011-11-01
The Stories and Music for Adolescent/Young Adult Resilience during Transplant (SMART) study (R01NR008583; U10CA098543; U10CA095861) is an ongoing multi-site Children's Oncology Group randomized clinical trial testing the efficacy of a therapeutic music video intervention for adolescents/young adults (11-24 years of age) with cancer undergoing stem cell transplant. Treatment fidelity strategies from our trial are consistent with the National Institutes of Health (NIH) Behavior Change Consortium Treatment Fidelity Workgroup (BCC) recommendations and provide a successful working model for treatment fidelity implementation in a large, multi-site behavioral intervention study. In this paper, we summarize 20 specific treatment fidelity strategies used in the SMART trial and how these strategies correspond with NIH BCC recommendations in five specific areas: (1) study design, (2) training providers, (3) delivery of treatment, (4) receipt of treatment, and (5) enactment of treatment skills. Increased use and reporting of treatment fidelity procedures is essential in advancing the reliability and validity of behavioral intervention research. The SMART trial provides a strong model for the application of fidelity strategies to improve scientific findings and addresses the absence of published literature, illustrating the application of BCC recommendations in behavioral intervention studies. Copyright © 2010 John Wiley & Sons, Ltd.
A methodology for the assessment of manned flight simulator fidelity
NASA Technical Reports Server (NTRS)
Hess, Ronald A.; Malsbury, Terry N.
1989-01-01
A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.
A Planar Quasi-Static Constraint Mode Tire Model
2015-07-10
Ma, Rui; Ferris, John B.
The proposed planar quasi-static constraint mode tire model strikes a balance between heuristic tire models (such as a linear point-follower), which lack the fidelity to make accurate chassis load predictions, and computationally intensive models. (UNCLASSIFIED: Distribution Statement A. Cleared for public release.)
An Analysis of the Educational Value of Low-Fidelity Anatomy Models as External Representations
ERIC Educational Resources Information Center
Chan, Lap Ki; Cheng, Maurice M. W.
2011-01-01
Although high-fidelity digital models of human anatomy based on actual cross-sectional images of the human body have been developed, reports on the use of physical models in anatomy teaching continue to appear. This article aims to examine the common features shared by these physical models and analyze their educational value based on the…
Development and validation of a Housing First fidelity survey.
Gilmer, Todd P; Stefancic, Ana; Sklar, Marisa; Tsemberis, Sam
2013-09-01
Programs that use the Housing First model are being implemented throughout the United States and internationally. The authors describe the development and validation of a Housing First fidelity survey. A 46-item survey was developed to measure fidelity across five domains: housing process and structure, separation of housing and services, service philosophy, service array, and team structure. The survey was administered to staff and clients of 93 supported-housing programs in California. Exploratory and confirmatory factor analyses were used to identify the items and model structure that best fit the data. Sixteen items were retained in a two-factor model, one related to approach to housing, separation of housing and services, and service philosophy and one related to service array and team structure. Our survey mapped program practices by using a common metric that captured variation in fidelity to Housing First across a large-scale implementation of supported-housing programs.
High-fidelity data embedding for image annotation.
He, Shan; Kirovski, Darko; Wu, Min
2009-02-01
High fidelity is a demanding requirement for data hiding, especially for images with artistic or medical value. This correspondence proposes a high-fidelity image watermarking method for annotation with robustness to moderate distortion. To achieve high fidelity in the embedded image, we introduce a visual perception model that aims at quantifying the local tolerance to noise for arbitrary imagery. Based on this model, we embed two kinds of watermarks: a pilot watermark that indicates the existence of the watermark and an information watermark that conveys a payload of several dozen bits. The objective is to embed 32 bits of metadata into a single image in such a way that it is robust to JPEG compression and cropping. We demonstrate the effectiveness of the visual model and the application of the proposed annotation technology using a database of challenging photographic and medical images that contain a large amount of smooth regions.
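The visual perception model is only summarized here, but its role (quantifying local tolerance to noise) can be illustrated with a simple local-contrast heuristic. A hedged sketch: the function name and the variance-based rule are illustrative assumptions, not the correspondence's actual model:

```python
def noise_tolerance_map(img, radius=1):
    """Per-pixel noise tolerance proportional to local contrast.

    img is a 2D list of grayscale values. Smooth regions get a low
    tolerance (embedding noise there is visible), textured regions a
    high one. The local standard deviation used here is a stand-in
    for the paper's perceptual model.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the (radius-clipped) neighborhood around (x, y)
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
    return out

# A flat patch tolerates no added noise; a patch spanning an edge does.
flat = noise_tolerance_map([[5] * 4 for _ in range(4)])
edged = noise_tolerance_map([[0, 0, 255, 255] for _ in range(4)])
```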
GIS Data Based Automatic High-Fidelity 3D Road Network Modeling
NASA Technical Reports Server (NTRS)
Wang, Jie; Shen, Yuzhong
2011-01-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models are generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially for roads existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules for road design (e.g., cross slope, superelevation, grade) is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.
ERIC Educational Resources Information Center
Osmundson, Ellen; Herman, Joan; Ringstaff, Cathy; Dai, Yunyun; Timms, Mike
2012-01-01
A central challenge in efficacy studies centers on the issue of "fidelity of implementation," that is, the extent to which participants use the curriculum specified by curriculum developers. In this study, we describe and discuss a "fidelity of implementation" model using multiple methods and instruments to compare two versions of a science…
ERIC Educational Resources Information Center
Kopp, Jason P.; Hulleman, Chris S.; Harackiewicz, Judith M.; Rozek, Chris
2012-01-01
Assessing fidelity of implementation is becoming increasingly important in education research, in particular as a tool for understanding variations in treatment effectiveness. Fidelity of implementation is defined as "the determination of how well an intervention is implemented in comparison with the original program design during an efficacy…
ERIC Educational Resources Information Center
Randall, Camille J.; Biggs, Bridget K.
2008-01-01
Given that the development of treatment fidelity assessment protocol is an integral but too frequently ignored aspect of clinical trials for psychological treatments, the Intensive Mental Health Program (IMHP) sought to build fidelity activities into training, program evaluation, and clinical recordkeeping from the outset of a 3 year study period.…
ERIC Educational Resources Information Center
March, Amanda L.; Castillo, Jose M.; Batsche, George M.; Kincaid, Donald
2016-01-01
The literature on RTI has indicated that professional development and coaching are critical to facilitating problem-solving implementation with fidelity. This study examined the extent to which systems coaching related to the fidelity of problem-solving implementation in 31 schools from six districts. Schools participated in three years of a…
Read, Emma K; Vallevand, Andrea; Farrell, Robin M
2016-01-01
This paper describes the development and evaluation of training intended to enhance students' performance on their first live-animal ovariohysterectomy (OVH). Cognitive task analysis informed a seven-page lab manual, 30-minute video, and 46-item OVH checklist (categorized into nine surgery components and three phases of surgery). We compared two spay simulator models (higher-fidelity silicone versus lower-fidelity cloth and foam). Third-year veterinary students were randomly assigned to a training intervention: lab manual and video only; lab manual, video, and $675 silicone-based model; lab manual, video, and $64 cloth and foam model. We then assessed transfer of training to a live-animal OVH. Chi-square analyses determined statistically significant differences between the interventions on four of nine surgery components, all three phases of surgery, and overall score. Odds ratio analyses indicated that training with a spay model improved the odds of attaining an excellent or good rating on 25 of 46 checklist items, six of nine surgery components, all three phases of surgery, and the overall score. Odds ratio analyses comparing the spay models indicated an advantage for the $675 silicone-based model on only 6 of 46 checklist items, three of nine surgery components, and one phase of surgery. Training with a spay model improved performance when compared to training with a manual and video only. Results suggested that training with a lower-fidelity/cost model might be as effective when compared to a higher-fidelity/cost model. Further research is required to investigate the effects of simulator fidelity and cost on transfer of training to the operational environment.
Gilmer, Todd P
2016-06-01
Permanent supportive housing (PSH) programs are being implemented nationally and on a large scale. However, little is known about PSH for transition-age youths (ages 18 to 24). This study estimated health services costs associated with participation in PSH among youths and examined the relationship between fidelity to the Housing First model and health service outcomes. Administrative data were used in a quasi-experimental, difference-in-differences design with a propensity score-matched contemporaneous control group to compare health service costs among 2,609 youths in PSH and 2,609 youths with serious mental illness receiving public mental health services in California from January 1, 2004, through June 30, 2010. Data from a survey of PSH program practices were merged with the administrative data to examine changes in service use among 1,299 youths in 63 PSH programs by level of fidelity to the Housing First model. Total service costs increased by $13,337 among youths in PSH compared with youths in the matched control group. Youths in higher-fidelity programs had larger declines in use of inpatient services and larger increases in outpatient visits compared with youths in lower-fidelity programs. PSH for youths was associated with substantial increases in costs. Higher-fidelity PSH programs may be more effective than lower-fidelity programs in reducing use of inpatient services and increasing use of outpatient services. As substantial investments are made in PSH for youths, it is imperative that these programs are designed and implemented to maximize their effectiveness and their impact on youth outcomes.
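The quasi-experimental design described above rests on a difference-in-differences contrast. A minimal sketch of that estimator with made-up numbers (not the study's cost data), omitting the propensity-score matching step:

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences estimate of a program effect.

    Each argument is a list of per-person costs. The estimate is the
    treated group's pre-to-post change minus the control group's
    change, which nets out time trends shared by both groups.
    """
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treat_post) - mean(treat_pre)
    control_change = mean(ctrl_post) - mean(ctrl_pre)
    return treated_change - control_change

# Illustrative numbers only: costs rise by 40 in the treated group,
# but 10 of that is a shared trend, so the estimated effect is 30.
effect = diff_in_diff([100, 100], [130, 150], [90, 110], [100, 120])
```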
Procedural wound geometry and blood flow generation for medical training simulators
NASA Astrophysics Data System (ADS)
Aras, Rifat; Shen, Yuzhong; Li, Jiang
2012-02-01
Efficient application of wound treatment procedures is vital in both emergency room and battle zone scenes. In order to train first responders for such situations, physical casualty simulation kits, which are composed of tens of individual items, are commonly used. Similar to any other training scenarios, computer simulations can be effective means for wound treatment training purposes. For immersive and high fidelity virtual reality applications, realistic 3D models are key components. However, creation of such models is a labor intensive process. In this paper, we propose a procedural wound geometry generation technique that parameterizes key simulation inputs to establish the variability of the training scenarios without the need of labor intensive remodeling of the 3D geometry. The procedural techniques described in this work are entirely handled by the graphics processing unit (GPU) to enable interactive real-time operation of the simulation and to relieve the CPU for other computational tasks. The visible human dataset is processed and used as a volumetric texture for the internal visualization of the wound geometry. To further enhance the fidelity of the simulation, we also employ a surface flow model for blood visualization. This model is realized as a dynamic texture that is composed of a height field and a normal map and animated at each simulation step on the GPU. The procedural wound geometry and the blood flow model are applied to a thigh model and the efficiency of the technique is demonstrated in a virtual surgery scene.
NASA Technical Reports Server (NTRS)
Coats, Sloan; Smerdon, Jason E.; Cook, Benjamin I.; Seager, Richard
2013-01-01
The temporal stationarity of the teleconnection between the tropical Pacific Ocean and North America (NA) is analyzed in atmosphere-only and in coupled last-millennium, historical, and control runs from the Coupled Model Intercomparison Project Phase 5 data archive. The teleconnection, defined as the correlation between December-January-February (DJF) tropical Pacific sea surface temperatures (SSTs) and DJF 200 mb geopotential height, is found to be nonstationary on multidecadal timescales. There are significant changes in the spatial features of the teleconnection over NA in continuous 56-year segments of the last-millennium and control simulations. Analysis of atmosphere-only simulations forced with observed SSTs indicates that atmospheric noise cannot account for the temporal variability of the teleconnection, which instead is likely explained by the strength of, and multidecadal changes in, tropical Pacific Ocean variability. These results have implications for teleconnection-based analyses of model fidelity in simulating precipitation, as well as any reconstruction and forecasting efforts that assume stationarity of the observed teleconnection.
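Assessing stationarity as described above amounts to comparing a correlation statistic across successive multidecadal windows. A minimal sketch for a single pair of series (the study correlates SSTs with a geopotential height field point by point; the 56-year window length follows the abstract):

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def windowed_correlations(x, y, window=56):
    """Correlation in successive non-overlapping windows; a large
    spread across windows suggests a nonstationary teleconnection."""
    return [pearson(x[i:i + window], y[i:i + window])
            for i in range(0, len(x) - window + 1, window)]

# A perfectly stationary toy case: y tracks x exactly, so every
# 56-sample window yields a correlation of 1.
x = [i % 7 - 3 for i in range(112)]
y = list(x)
rs = windowed_correlations(x, y)
```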
NASA Astrophysics Data System (ADS)
Mukhopadhyay, P.; Phani Murali Krishna, R.; Goswami, Bidyut B.; Abhik, S.; Ganai, Malay; Mahakur, M.; Khairoutdinov, Marat; Dudhia, Jimmy
2016-05-01
Despite significant improvements in numerical model physics, resolution, and numerics, general circulation models (GCMs) find it difficult to simulate realistic seasonal and intraseasonal variability over the global tropics, particularly over the Indian summer monsoon (ISM) region. The bias is mainly attributed to the improper representation of physical processes, among which cloud and convective processes appear to play a major role. In recent times, the NCEP CFSv2 model has been adopted under the Monsoon Mission for dynamical monsoon forecasts over the Indian region. Analyses of CFSv2 climate free runs at two resolutions, T126 and T382, show largely similar biases in the simulated seasonal rainfall, in the intraseasonal variability at different scales over the global tropics, and in the representation of tropical waves. The biases of CFSv2 thus indicate a deficiency in the model's parameterization of cloud and convective processes. Against this background, and to improve the model's fidelity, two approaches have been adopted. First, in the superparameterization approach, 32 cloud-resolving models, each with a horizontal resolution of 4 km, are embedded in each GCM (CFSv2) grid and the conventional sub-grid-scale convective parameterization is deactivated, in order to resolve cloud processes that otherwise remain unresolved. The superparameterized CFSv2 (SP-CFS) is developed at a coarser resolution (T62). The model is integrated for six and a half years in climate free run mode, initialised from 16 May 2008. The analyses reveal that SP-CFS simulates a significantly improved mean state compared to the default CFS: the systematic biases of deficient rainfall over the Indian land mass and a cold troposphere are substantially reduced. Most importantly, the convectively coupled equatorial waves and the eastward-propagating MJO are simulated with more fidelity in SP-CFS.
The improvement in the model mean state is attributable to systematic improvements in the moisture field, temperature profile, and moist instability. The model also better captures the cloud-rainfall relationship. This initiative demonstrates the role of cloud processes in the mean state of a coupled GCM. Because the superparameterization approach is computationally expensive, in a second approach the conventional Simplified Arakawa-Schubert (SAS) scheme is replaced by a revised SAS scheme (RSAS), and the older, simplified Zhao-Carr (1997) cloud scheme is replaced by WSM6 in CFSv2 (hereafter CFS-CR). The primary objective of these modifications is to improve the distribution of convective rain in the model through RSAS and of grid-scale (large-scale, non-convective) rain through WSM6. WSM6 computes the tendencies of six classes of hydrometeors (water vapour, cloud water, ice, snow, graupel, rain water) at each model grid point and contributes to the low, middle, and high cloud fractions. By incorporating WSM6, for the first time in a global climate model, we are able to show a reasonable simulation of the vertical and spatial distribution of cloud ice and cloud liquid water as compared to CloudSat observations. CFS-CR also shows improvement in simulating the annual rainfall cycle and intraseasonal variability over the ISM region. These improvements are likely associated with a better convective and stratiform rainfall distribution in the model. These initiatives address the long-standing issue of resolving cloud processes in climate models and demonstrate that improved cloud and convective process parameterizations can reduce systematic bias and improve model fidelity.
Memory matters: influence from a cognitive map on animal space use.
Gautestad, Arild O
2011-10-21
A vertebrate individual's cognitive map provides a capacity for site fidelity and long-distance returns to favorable patches. Fractal-geometrical analysis of individual space use based on collection of telemetry fixes makes it possible to verify the influence of a cognitive map on the spatial scatter of habitat use and also to what extent space use has been of a scale-specific versus a scale-free kind. This approach rests on a statistical mechanical level of system abstraction, where micro-scale details of behavioral interactions are coarse-grained to macro-scale observables like the fractal dimension of space use. In this manner, the magnitude of the fractal dimension becomes a proxy variable for distinguishing between main classes of habitat exploration and site fidelity, like memory-less (Markovian) Brownian motion and Levy walk and memory-enhanced space use like Multi-scaled Random Walk (MRW). In this paper previous analyses are extended by exploring MRW simulations under three scenarios: (1) central place foraging, (2) behavioral adaptation to resource depletion (avoidance of latest visited locations) and (3) transition from MRW towards Levy walk by narrowing memory capacity to a trailing time window. A generalized statistical-mechanical theory with the power to model cognitive map influence on individual space use will be important for statistical analyses of animal habitat preferences and the mechanics behind site fidelity and home ranges. Copyright © 2011 Elsevier Ltd. All rights reserved.
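The Multi-scaled Random Walk discussed above mixes exploratory steps with occasional memory-based returns to previously visited sites. A minimal sketch under simplifying assumptions (Gaussian rather than scale-free step lengths, a uniform draw over the visit history, and an invented return_prob parameter); this is not Gautestad's exact formulation:

```python
import random

def multiscaled_random_walk(steps, return_prob=0.1, seed=0):
    """Sketch of a memory-enhanced random walk (MRW-style).

    With probability return_prob the walker returns to a previously
    visited site drawn from its 'cognitive map' (the visit history);
    otherwise it takes a random exploratory step. Setting
    return_prob=0 recovers memory-less Brownian-like motion.
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    history = [(x, y)]
    for _ in range(steps):
        if rng.random() < return_prob:
            x, y = rng.choice(history)  # targeted return: site fidelity
        else:
            x += rng.gauss(0, 1)
            y += rng.gauss(0, 1)
        history.append((x, y))
    return history

path = multiscaled_random_walk(1000, return_prob=0.1, seed=1)
```

Because returns copy exact coordinates from the history, site fidelity shows up as repeated locations in the path, which is what fractal-geometrical analysis of telemetry fixes is meant to detect at the macro scale.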
Measurement fidelity of heart rate variability signal processing: The devil is in the details
Jarrin, Denise C.; McGrath, Jennifer J.; Giovanniello, Sabrina; Poirier, Paul; Lambert, Marie
2017-01-01
Heart rate variability (HRV) is a particularly valuable quantitative marker of the flexibility and balance of the autonomic nervous system. Significant advances in software programs to automatically derive HRV have led to its extensive use in psychophysiological research. However, there is a lack of systematic comparisons across software programs used to derive HRV indices. Further, researchers report meager details on important signal processing decisions, making synthesis across studies challenging. The aim of the present study was to evaluate the measurement fidelity of time- and frequency-domain HRV indices derived from three predominant signal processing software programs commonly used in clinical and research settings. Triplicate ECG recordings were derived from 20 participants using identical data acquisition hardware. Among the time-domain indices, there was strong to excellent correspondence (ICCavg = 0.93) for SDNN, SDANN, SDNNi, rMSSD, and pNN50. The frequency-domain indices yielded excellent correspondence (ICCavg = 0.91) for LF, HF, and the LF/HF ratio, except for VLF, which exhibited poor correspondence (ICCavg = 0.19). Stringent user decisions and technical specifications for nuanced HRV processing details are essential to ensure measurement fidelity across signal processing software programs. PMID:22820268
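The time-domain indices compared in this study have simple closed-form definitions. As a minimal sketch from a list of RR intervals (not a reimplementation of any of the evaluated software programs, which also perform artifact correction and detrending):

```python
import math

def hrv_time_domain(rr_ms):
    """Common time-domain HRV indices from RR intervals in milliseconds.

    Hypothetical minimal sketch; real HRV software applies artifact
    rejection and beat classification before these statistics.
    """
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    # SDNN: sample standard deviation of all normal-to-normal intervals
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    # rMSSD: root mean square of successive differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # pNN50: percentage of successive differences exceeding 50 ms
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"SDNN": sdnn, "rMSSD": rmssd, "pNN50": pnn50}
```

Even these few lines embed decisions (sample vs. population variance, strict vs. non-strict 50 ms threshold) of exactly the kind the authors argue must be reported for cross-study synthesis.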
Strategies for improving neural signal detection using a neural-electronic interface.
Szlavik, Robert B
2003-03-01
There have been various theoretical and experimental studies presented in the literature that focus on interfacing neurons with discrete electronic devices, such as transistors. From both a theoretical and experimental perspective, these studies have emphasized the variability in the characteristics of the detected action potential from the nerve cell. The demonstrated lack of reproducible fidelity of the nerve cell action potential at the device junction would make it impractical to implement these devices in any neural prosthetic application where reliable detection of the action potential was a prerequisite. In this study, the effects of several different physical parameters on the fidelity of the detected action potential at the device junction are investigated and discussed. The impact of variations in the extracellular resistivity, which directly affects the junction seal resistance, is studied along with the impact of variable nerve cell membrane capacitance and variations in the injected charge. These parameters are discussed in the context of their suitability to design manipulation for the purpose of improving the fidelity of the detected neural action potential. In addition to investigating the effects of variations in these parameters, the applicability of the linear equivalent circuit approach to calculating the junction potential is investigated.
NASA Technical Reports Server (NTRS)
Schlegel, T. T.; Arenare, B.; Greco, E. C.; DePalma, J. L.; Starc, V.; Nunez, T.; Medina, R.; Jugo, D.; Rahman, M.A.; Delgado, R.
2007-01-01
We investigated the accuracy of several conventional and advanced resting ECG parameters for identifying obstructive coronary artery disease (CAD) and cardiomyopathy (CM). Advanced high-fidelity 12-lead ECG tests (approx. 5 min, supine) were first performed on a "training set" of 99 individuals: 33 with ischemic or dilated CM and low ejection fraction (EF less than 40%); 33 with catheterization-proven obstructive CAD but normal EF; and 33 age- and gender-matched healthy controls. Multiple conventional and advanced ECG parameters were studied for their individual and combined retrospective accuracies in detecting underlying disease, the advanced parameters falling within the following categories: 1) signal-averaged ECG, including 12-lead high-frequency QRS (150-250 Hz) plus multiple filtered and unfiltered parameters from the derived Frank leads; 2) 12-lead P, QRS and T-wave morphology via singular value decomposition (SVD) plus signal averaging; 3) multichannel (12-lead, derived Frank lead, SVD lead) beat-to-beat QT interval variability; 4) spatial ventricular gradient (and gradient component) variability; and 5) heart rate variability. Using stepwise and then generalized additive logistic modeling, several multiparameter ECG SuperScores were derived, each with 100% retrospective accuracy in detecting underlying CM or CAD. The performance of these same SuperScores was then prospectively evaluated using a test set of another 120 individuals (40 new individuals in each of the CM, CAD and control groups). All 12-lead ECG SuperScores retrospectively generated for CM continued to perform well in prospectively identifying CM (i.e., areas under the ROC curve greater than 0.95), with one such score (containing just 4 components) maintaining 100% prospective accuracy. SuperScores retrospectively generated for CAD performed somewhat less accurately, with prospective areas under the ROC curve typically in the 0.90-0.95 range.
We conclude that resting 12-lead high-fidelity ECG employing and combining the results of several advanced ECG software techniques shows great promise as a rapid and inexpensive tool for screening of heart disease.
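The prospective evaluation above is reported as areas under the ROC curve. As a hedged, self-contained sketch (not the authors' analysis code), the AUC of any diagnostic score can be computed directly from its rank-sum (Mann-Whitney) identity:

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC via the rank-sum identity: the probability that a
    randomly chosen diseased case scores higher than a randomly
    chosen control, counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 corresponds to the perfectly separating SuperScore described in the abstract; values of 0.90-0.95 mean a small fraction of case-control pairs are ranked in the wrong order.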
V/STOL propulsion control analysis: Phase 2, task 5-9
NASA Technical Reports Server (NTRS)
1981-01-01
Typical V/STOL propulsion control requirements were derived for transition between vertical and horizontal flight using the General Electric RALS (Remote Augmented Lift System) concept. Steady-state operating requirements were defined for a typical Vertical-to-Horizontal transition and for a typical Horizontal-to-Vertical transition. Control mode requirements were established and multi-variable regulators developed for individual operating conditions. Proportional/Integral gain schedules were developed and were incorporated into a transition controller with capabilities for mode switching and manipulated variable reassignment. A non-linear component-level transient model of the engine was developed and utilized to provide a preliminary check-out of the controller logic. An inlet and nozzle effects model was developed for subsequent incorporation into the engine model and an aircraft model was developed for preliminary flight transition simulations. A condition monitoring development plan was developed and preliminary design requirements established. The Phase 1 long-range technology plan was refined and restructured toward the development of a real-time high fidelity transient model of a supersonic V/STOL propulsion system and controller for use in a piloted simulation program at NASA-Ames.
Summary of the white paper of DICOM WG24 'DICOM in Surgery'
NASA Astrophysics Data System (ADS)
Lemke, Heinz U.
2007-03-01
Standards for creating and integrating information about patients, equipment, and procedures are vitally needed when planning for an efficient Operating Room (OR). The DICOM Working Group 24 (WG24) has been established to develop DICOM objects and services related to Image Guided Surgery (IGS). To determine these standards, it is important to define day-to-day, step-by-step surgical workflow practices and create surgery workflow models per procedure or per variable case. A well-defined workflow and a high-fidelity patient model will be the base of activities for both radiation therapy and surgery. Considering the present and future requirements for surgical planning and intervention, such a patient model must be n-dimensional, where n may include the spatial and temporal dimensions as well as a number of functional variables. As the boundaries between radiation therapy, surgery and interventional radiology become less well defined, precise patient models will become the greatest common denominator for all therapeutic disciplines. In addition to imaging, the focus of WG24 should, therefore, also be to serve the therapeutic disciplines by enabling modelling technology to be based on standards.
Site fidelity of the declining amphibian Rana sierrae (Sierra Nevada yellow-legged frog)
Kathleen Matthews; Haiganoush Preisler
2010-01-01
From 1997 to 2006, we used mark-recapture models to estimate the site fidelity of 1250 Sierra Nevada yellow-legged frogs (Rana sierrae) in Kings Canyon National Park, California, USA, during their three main activity periods of overwintering, breeding, and feeding. To quantify site fidelity, the tendency to return to and reuse previously occupied...
NASA Technical Reports Server (NTRS)
Mavris, Dimitri; Osburg, Jan
2005-01-01
An important enabler of the new national Vision for Space Exploration is the ability to rapidly and efficiently develop optimized concepts for the manifold future space missions that this effort calls for. The design of such complex systems requires a tight integration of all the engineering disciplines involved, in an environment that fosters interaction and collaboration. The research performed under this grant explored areas where the space systems design process can be enhanced: by integrating risk models into the early stages of the design process, and by including rapid-turnaround variable-fidelity tools for key disciplines. Enabling early assessment of mission risk will allow designers to perform trades between risk and design performance during the initial design space exploration. Entry into planetary atmospheres will require an increased emphasis of the critical disciplines of aero- and thermodynamics. This necessitates the pulling forward of EDL disciplinary expertise into the early stage of the design process. Radiation can have a large potential impact on overall mission designs, in particular for the planned nuclear-powered robotic missions under Project Prometheus and for long-duration manned missions to the Moon, Mars and beyond under Project Constellation. This requires that radiation and associated risk and hazards be assessed and mitigated at the earliest stages of the design process. Hence, RPS is another discipline needed to enhance the engineering competencies of conceptual design teams. Researchers collaborated closely with NASA experts in those disciplines, and in overall space systems design, at Langley Research Center and at the Jet Propulsion Laboratory. This report documents the results of this initial effort.
Finite grid instability and spectral fidelity of the electrostatic Particle-In-Cell algorithm
Huang, C. -K.; Zeng, Y.; Wang, Y.; ...
2016-10-01
The origin of the Finite Grid Instability (FGI) is studied by resolving the dynamics in the 1D electrostatic Particle-In-Cell (PIC) model in the spectral domain at the single particle level and at the collective motion level. The spectral fidelity of the PIC model is contrasted with the underlying physical system or the gridless model. The systematic spectral phase and amplitude errors from the charge deposition and field interpolation are quantified for common particle shapes used in the PIC models. Lastly, it is shown through such analysis and in simulations that the lack of spectral fidelity relative to the physical system due to the existence of aliased spatial modes is the major cause of the FGI in the PIC model.
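The aliasing mechanism identified here as the major cause of the FGI can be illustrated with a toy script (a sketch under simplifying assumptions, not the authors' PIC code): a spatial mode with wavenumber above the grid's Nyquist limit pi/dx is indistinguishable at the grid points from its alias folded back into the resolved band by the grid wavenumber 2*pi/dx.

```python
import math

dx = 0.1                       # grid spacing (arbitrary units)
kg = 2 * math.pi / dx          # grid wavenumber
k_true = 0.8 * kg              # physical mode above the Nyquist limit pi/dx
k_alias = k_true - kg          # alias folded into the resolved band

# Sample both modes at the grid points: they coincide exactly there,
# so charge deposition on the grid cannot tell them apart.
grid = [i * dx for i in range(16)]
max_err = max(abs(math.cos(k_true * x) - math.cos(k_alias * x)) for x in grid)
```

Since `k_true * x - k_alias * x` is an integer multiple of 2*pi at every grid point, `max_err` is zero up to floating-point round-off, which is precisely the loss of spectral fidelity the abstract describes.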
Orbit Stability of OSIRIS-REx in the Vicinity of Bennu Using a High-Fidelity Solar Radiation Model
NASA Technical Reports Server (NTRS)
Williams, Trevor W.; Hughes, Kyle M.; Mashiku, Alinda K.; Longuski, James M.
2015-01-01
Solar radiation pressure is one of the largest perturbing forces on the OSIRIS-REx trajectory as it orbits the asteroid Bennu. In this work, we investigate how forces due to solar radiation perturb the OSIRIS-REx trajectory in a high-fidelity model. The model accounts for Bennu's non-spherical gravity field, third-body gravity forces from the Sun and Jupiter, as well as solar radiation forces acting on a simplified spacecraft model. Such high-fidelity simulations indicate significant solar radiation pressure perturbations from the nominal orbit. Modifications to the initial design of the nominal orbit, found using a variation-of-parameters approach, reduce the perturbation in eccentricity by a factor of one-half.
LaChausse, Robert G; Clark, Kim R; Chapple, Sabrina
2014-03-01
To examine how teacher characteristics affected program fidelity in an impact evaluation study of the Positive Prevention PLUS program, and to propose a comprehensive teacher training and professional development structure to increase program fidelity. Curriculum fidelity logs, lesson observations, and teacher surveys were used to measure teacher characteristics and implementation fidelity, including adherence, adaptation, and lesson quality. Compared with non-health-credentialed teachers, credentialed health education teachers had greater comfort and self-efficacy regarding sex-related instruction. Teacher self-efficacy and comfort were significant predictors of adherence. Implementation fidelity may be linked to teacher characteristics that can be enhanced during curriculum training. A 2-day teacher training may not adequately address teacher facilitation skills or the maintenance of institutional supports for implementing a program with fidelity and quality. A new model of comprehensive teacher training and support is offered. This new training infrastructure is intended to contribute to the school district's institutionalization of higher-quality comprehensive sexual health education and increase program fidelity.
Namiki, Ryo; Koashi, Masato; Imoto, Nobuyuki
2008-09-05
We generalize the experimental success criterion for quantum teleportation (memory) in continuous-variable quantum systems to be suitable for a non-unit-gain condition by considering attenuation (amplification) of the coherent-state amplitude. The new criterion can be used for a nonideal quantum memory and long distance quantum communication as well as quantum devices with amplification process. It is also shown that the framework to measure the average fidelity is capable of detecting all Gaussian channels in the quantum domain.
NASA Astrophysics Data System (ADS)
Fradeneck, Austen; Kimber, Mark
2017-11-01
The present study evaluates the effectiveness of current RANS and LES models in simulating natural convection in high-aspect-ratio parallel plate channels. The geometry under consideration is based on a simplification of the coolant and bypass channels in the very high-temperature gas reactor (VHTR). Two thermal conditions are considered, asymmetric and symmetric wall heating, with an applied heat flux chosen to match Rayleigh numbers experienced in the VHTR during a loss of flow accident (LOFA). RANS models are compared to analogous high-fidelity LES simulations. Preliminary results demonstrate the efficacy of the low-Reynolds-number k-ε formulations, and their improvement over the standard formulation and the Reynolds stress transport model, in calculating the turbulence production due to buoyancy and the overall mean flow variables.
Variable pixel size ionospheric tomography
NASA Astrophysics Data System (ADS)
Zheng, Dunyong; Zheng, Hongwei; Wang, Yanjun; Nie, Wenfeng; Li, Chaokui; Ao, Minsi; Hu, Wusheng; Zhou, Wei
2017-06-01
A novel ionospheric tomography technique based on variable pixel sizes was developed for tomographic reconstruction of the ionospheric electron density (IED) distribution. In the variable pixel size computerized ionospheric tomography (VPSCIT) model, the IED distribution is parameterized by a decomposition of the lower and upper ionosphere with different pixel sizes; the lower and upper IED distributions may therefore be determined quite differently by the available data. Variable and constant pixel size ionospheric tomography are similar in most other respects, with two main differences: the segments of the GPS signal path must be assigned to the appropriate kind of pixel in the inversion, and the smoothness constraint factor must be modified appropriately where pixels change in size. For a real dataset, the variable pixel size method distinguishes different electron density distribution zones better than the constant pixel size method, provided effort is spent identifying the regions of the model with the best data coverage. The variable pixel size method not only greatly improves the efficiency of the inversion but also produces IED images with fidelity comparable to that of a uniform pixel size method. In addition, variable pixel size tomography can reduce the underdetermination of this ill-posed inverse problem when data coverage is irregular or sparse, by adjusting the proportion of pixels of different sizes. In comparison with constant pixel size tomography models, the variable pixel size technique achieved relatively good results in a numerical simulation, and a careful validation of its reliability and superiority was performed.
Finally, according to the results of the statistical analysis and quantitative comparison, the proposed method offers an improvement of 8% compared with conventional constant pixel size tomography models in the forward modeling.
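One bookkeeping step highlighted above, assigning segments of the GPS signal path to pixels of different sizes, can be sketched in one dimension (a hypothetical simplification; the VPSCIT model works with full ray geometry through a 2D/3D grid):

```python
def path_lengths(boundaries, s0, s1):
    """Distribute the length of a signal segment [s0, s1] over pixels
    given by their (sorted, possibly unevenly spaced) boundary
    coordinates. Uneven spacing is what makes the pixels 'variable
    size'; each returned entry is that pixel's contribution to the
    corresponding row of the inversion matrix."""
    lengths = []
    for lo, hi in zip(boundaries[:-1], boundaries[1:]):
        overlap = max(0.0, min(s1, hi) - max(s0, lo))
        lengths.append(overlap)
    return lengths
```

The per-pixel lengths always sum to the segment length, so refining the lower-ionosphere pixels redistributes, rather than changes, the total path contribution.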
Uncertainty propagation of p-boxes using sparse polynomial chaos expansions
NASA Astrophysics Data System (ADS)
Schöbi, Roland; Sudret, Bruno
2017-06-01
In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
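A p-box is simply a pair of bounding CDFs for an imprecisely known random variable. As a minimal sketch (assuming a parametric p-box on a normal family with an interval-valued mean, which is an illustrative choice and not the authors' benchmark problems):

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def pbox_bounds(x, mu_lo, mu_hi, sigma):
    """Parametric p-box: envelope of normal CDFs whose mean lies in
    [mu_lo, mu_hi] (epistemic uncertainty) with fixed sigma (aleatory
    spread). The normal CDF decreases as mu increases, so the envelope
    is attained at the interval endpoints."""
    lower = normal_cdf(x, mu_hi, sigma)   # lower bounding CDF
    upper = normal_cdf(x, mu_lo, sigma)   # upper bounding CDF
    return lower, upper
```

Propagating such a box through an expensive model is what motivates the paper's two-level surrogate: the sparse polynomial chaos expansion stands in for the exact model when the bounds must be evaluated many times.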
A Supervisor-Targeted Implementation Approach to Promote System Change: The R3 Model.
Saldana, Lisa; Chamberlain, Patricia; Chapman, Jason
2016-11-01
Opportunities to evaluate strategies to create system-wide change in the child welfare system (CWS) and the resulting public health impact are rare. Leveraging a real-world, system-initiated effort to infuse the use of evidence-based principles throughout a CWS workforce, a pilot of the R3 model and supervisor-targeted implementation approach is described. The development of R3 and its associated fidelity monitoring was a collaboration between the CWS and model developers. Outcomes demonstrate implementation feasibility, strong fidelity scale measurement properties, improved supervisor fidelity over time, and the acceptability and perception of positive change by agency leadership. The value of system-initiated collaborations is discussed.
Bao, Yuhua; McGuire, Thomas G; Chan, Ya-Fen; Eggman, Ashley A; Ryan, Andrew M; Bruce, Martha L; Pincus, Harold Alan; Hafer, Erin; Unützer, Jürgen
2017-01-01
To assess the role of value-based payment (VBP) in improving fidelity and patient outcomes in community implementation of an evidence-based mental health intervention, the Collaborative Care Model (CCM). Retrospective study based on a natural experiment. We used the clinical tracking data of 1806 adult patients enrolled in a large implementation of the CCM in community health clinics in Washington state. VBP was initiated in year 2 of the program, creating a natural experiment. We compared implementation fidelity (measured by 3 process-of-care elements of the CCM) between patient-months exposed to VBP and patient-months not exposed to VBP. A series of regressions were estimated to check robustness of findings. We estimated a Cox proportional hazard model to assess the effect of VBP on time to achieving clinically significant improvement in depression (measured based on changes in depression symptom scores over time). Estimated marginal effects of VBP on fidelity ranged from 9% to 30% of the level of fidelity had there been no exposure to VBP (P <.05 for every fidelity measure). Improvement in fidelity in response to VBP was greater among providers with a larger patient panel and among providers with a lower level of fidelity at baseline. Exposure to VBP was associated with an adjusted hazard ratio of 1.45 (95% confidence interval, 1.04-2.03) for achieving clinically significant improvement in depression. VBP improved fidelity to key elements of the CCM, both directly incentivized and not explicitly incentivized by the VBP, and improved patient depression outcomes.
The role of treatment fidelity on outcomes during a randomized field trial of an autism intervention
Mandell, David S; Stahmer, Aubyn C; Shin, Sujie; Xie, Ming; Reisinger, Erica; Marcus, Steven C
2013-01-01
This randomized field trial comparing Strategies for Teaching based on Autism Research and Structured Teaching enrolled educators in 33 kindergarten-through-second-grade autism support classrooms and 119 students, aged 5–8 years in the School District of Philadelphia. Students were assessed at the beginning and end of the academic year using the Differential Ability Scales. Program fidelity was measured through video coding and use of a checklist. Outcomes were assessed using linear regression with random effects for classroom and student. Average fidelity was 57% in Strategies for Teaching based on Autism Research classrooms and 48% in Structured Teaching classrooms. There was a 9.2-point (standard deviation = 9.6) increase in Differential Ability Scales score over the 8-month study period, but no main effect of program. There was a significant interaction between fidelity and group. In classrooms with either low or high program fidelity, students in Strategies for Teaching based on Autism Research experienced a greater gain in Differential Ability Scales score than students in Structured Teaching (11.2 vs 5.5 points and 11.3 vs 8.9 points, respectively). In classrooms with moderate fidelity, students in Structured Teaching experienced a greater gain than students in Strategies for Teaching based on Autism Research (10.1 vs 4.4 points). The results suggest significant variability in implementation of evidence-based practices, even with supports, and also suggest the need to address challenging issues related to implementation measurement in community settings. PMID:23592849
NASA Astrophysics Data System (ADS)
Langenbrunner, B.; Neelin, J.; Meyerson, J.
2011-12-01
The accurate representation of precipitation is a recurring issue in global climate models, especially in the tropics. Poor skill in modeling the variability and climate teleconnections associated with El Niño/Southern Oscillation (ENSO) also persisted in the latest Coupled Model Intercomparison Project (CMIP) campaigns. Observed ENSO precipitation teleconnections provide a standard by which we can judge a given model's ability to reproduce precipitation and dynamic feedback processes originating in the tropical Pacific. Using CMIP3 Atmospheric Model Intercomparison Project (AMIP) runs as a baseline, we compare precipitation teleconnections between models and observations, and we evaluate these results against available CMIP5 historical and AMIP runs. Using AMIP simulations restricts evaluation to the atmospheric response, as sea surface temperatures (SSTs) in AMIP are prescribed from observations. We use a rank correlation between ENSO SST indices and precipitation to define teleconnections, since this method is robust to outliers and appropriate for non-Gaussian data. Spatial correlations of the modeled and observed teleconnections are then evaluated. We look at these correlations in regions of strong precipitation teleconnections, including equatorial S. America, the "horseshoe" region in the western tropical Pacific, and southern N. America. For each region and season, we create a "normalized projection" of a given model's teleconnection pattern onto that of the observations, a metric that assesses the quality of regional pattern simulations while rewarding signals of correct sign over the region. Comparing this to an area-averaged (i.e., more generous) metric suggests models do better when restrictions on exact spatial dependence are loosened and conservation constraints apply. Model fidelity in regional measures remains far from perfect, suggesting intrinsic issues with the models' regional sensitivities in moist processes.
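The rank correlation used to define teleconnections is Spearman's coefficient, i.e. the Pearson correlation of the ranks, which is what makes it robust to outliers and monotone transformations. A minimal tie-free sketch (hypothetical data, not the authors' CMIP processing):

```python
def ranks(xs):
    """1-based ranks, assuming no ties (a simplifying assumption;
    tied data would need average ranks)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Any strictly monotone relation between an SST index and precipitation yields a coefficient of exactly +1 or -1, regardless of how non-Gaussian the precipitation distribution is.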
Ground-state fidelity and bipartite entanglement in the Bose-Hubbard model.
Buonsante, P; Vezzani, A
2007-03-16
We analyze the quantum phase transition in the Bose-Hubbard model borrowing two tools from quantum-information theory, i.e., the ground-state fidelity and entanglement measures. We consider systems at unit filling comprising up to 50 sites and show for the first time that a finite-size scaling analysis of these quantities provides excellent estimates for the quantum critical point. We conclude that fidelity is particularly suited for revealing a quantum phase transition and pinning down the critical point thereof, while the success of entanglement measures depends on the mechanisms governing the transition.
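Ground-state fidelity is just the overlap |&lt;psi0(lambda)|psi0(lambda+delta)&gt;| between ground states at neighboring values of the driving parameter. A toy two-level illustration (an assumed 2x2 Hamiltonian chosen for tractability, not the Bose-Hubbard model of the paper) shows the characteristic dip near an avoided crossing:

```python
import math

def ground_state(lam):
    """Normalized ground state of the toy 2x2 Hamiltonian
    H = [[-lam, -1], [-1, lam]], whose avoided crossing sits at lam = 0."""
    e0 = -math.sqrt(1 + lam * lam)          # lowest eigenvalue
    # Solve the first row of (H - e0*I) v = 0: (-lam - e0) v0 = v1
    v = [1.0, -lam - e0]
    norm = math.hypot(*v)
    return [c / norm for c in v]

def fidelity(lam, delta):
    """Ground-state fidelity F = |<psi0(lam) | psi0(lam + delta)>|."""
    a, b = ground_state(lam), ground_state(lam + delta)
    return abs(a[0] * b[0] + a[1] * b[1])
```

The fidelity drops furthest where the ground state changes fastest with the parameter, which is the drop that the finite-size scaling analysis of the paper tracks toward the critical point.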
NASA Astrophysics Data System (ADS)
Hasson, Shabeh ul; Böhner, Jürgen; Chishtie, Farrukh
2018-03-01
Assessment of future water availability from the Himalayan watersheds of the Indus Basin (Jhelum, Kabul and upper Indus basin, UIB) is a growing concern for safeguarding sustainable socioeconomic wellbeing downstream. This requires, above all, robust climate change information from the present-day state-of-the-art climate models. However, the robustness of climate change projections depends heavily on the fidelity of the climate modeling experiments. Hence, this study assesses the fidelity of seven dynamically refined (0.44°) experiments performed under the framework of the coordinated regional climate downscaling experiment for South Asia (CX-SA) and, additionally, their six coarse-resolution driving datasets participating in the coupled model intercomparison project phase 5 (CMIP5). We assess fidelity in terms of reproducibility of the observed climatology of temperature and precipitation, and the seasonality of the latter, for the historical period (1971-2005). Based on the model fidelity results, we further assess the robustness, or uncertainty, of the far-future climate (2061-2095) as projected under the extreme-end warming scenario of representative concentration pathway (RCP) 8.5. Our results show that the CX-SA and their driving CMIP5 experiments consistently feature low fidelity in terms of the chosen skill metrics, exhibiting substantial cold (6-10 °C) and wet (up to 80%) biases and underestimating the observed precipitation seasonality. Surprisingly, the CX-SA are unable to outperform their driving datasets. Further, the biases of the CX-SA and of their driving CMIP5 datasets are larger in magnitude than their projected changes under RCP8.5, and hence under less extreme RCPs, by the end of the 21st century, indicating uncertain future climates for the Indus Basin watersheds.
Higher inter-dataset disagreement among both the CMIP5 and CX-SA experiments, in their simulated historical precipitation and in its projected changes, reinforces uncertain future wet/dry conditions, whereas the CMIP5-projected warming is less robust owing to higher historical-period uncertainty. Interestingly, better agreement among those CX-SA experiments obtained by downscaling different CMIP5 experiments with the same regional climate model (RCM) indicates the RCMs' ability to modulate the influence of lateral boundary conditions over a large domain. These findings, instead of suggesting the usual skill-based identification of 'reasonable' experiments among low-fidelity global or regional ones, emphasize a paradigm shift towards improving fidelity by exploiting the potential of meso-to-local-scale climate models, preferably those that can solely resolve global-to-local-scale climatic processes, in terms of microphysics, resolution and explicitly resolved convection. Additionally, extensive monitoring of the nival regime within the Himalayan watersheds will reduce observational uncertainty, allowing a more robust fidelity assessment of the climate modeling experiments.
ERIC Educational Resources Information Center
Mincic, Melissa; Smith, Barbara J.; Strain, Phil
2009-01-01
Implementing the Pyramid Model with fidelity and achieving positive outcomes for children and their families requires that administrators understand their roles in the implementation process. Every administrative decision impacts program quality and sustainability. This Policy Brief underscores the importance of facilitative administrative…
Framework for Multidisciplinary Analysis, Design, and Optimization with High-Fidelity Analysis Tools
NASA Technical Reports Server (NTRS)
Orr, Stanley A.; Narducci, Robert P.
2009-01-01
A plan is presented for the development of a high fidelity multidisciplinary optimization process for rotorcraft. The plan formulates individual disciplinary design problems, identifies practical high-fidelity tools and processes that can be incorporated in an automated optimization environment, and establishes statements of the multidisciplinary design problem including objectives, constraints, design variables, and cross-disciplinary dependencies. Five key disciplinary areas are selected in the development plan. These are rotor aerodynamics, rotor structures and dynamics, fuselage aerodynamics, fuselage structures, and the propulsion/drive system. Flying qualities and noise are included as ancillary areas. Consistency across engineering disciplines is maintained with a central geometry engine that supports all multidisciplinary analysis. The multidisciplinary optimization process targets the preliminary design cycle, where gross elements of the helicopter have been defined. These might include the number of rotors and rotor configuration (tandem, coaxial, etc.). It is at this stage that sufficient configuration information is defined to perform high-fidelity analysis, while there is still enough design freedom to influence the design. The rotorcraft multidisciplinary optimization tool is built and substantiated throughout its development cycle in a staged approach by incorporating disciplines sequentially.
Quasi steady-state aerodynamic model development for race vehicle simulations
NASA Astrophysics Data System (ADS)
Mohrfeld-Halterman, J. A.; Uddin, M.
2016-01-01
Presented in this paper is a procedure to develop a high fidelity quasi steady-state aerodynamic model for use in race car vehicle dynamic simulations. Developed to fit quasi steady-state wind tunnel data, the aerodynamic model is regressed against three independent variables: front ground clearance, rear ride height, and yaw angle. An initial dual range model is presented and then further refined to reduce the model complexity while maintaining a high level of predictive accuracy. The model complexity reduction decreases the required amount of wind tunnel data thereby reducing wind tunnel testing time and cost. The quasi steady-state aerodynamic model for the pitch moment degree of freedom is systematically developed in this paper. This same procedure can be extended to the other five aerodynamic degrees of freedom to develop a complete six degree of freedom quasi steady-state aerodynamic model for any vehicle.
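The regression described above can be sketched as an ordinary least-squares fit of a polynomial response surface in the three stated independent variables. The quadratic basis, variable ranges, and synthetic data below are illustrative assumptions, not the paper's actual model or wind tunnel data.

```python
import numpy as np

def fit_aero_model(h_front, h_rear, yaw, cm, degree=2):
    """Least-squares fit of a polynomial response surface for the
    pitch-moment coefficient against the three regressors named in
    the abstract (front ground clearance, rear ride height, yaw).
    The quadratic basis here is an illustrative assumption."""
    # Design matrix: constant, linear, then quadratic and cross terms
    cols = [np.ones_like(h_front), h_front, h_rear, yaw]
    if degree >= 2:
        cols += [h_front**2, h_rear**2, yaw**2,
                 h_front * h_rear, h_front * yaw, h_rear * yaw]
    X = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(X, cm, rcond=None)
    return coeffs, X

# Synthetic stand-in for quasi steady-state wind tunnel data
rng = np.random.default_rng(0)
hf = rng.uniform(20, 60, 200)   # front ground clearance [mm]
hr = rng.uniform(40, 90, 200)   # rear ride height [mm]
yw = rng.uniform(-5, 5, 200)    # yaw angle [deg]
cm = 0.1 - 0.002 * hf + 0.001 * hr + 0.01 * yw + rng.normal(0, 1e-4, 200)

coeffs, X = fit_aero_model(hf, hr, yw, cm)
resid = cm - X @ coeffs
```

Reducing model complexity, as the paper does, then amounts to pruning basis terms whose coefficients contribute little predictive accuracy, which directly cuts the amount of wind tunnel data required.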
Multi-fidelity Gaussian process regression for prediction of random fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.
We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
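The recursive co-kriging idea underlying the method can be illustrated with a minimal two-fidelity sketch: fit a GP to many cheap low-fidelity samples, then train a second GP on a few expensive samples with the low-fidelity prediction appended as an extra input. The toy functions and scikit-learn usage are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy low/high-fidelity pair (illustrative, not from the paper)
def f_lo(x): return np.sin(8 * x)
def f_hi(x): return np.sin(8 * x) + 0.2 * x

x_lo = np.linspace(0, 1, 40)[:, None]   # cheap model: many samples
x_hi = np.linspace(0, 1, 8)[:, None]    # expensive model: few samples

# Step 1: GP on the low-fidelity data alone
gp_lo = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-8)
gp_lo.fit(x_lo, f_lo(x_lo).ravel())

# Step 2: GP on the high-fidelity data, with the low-fidelity
# prediction appended as an extra input (recursive co-kriging idea)
z_hi = np.hstack([x_hi, gp_lo.predict(x_hi)[:, None]])
gp_hi = GaussianProcessRegressor(kernel=RBF([0.1, 1.0]), alpha=1e-8)
gp_hi.fit(z_hi, f_hi(x_hi).ravel())

# Predict at new points by chaining the two surrogates
x_new = np.linspace(0, 1, 100)[:, None]
z_new = np.hstack([x_new, gp_lo.predict(x_new)[:, None]])
y_pred = gp_hi.predict(z_new)
```

Because the high-fidelity GP only has to learn the (often simple) discrepancy structure between fidelities, a handful of expensive samples can suffice.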
An empirical study of multidimensional fidelity of COMPASS consultation.
Wong, Venus; Ruble, Lisa A; McGrew, John H; Yu, Yue
2018-06-01
Consultation is essential to the daily practice of school psychologists (National Association of School Psychologists, 2010). Successful consultation requires fidelity at both the consultant (implementation) and consultee (intervention) levels. We applied a multidimensional, multilevel conception of fidelity (Dunst, Trivette, & Raab, 2013) to a consultative intervention called the Collaborative Model for Promoting Competence and Success (COMPASS) for students with autism. The study provided three main findings. First, multidimensional, multilevel fidelity is a stable construct and increases over time with consultation support. Second, mediation analyses revealed that implementation-level fidelity components had distant, indirect effects on student Individualized Education Program (IEP) outcomes. Third, three fidelity components correlated with IEP outcomes: teacher coaching responsiveness at the implementation level, and teacher quality of delivery and student responsiveness at the intervention level. Implications and future directions are discussed.
Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model
NASA Technical Reports Server (NTRS)
Rizvi, Farheen
2016-01-01
Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller, and actuator, and assumes perfect sensor and estimator models. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate so that the results are similar to CAST. The signal generation model has the same characteristics (mean, variance, and power spectral density) as the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling of the CAST software.
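A generic version of such a signal generation model, shaping white noise so its power spectral density follows a target while keeping zero mean, can be sketched as follows; the sampling rate and low-pass PSD shape are illustrative assumptions, not SMAP values.

```python
import numpy as np

def shaped_noise(n, psd_target, rng):
    """Generate a zero-mean noise signal whose power spectral density
    follows psd_target (one value per rfft bin, length n//2 + 1).
    A generic sketch of a 'signal generation model' that matches the
    mean, variance, and PSD of an observed error signal; not the
    actual SMAP ground software."""
    white = rng.normal(size=n)
    spectrum = np.fft.rfft(white)
    spectrum *= np.sqrt(psd_target)      # impose the desired PSD shape
    sig = np.fft.irfft(spectrum, n)
    return sig - sig.mean()              # enforce zero mean

rng = np.random.default_rng(1)
n = 4096
freqs = np.fft.rfftfreq(n, d=0.1)        # 10 Hz sampling, illustrative
psd = 1.0 / (1.0 + (freqs / 0.5)**2)     # low-pass "estimation error" shape
err = shaped_noise(n, psd, rng)
```

The synthetic `err` series can then be injected into the lower-fidelity simulation in place of the unavailable high-fidelity estimation error.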
Bhargava, Ayush; Bertrand, Jeffrey W; Gramopadhye, Anand K; Madathil, Kapil C; Babu, Sabarish V
2018-04-01
With costs of head-mounted displays (HMDs) and tracking technology decreasing rapidly, various virtual reality applications are being widely adopted for education and training. Hardware advancements have enabled replication of real-world interactions in virtual environments to a large extent, paving the way for commercial-grade applications that provide a safe and risk-free training environment at a fraction of the cost. But this also mandates the need to develop more intrinsic interaction techniques and to empirically evaluate them in a more comprehensive manner. Although there exists a body of previous research that examines the benefits of selected levels of interaction fidelity on performance, few studies have investigated the constituent components of fidelity on an Interaction Fidelity Continuum (IFC) with several system instances and their respective effects on performance and learning in the context of a real-world skills training application. Our work describes a large between-subjects investigation conducted over several years that utilizes bimanual interaction metaphors at six discrete levels of interaction fidelity to teach basic precision metrology concepts in a near-field spatial interaction task in VR. A combined analysis performed on the data compares and contrasts the six different conditions and their overall effects on performance and learning outcomes, eliciting patterns in the results between the discrete application points on the IFC. With respect to some performance variables, results indicate that simpler restrictive interaction metaphors and the highest fidelity metaphors perform better than medium fidelity interaction metaphors. In light of these results, a set of general guidelines is created for developers of spatial interaction metaphors in immersive virtual environments for precise fine-motor skills training simulations.
Requirements for Large Eddy Simulation Computations of Variable-Speed Power Turbine Flows
NASA Technical Reports Server (NTRS)
Ameri, Ali A.
2016-01-01
Variable-speed power turbines (VSPTs) operate at low Reynolds numbers and with a wide range of incidence angles. Transition, separation, and the relevant physics leading to them are important to VSPT flow. Higher fidelity tools such as large eddy simulation (LES) may be needed to resolve the flow features necessary for accurate predictive capability and design of such turbines. A survey conducted for this report explores the requirements for such computations. The survey is limited to the simulation of two-dimensional flow cases, and endwalls are not included. It suggests that the grid resolution necessary for this type of simulation to accurately represent the physics may be of the order of Δx+ = 45, Δy+ = 2, and Δz+ = 17. Various subgrid-scale (SGS) models have been used and, except for the Smagorinsky model, all seem to perform well; in some instances the simulations worked well without SGS modeling. A method of specifying the inlet conditions, such as synthetic eddy modeling (SEM), is necessary to correctly represent the inlet conditions.
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction.
The snapshot of the candidate population is updated iteratively using the evolutionary-algorithm technique of fitness-driven retention. This strategy capitalizes on the advantages of the evolutionary algorithm as well as POD-based reduced-order modeling, while overcoming the shortcomings inherent in these techniques. When linked with M3DOE, this strategy offers a computationally efficient methodology for problems with a high level of complexity and a challenging design space. The newly developed framework is demonstrated for its robustness on a nonconventional supersonic tailless air vehicle wing shape optimization problem.
Crystal plasticity modeling of β phase deformation in Ti-6Al-4V
NASA Astrophysics Data System (ADS)
Moore, John A.; Barton, Nathan R.; Florando, Jeff; Mulay, Rupalee; Kumar, Mukul
2017-10-01
Ti-6Al-4V is an alloy of titanium that dominates titanium usage in applications ranging from mass-produced consumer goods to high-end aerospace parts. The material’s structure on a microscale is known to affect its mechanical properties but these effects are not fully understood. Specifically, this work will address the effects of low volume fraction intergranular β phase on Ti-6Al-4V’s mechanical response during the transition from elastic to plastic deformation. A crystal plasticity-based finite element model is used to fully resolve the deformation of the β phase for the first time. This high fidelity model captures mechanisms difficult to access via experiments or lower fidelity models. The results are used to assess lower fidelity modeling assumptions and identify phenomena that have ramifications for failure of the material.
A New Design for Airway Management Training with Mixed Reality and High Fidelity Modeling.
Shen, Yunhe; Hananel, David; Zhao, Zichen; Burke, Daniel; Ballas, Crist; Norfleet, Jack; Reihsen, Troy; Sweet, Robert
2016-01-01
Restoring airway function is a vital task in many medical scenarios. Although various simulation tools have been available for learning such skills, recent research indicated that fidelity in simulating airway management deserves further improvement. In this study, we designed and implemented a new prototype for practicing relevant tasks including laryngoscopy, intubation and cricothyrotomy. A large number of anatomical details and landmarks were meticulously selected and reconstructed from medical scans, and 3D-printed or molded into the airway intervention model. This training model was augmented by virtually and physically presented interactive modules, which are interoperable with motion tracking and sensor data feedback. Implementation results showed that this design is a feasible approach to developing higher fidelity airway models that can be integrated with mixed reality interfaces.
Sources of variation in breeding-ground fidelity of mallards (Anas platyrhynchos)
Doherty, P.F.; Nichols, J.D.; Tautin, J.; Voelzer, J.E.; Smith, G.W.; Benning, D.S.; Bentley, V.R.; Bidwell, J.K.; Bollinger, K.S.; Brazda, A.R.; Buelna, E.K.; Goldsberry, J.R.; King, R.J.; Roetker, F.H.; Solberg, J.W.; Thorpe, P.P.; Wortham, J.S.
2002-01-01
Generalizations used to support hypotheses about the evolution of fidelity to breeding areas in birds include the tendency for fidelity to be greater in adult birds than in yearlings. In ducks, in contrast to most bird species, fidelity is thought to be greater among females than males. Researchers have suggested that fidelity in ducks is positively correlated with pond availability. However, most estimates of fidelity on which these inferences have been based represent functions of survival and recapture-resighting probabilities in addition to fidelity. We applied the modeling approach developed by Burnham to recapture and band recovery data of mallard ducks to test the above hypotheses about fidelity. We found little evidence of sex differences in adult philopatry, with females being slightly more philopatric than males in one study area, but not in a second study area. However, yearling females were more philopatric than yearling males in both study areas. We found that adults were generally more philopatric than yearlings. We could find no relationship between fidelity and pond availability. Our results, while partially supporting current theory concerning sex and age differences in philopatry, suggest that adult male mallards are more philopatric than once thought, and we recommend that other generalizations about philopatry be revisited with proper estimation techniques.
Modeling of Passive Acoustic Liners from High Fidelity Numerical Simulations
NASA Astrophysics Data System (ADS)
Ferrari, Marcello do Areal Souto
Noise reduction in aviation has been an important focus of study in the last few decades. One common solution is setting up acoustic liners in the internal walls of the engines. However, laboratory measurements with liners are expensive and time consuming. The present work proposes a nonlinear physics-based time domain model to predict the acoustic behavior of a given liner in a defined flow condition. The parameters of the model are defined by analysis of accurate numerical solutions of the flow obtained from a high-fidelity numerical code. The length of the cavity is taken into account through an analytical procedure that models internal reflections within the cavity. Vortices and jets originating from internal flow separations are confirmed to be important mechanisms of sound absorption, which define the overall efficiency of the liner. Numerical simulations at different frequencies, geometries, and sound pressure levels are studied in detail to define the model parameters. Comparisons with high-fidelity numerical simulations show that the proposed model is accurate, robust, and can be used to define a boundary condition simulating a liner in a high-fidelity code.
CTF (Subchannel) Calculations and Validation L3:VVI.H2L.P15.01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, Natalie
The goal of the Verification and Validation Implementation (VVI) High to Low (Hi2Lo) process is utilizing a validated model in a high resolution code to generate synthetic data for improvement of the same model in a lower resolution code. This process is useful in circumstances where experimental data do not exist or are not sufficient in quantity or resolution. Data from the high-fidelity code are treated as calibration data (with appropriate uncertainties and error bounds) which can be used to train parameters that affect solution accuracy in the lower-fidelity code model, thereby reducing uncertainty. This milestone presents a demonstration of the Hi2Lo process derived in the VVI focus area. The majority of the work performed herein describes the steps of the low-fidelity code used in the process, with references to the work detailed in the companion high-fidelity code milestone (Reference 1). The CASL low-fidelity code used to perform this work was Cobra Thermal Fluid (CTF) and the high-fidelity code was STAR-CCM+ (STAR). The master branch version of CTF (pulled May 5, 2017 – Reference 2) was utilized for all CTF analyses performed as part of this milestone. The statistical and VVUQ components of the Hi2Lo framework were performed using Dakota version 6.6 (release date May 15, 2017 – Reference 3). Experimental data from Westinghouse Electric Company (WEC – Reference 4) were used throughout the demonstrated process to compare with the high-fidelity STAR results. A CTF parameter called Beta was chosen as the calibration parameter for this work. By default, Beta is defined as a constant mixing coefficient in CTF and is essentially a tuning parameter for mixing between subchannels. Since CTF does not have turbulence models like STAR, Beta is the parameter that performs the function most similar to the turbulence models in STAR.
The purpose of the work performed in this milestone is to tune Beta to an optimal value that brings the CTF results closer to those measured in the WEC experiments.
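The calibration step can be caricatured as a one-parameter least-squares fit: adjust a mixing coefficient until a simple low-fidelity response matches a high-fidelity target. The toy mixing model, target value, and use of scipy in place of Dakota are illustrative assumptions, not the actual CTF/STAR workflow.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def low_fi_exit_temp_diff(beta, dT0=30.0, length=20):
    """Hypothetical low-fidelity response: the temperature difference
    between two subchannels eroded by turbulent mixing governed by a
    single coefficient beta (a stand-in for CTF's Beta; the real
    subchannel physics is far more involved)."""
    dT = dT0
    for _ in range(length):          # axial marching with mixing
        dT -= beta * dT              # mixing reduces channel-to-channel dT
    return dT

# "Synthetic data" playing the role of the high-fidelity (STAR) result
target = 5.0  # K, illustrative target exit temperature difference

# One-parameter least-squares calibration of beta against the target
res = minimize_scalar(lambda b: (low_fi_exit_temp_diff(b) - target)**2,
                      bounds=(0.0, 0.2), method="bounded")
beta_opt = res.x
```

In the milestone itself this role is played by Dakota's calibration machinery, with uncertainty bounds propagated from the high-fidelity data rather than a single deterministic target.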
NASA Astrophysics Data System (ADS)
Dai, Yan-Wei; Hu, Bing-Quan; Zhao, Jian-Hui; Zhou, Huan-Qiang
2010-09-01
The ground-state fidelity per lattice site is computed for the quantum three-state Potts model in a transverse magnetic field on an infinite-size lattice in one spatial dimension in terms of the infinite matrix product state algorithm. It is found that, on the one hand, a pinch point is identified on the fidelity surface around the critical point, and on the other hand, the ground-state fidelity per lattice site exhibits bifurcations at pseudo critical points for different values of the truncation dimension, which in turn approach the critical point as the truncation dimension becomes large. This implies that the ground-state fidelity per lattice site enables us to capture spontaneous symmetry breaking when the control parameter crosses the critical value. In addition, a finite-entanglement scaling of the von Neumann entropy is performed with respect to the truncation dimension, resulting in a precise determination of the central charge at the critical point. Finally, we compute the transverse magnetization, from which the critical exponent β is extracted from the numerical data.
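For reference, the central quantity can be stated compactly. For ground states $|\psi(\lambda)\rangle$ at two values of the control parameter, the fidelity and the fidelity per lattice site $d$ are conventionally defined as follows (standard definitions, paraphrased rather than quoted from the paper):

```latex
F(\lambda_1, \lambda_2) = \bigl| \langle \psi(\lambda_1) \mid \psi(\lambda_2) \rangle \bigr|,
\qquad
\ln d(\lambda_1, \lambda_2) = \lim_{N \to \infty} \frac{\ln F(\lambda_1, \lambda_2)}{N}.
```

Unlike $F$, which decays exponentially with system size $N$, $d$ remains well defined in the thermodynamic limit; pinch points and bifurcations of the surface $d(\lambda_1, \lambda_2)$ are the signatures of criticality discussed above.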
NASA Technical Reports Server (NTRS)
Collatz, G. James; Kawa, R.
2007-01-01
Progress in better determining CO2 sources and sinks will almost certainly rely on utilization of more extensive and intensive CO2 and related observations including those from satellite remote sensing. Use of advanced data requires improved modeling and analysis capability. Under NASA Carbon Cycle Science support we seek to develop and integrate improved formulations for 1) atmospheric transport, 2) terrestrial uptake and release, 3) biomass and 4) fossil fuel burning, and 5) observational data analysis including inverse calculations. The transport modeling is based on meteorological data assimilation analysis from the Goddard Modeling and Assimilation Office. Use of assimilated met data enables model comparison to CO2 and other observations across a wide range of scales of variability. In this presentation we focus on the short end of the temporal variability spectrum: hourly to synoptic to seasonal. Using CO2 fluxes at varying temporal resolution from the SIB 2 and CASA biosphere models, we examine the model's ability to simulate CO2 variability in comparison to observations at different times, locations, and altitudes. We find that the model can resolve much of the variability in the observations, although there are limits imposed by vertical resolution of boundary layer processes. The influence of key process representations is inferred. The high degree of fidelity in these simulations leads us to anticipate incorporation of real-time, highly resolved observations into a multiscale carbon cycle analysis system that will begin to bridge the gap between top-down and bottom-up flux estimation, which is a primary focus of NACP.
Teacher Fidelity to a Physical Education Curricular Model and Physical Activity Outcomes
ERIC Educational Resources Information Center
Stylianou, Michalis; Kloeppel, Tiffany; Kulinna, Pamela; van der Mars, Han
2016-01-01
Background: This study was informed by the bodies of literature emphasizing the role of physical education in promoting physical activity (PA) and addressing teacher fidelity to curricular models. Purpose: The purpose of this study was to compare student PA levels, lesson context, and teacher PA promotion behavior among classes where teachers were…
Injury representation against ballistic threats using three novel numerical models.
Breeze, Johno; Fryer, R; Pope, D; Clasper, J
2017-06-01
Injury modelling of ballistic threats is a valuable tool for informing policy on personal protective equipment and other injury mitigation methods. Currently, the Ministry of Defence (MoD) and Centre for the Protection of National Infrastructure (CPNI) are focusing on the development of three interlinking numerical models, each of a different fidelity, to answer specific questions on current threats. High-fidelity models simulate the physical events most realistically and will be used in the future to test the medical effectiveness of personal armour systems. They are, however, generally computationally intensive and slow running, and much of the experimental data on which to base their algorithms does not yet exist. Medium-fidelity models, such as the personnel vulnerability simulation (PVS), generally use algorithms based on physical or engineering estimations of interaction. This enables a reasonable representation of reality and greatly speeds up runtime, allowing full assessments of the entire body area to be undertaken. Low-fidelity models such as the human injury predictor (HIP) tool generally use simplistic algorithms to make injury predictions. Individual scenarios can be run very quickly, enabling statistical casualty assessments of large groups where significant uncertainty concerning the threat and affected population exists. HIP is used to simulate the blast and penetrative fragmentation effects of a terrorist detonation of an improvised explosive device within crowds of people in metropolitan environments. This paper describes the collaboration between the MoD and CPNI using examples of all three fidelities of injury model and highlights future areas of research that are required.
A modular approach to large-scale design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. 
This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
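The scaling argument behind gradient-based optimization with the adjoint method can be summarized in the standard discrete-adjoint relations, written here in generic notation rather than copied from the thesis: with state variables $y$ satisfying the residual equations $R(x, y) = 0$, an objective $f(x, y)$ has total derivative

```latex
\left( \frac{\partial R}{\partial y} \right)^{T} \psi = \left( \frac{\partial f}{\partial y} \right)^{T},
\qquad
\frac{\mathrm{d} f}{\mathrm{d} x} = \frac{\partial f}{\partial x} - \psi^{T} \frac{\partial R}{\partial x}.
```

One linear solve for the adjoint vector $\psi$ yields the gradient with respect to all design variables $x$ at once, which is why problems with tens of thousands of design variables remain tractable.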
Hoekstra, Femke; van Offenbeek, Marjolein A G; Dekker, Rienk; Hettinga, Florentina J; Hoekstra, Trynke; van der Woude, Lucas H V; van der Schans, Cees P
2017-12-01
Although the importance of evaluating implementation fidelity is acknowledged, little is known about heterogeneity in fidelity over time. This study aims to generate insight into the heterogeneity in implementation fidelity trajectories of a health promotion program in multidisciplinary settings and the relationship with changes in patients' health behavior. This study used longitudinal data from the nationwide implementation of an evidence-informed physical activity promotion program in Dutch rehabilitation care. Fidelity scores were calculated based on annual surveys filled in by involved professionals (n ≈ 70). Higher fidelity scores indicate a more complete implementation of the program's core components. A hierarchical cluster analysis was conducted on the implementation fidelity scores of 17 organizations at three different time points. Quantitative and qualitative data were used to explore organizational and professional differences between identified trajectories. Regression analyses were conducted to determine differences in patient outcomes. Three trajectories were identified: 'stable high fidelity' (n = 9), 'moderate and improving fidelity' (n = 6), and 'unstable fidelity' (n = 2). The stable high fidelity organizations were generally smaller, started earlier, and implemented the program in a more structured way compared to the moderate and improving fidelity organizations. At the start and end of the implementation period, support from physicians and physiotherapists, professionals' appreciation, and program compatibility were rated more positively by professionals working in stable high fidelity organizations as compared to the moderate and improving fidelity organizations (p < .05). Qualitative data showed that the stable high fidelity organizations often had an explicit vision and strategy for the implementation of the program.
Intriguingly, the trajectories were not associated with patients' self-reported physical activity outcomes (adjusted model β = -651.6, t(613) = -1.032, p = .303). Differences in organizational-level implementation fidelity trajectories did not result in outcome differences at the patient level. This suggests that an effective implementation fidelity trajectory is contingent on the local organization's conditions. More specifically, achieving stable high implementation fidelity required the management of tensions: realizing a localized change vision, while safeguarding the program's standardized core components and engaging the scarce physicians throughout the process. When scaling up evidence-informed health promotion programs, we propose to tailor the management of implementation tensions to local organizations' starting position, size, and circumstances. The Netherlands National Trial Register NTR3961. Registered 18 April 2013.
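The trajectory-identification step, hierarchical clustering of per-organization fidelity scores over three time points, can be sketched as follows; the Ward linkage choice and score values are illustrative stand-ins, not the study's data or exact method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic fidelity scores for 17 organizations at three time points,
# loosely mimicking the three trajectory types named in the abstract
rng = np.random.default_rng(2)
stable_high = rng.uniform(0.8, 1.0, size=(9, 3))
improving = np.sort(rng.uniform(0.4, 0.8, size=(6, 3)), axis=1)
unstable = rng.uniform(0.2, 0.9, size=(2, 3))
scores = np.vstack([stable_high, improving, unstable])

# Hierarchical clustering on the trajectories, cut into three groups
Z = linkage(scores, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
```

Each row is treated as a trajectory vector, so organizations with similar fidelity levels and similar change over time land in the same cluster.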
Role of the North Atlantic Ocean in Low Frequency Climate Variability
NASA Astrophysics Data System (ADS)
Danabasoglu, G.; Yeager, S. G.; Kim, W. M.; Castruccio, F. S.
2017-12-01
The Atlantic Ocean is a unique basin with its extensive north-south overturning circulation, referred to as the Atlantic meridional overturning circulation (AMOC). AMOC is thought to represent the dynamical memory of the climate system, playing an important role in decadal and longer time scale climate variability as well as prediction of the earth's future climate on these time scales via its large heat and salt transports. This oceanic memory is communicated to the atmosphere primarily through the influence of persistent sea surface temperature (SST) variations. Indeed, many modeling studies suggest that ocean circulation, i.e., AMOC, is largely responsible for the creation of coherent SST variability in the North Atlantic, referred to as Atlantic Multidecadal Variability (AMV). AMV has been linked to many (multi)decadal climate variations in, e.g., Sahel and Brazilian rainfall, Atlantic hurricane activity, and Arctic sea-ice extent. In the absence of long, continuous observations, much of the evidence for the ocean's role in (multi)decadal variability comes from model simulations. Although models tend to agree on the role of the North Atlantic Oscillation in creating the density anomalies that precede the changes in ocean circulation, model fidelity in representing variability characteristics, mechanisms, and air-sea interactions remains a serious concern. In particular, there is increasing evidence that models significantly underestimate low frequency variability in the North Atlantic compared to available observations. Such model deficiencies can amplify the relative influence of external or stochastic atmospheric forcing in generating (multi)decadal variability, i.e., AMV, at the expense of ocean dynamics. Here, a succinct overview of the current understanding of the (North) Atlantic Ocean's role on the regional and global climate, including some outstanding questions, will be presented.
In addition, a few examples of the climate impacts of the AMV via atmospheric teleconnections from a set of coupled simulations, also considering the relative roles of its tropical and extratropical components, will be highlighted.
Crystal plasticity modeling of β phase deformation in Ti-6Al-4V
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, John A.; Barton, Nathan R.; Florando, Jeff
Ti-6Al-4V is an alloy of titanium that dominates titanium usage in applications ranging from mass-produced consumer goods to high-end aerospace parts. The material's structure on a microscale is known to affect its mechanical properties, but these effects are not fully understood. Specifically, this work addresses the effects of low volume fraction intergranular β phase on Ti-6Al-4V's mechanical response during the transition from elastic to plastic deformation. A crystal plasticity-based finite element model is used to fully resolve the deformation of the β phase for the first time. This high-fidelity model captures mechanisms difficult to access via experiments or lower fidelity models. Lastly, the results are used to assess lower fidelity modeling assumptions and identify phenomena that have ramifications for failure of the material.
Crystal plasticity modeling of β phase deformation in Ti-6Al-4V
Moore, John A.; Barton, Nathan R.; Florando, Jeff; ...
2017-08-24
Ti-6Al-4V is an alloy of titanium that dominates titanium usage in applications ranging from mass-produced consumer goods to high-end aerospace parts. The material's structure on a microscale is known to affect its mechanical properties, but these effects are not fully understood. Specifically, this work addresses the effects of low volume fraction intergranular β phase on Ti-6Al-4V's mechanical response during the transition from elastic to plastic deformation. A crystal plasticity-based finite element model is used to fully resolve the deformation of the β phase for the first time. This high-fidelity model captures mechanisms difficult to access via experiments or lower fidelity models. Lastly, the results are used to assess lower fidelity modeling assumptions and identify phenomena that have ramifications for failure of the material.
Hauber, Roxanne P; Cormier, Eileen; Whyte, James
2010-01-01
Increasingly, high-fidelity patient simulation (HFPS) is becoming essential to nursing education. Much remains unknown about how classroom learning is connected to student decision-making in simulation scenarios and the degree to which transference takes place between the classroom setting and actual practice. The present study was part of a larger pilot study aimed at determining the relationship between nursing students' clinical ability to prioritize their actions and the associated cognitions and physiologic outcomes of care using HFPS. In an effort to better explain the knowledge base being used by nursing students in HFPS, the investigators explored the relationship between common measures of knowledge and performance-related variables. Findings are discussed within the context of the expert performance approach and concepts from cognitive psychology, such as cognitive architecture, cognitive load, memory, and transference.
Multifamily Group Psychoeducation in New York State: Implementation and Fidelity Outcomes.
Kealey, Edith M; Leckman-Westin, Emily; Jewell, Thomas C; Finnerty, Molly T
2015-11-01
The study examined implementation outcomes from a large state initiative to support dissemination of multifamily group (MFG) psychoeducation in outpatient mental health settings. Thirty-one sites participated in the project. Baseline training in the MFG model was followed by monthly expert consultation delivered in either a group (16 sites) or individual format (15 sites). Research staff assessed fidelity to the MFG model by telephone at baseline and 12, 18, and 24 months and documented time to completion of three key milestones: holding a family joining session, a family educational workshop, and an MFG meeting. Intent-to-train analyses found that 12 sites (39%) achieved high fidelity to the MFG model, and 20 (65%) achieved moderate or high fidelity. Mean scores on the Family Psychoeducation Fidelity Assessment Scale increased over time. Twenty-five sites (81%) conducted at least one joining session, and 20 (65%) conducted at least one MFG. Mean±SD time from baseline to the first group was 11.75±4.78 months. Programs that held the first joining session within four to 12 months after training were significantly more likely than programs that did not to conduct a group (p<.05). No significant differences were found by consultation format. Implementation of moderate- to high-fidelity MFG programs in routine outpatient mental health settings is feasible. Sites that moved very quickly or very slowly in early implementation stages were less likely to be successful in conducting an MFG. More research on the efficiency and effectiveness of consultation formats is needed to guide future implementation efforts.
EBT Fidelity Trajectories Across Training Cohorts Using the Interagency Collaborative Team Strategy
Hecht, Debra; Aarons, Greg; Fettes, Danielle; Hurlburt, Michael; Ledesma, Karla
2015-01-01
The Interagency Collaborative Team (ICT) strategy uses front-line providers as adaptation, training, and quality control agents for multi-agency EBT implementation. This study tests whether an ICT transmits fidelity to subsequent provider cohorts. SafeCare was implemented by home visitors from multiple community-based agencies contracting with child welfare. Client-reported fidelity trajectories for 5,769 visits, 957 clients, and 45 providers were compared using three-level growth models. Provider cohorts trained and live-coached by the ICT attained benchmark fidelity after 12 weeks, and this was sustained. Hispanic clients reported high cultural competency, supporting a cultural adaptation crafted by the ICT. PMID:25586878
EBT Fidelity Trajectories Across Training Cohorts Using the Interagency Collaborative Team Strategy.
Chaffin, Mark; Hecht, Debra; Aarons, Greg; Fettes, Danielle; Hurlburt, Michael; Ledesma, Karla
2016-03-01
The Interagency Collaborative Team (ICT) strategy uses front-line providers as adaptation, training, and quality control agents for multi-agency EBT implementation. This study tests whether an ICT transmits fidelity to subsequent provider cohorts. SafeCare was implemented by home visitors from multiple community-based agencies contracting with child welfare. Client-reported fidelity trajectories for 5,769 visits, 957 clients, and 45 providers were compared using three-level growth models. Provider cohorts trained and live-coached by the ICT attained benchmark fidelity after 12 weeks, and this was sustained. Hispanic clients reported high cultural competency, supporting a cultural adaptation crafted by the ICT.
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
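The selection logic the abstract describes — trust the surrogate wherever its error bound cannot flip the event indicator, and spend high-fidelity evaluations only near the limit state — can be sketched as follows. The callables and the constant error bound are hypothetical stand-ins for the paper's adjoint-based error estimates:

```python
def estimate_event_probability(samples, surrogate, high_fidelity,
                               error_estimate, threshold):
    """Estimate P(Q(x) > threshold), trusting the surrogate wherever its
    error bound cannot flip the event indicator.

    `surrogate`, `high_fidelity`, and `error_estimate` are user-supplied
    callables (illustrative stand-ins, not the paper's interface).
    Returns the probability estimate and the number of high-fidelity
    evaluations actually spent.
    """
    hits = 0
    n_hf = 0
    for x in samples:
        q = surrogate(x)
        # Reliable sample: the error bound keeps q on one side of the
        # limit state, so the surrogate's indicator is already exact.
        if abs(q - threshold) > error_estimate(x):
            hits += q > threshold
        else:
            # Unreliable sample: resolve the indicator with the
            # high-fidelity model.
            hits += high_fidelity(x) > threshold
            n_hf += 1
    return hits / len(samples), n_hf
```

Only samples whose surrogate value lies within the error bound of the threshold cost a high-fidelity evaluation, which is the source of the savings claimed above.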
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, Troy; Wildey, Timothy
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
High fidelity simulations of infrared imagery with animated characters
NASA Astrophysics Data System (ADS)
Näsström, F.; Persson, A.; Bergström, D.; Berggren, J.; Hedström, J.; Allvar, J.; Karlsson, M.
2012-06-01
High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of characters. Simplified rendering methods based on computer graphics methods can be used to overcome these limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of animated people in terrain. Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models, these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions match the IR-texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that, together with the terrain model, are used to produce high fidelity IR imagery of people or crowds. For high level animation control and crowd simulations, HLAS (High Level Animation System) has been developed. There are tools available to create and visualize skeleton based animations, but tools that allow control of the animated characters on a higher level, e.g. for crowd simulation, are usually expensive and closed source. We need the flexibility of HLAS to add animation into an HLA enabled sensor system simulation framework.
Modeling and analysis of LWIR signature variability associated with 3D and BRDF effects
NASA Astrophysics Data System (ADS)
Adler-Golden, Steven; Less, David; Jin, Xuemin; Rynes, Peter
2016-05-01
Algorithms for retrieval of surface reflectance, emissivity or temperature from a spectral image almost always assume uniform illumination across the scene and horizontal surfaces with Lambertian reflectance. When these algorithms are used to process real 3-D scenes, the retrieved "apparent" values contain the strong, spatially dependent variations in illumination as well as surface bidirectional reflectance distribution function (BRDF) effects. This is especially problematic with horizontal or near-horizontal viewing, where many observed surfaces are vertical, and where horizontal surfaces can show strong specularity. The goals of this study are to characterize long-wavelength infrared (LWIR) signature variability in an HSI 3-D scene and develop practical methods for estimating the true surface values. We take advantage of synthetic near-horizontal imagery generated with the high-fidelity MultiService Electro-optic Signature (MuSES) model, and compare retrievals of temperature and directional-hemispherical reflectance using standard sky downwelling illumination and MuSES-based non-uniform environmental illumination.
Intermediate Fidelity Closed Brayton Cycle Power Conversion Model
NASA Technical Reports Server (NTRS)
Lavelle, Thomas M.; Khandelwal, Suresh; Owen, Albert K.
2006-01-01
This paper describes the implementation of an intermediate fidelity model of a closed Brayton Cycle power conversion system (Closed Cycle System Simulation). The simulation is developed within the Numerical Propulsion Simulation System architecture using component elements from earlier models. Of particular interest is the ability of this new simulation system to automatically initiate a more detailed analysis of the compressor and turbine components and to incorporate the results into the overall system simulation.
Initial Integration of Noise Prediction Tools for Acoustic Scattering Effects
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Burley, Casey L.; Tinetti, Ana; Rawls, John W.
2008-01-01
This effort provides an initial glimpse at NASA capabilities available in predicting the scattering of fan noise from a non-conventional aircraft configuration. The Aircraft NOise Prediction Program, Fast Scattering Code, and the Rotorcraft Noise Model were coupled to provide increased-fidelity models of scattering effects on engine fan noise sources. The integration of these codes led to the identification of several key issues entailed in applying such multi-fidelity approaches. In particular, for prediction at noise certification points, the inclusion of distributed sources leads to complications with the source semi-sphere approach. Computational resource requirements limit the use of the higher fidelity scattering code to predict radiated sound pressure levels for full-scale configurations at relevant frequencies. Finally, the ability to more accurately represent complex shielding surfaces in current lower fidelity models is necessary for general application to scattering predictions. This initial step in determining the potential benefits/costs of these new methods over the existing capabilities illustrates a number of the issues that must be addressed in the development of next generation aircraft system noise prediction tools.
Systematic evaluation of implementation fidelity of complex interventions in health and social care
2010-01-01
Background Evaluation of an implementation process and its fidelity can give insight into the 'black box' of interventions. However, a lack of standardized methods for studying fidelity and the implementation process has been reported, which may be one reason why few prior studies in the field of health service research have systematically evaluated interventions' implementation processes. The aim of this project is to systematically evaluate implementation fidelity and possible factors influencing fidelity of complex interventions in health and social care. Methods A modified version of The Conceptual Framework for Implementation Fidelity will be used as a conceptual model for the evaluation. The modification implies two additional moderating factors: context and recruitment. A systematic evaluation process was developed. A multiple case study method is used to investigate the implementation of three complex health service interventions. Each case will be investigated in depth and longitudinally, using both quantitative and qualitative methods. Discussion This study is the first attempt to empirically test The Conceptual Framework for Implementation Fidelity. The study can highlight mechanisms and factors of importance when implementing complex interventions. In particular, the role of the moderating factors in implementation fidelity can be clarified. Trial Registration Supported Employment, SE, among people with severe mental illness -- a randomized controlled trial: NCT00960024. PMID:20815872
NASA Astrophysics Data System (ADS)
Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.
2014-01-01
We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
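A minimal fixed-step sketch of the two-stage Gauss-Legendre implicit Runge-Kutta scheme that the VGL-IRK method builds on. The actual method adds adaptive step-size error control and collective propagation of nearby sigma points; the plain fixed-point stage solver here is an illustrative simplification adequate for non-stiff problems:

```python
import math

# Two-stage Gauss-Legendre IRK (classical order 4): Butcher tableau.
S3 = math.sqrt(3.0) / 6.0
A = [[0.25, 0.25 - S3],
     [0.25 + S3, 0.25]]
B = [0.5, 0.5]

def gl_irk_step(f, y, h, iters=10):
    """Advance y by one step of size h for y' = f(y).

    The implicit stage equations are solved by plain fixed-point
    iteration, which converges quickly for small h on non-stiff ODEs.
    """
    n = len(y)
    k = [f(y), f(y)]  # initial guesses for the two stage derivatives
    for _ in range(iters):
        k = [f([y[i] + h * (A[s][0] * k[0][i] + A[s][1] * k[1][i])
                for i in range(n)])
             for s in range(2)]
    return [y[i] + h * (B[0] * k[0][i] + B[1] * k[1][i]) for i in range(n)]
```

On a two-body or oscillatory problem the symplectic character of the Gauss-Legendre family keeps the orbit energy bounded over long integrations, which is one reason such schemes suit orbit propagation.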
Continuous variable quantum optical simulation for time evolution of quantum harmonic oscillators
Deng, Xiaowei; Hao, Shuhong; Guo, Hong; Xie, Changde; Su, Xiaolong
2016-01-01
Quantum simulation enables one to mimic the evolution of other quantum systems using a controllable quantum system. The quantum harmonic oscillator (QHO) is one of the most important model systems in quantum physics. Directly observing the transient dynamics of a QHO with a high oscillation frequency is difficult. We experimentally simulate the transient behaviors of a QHO in an open system during time evolution with an optical mode and a logical operation system of continuous variable quantum computation. The time evolution of an atomic ensemble in the collective spontaneous emission is analytically simulated by mapping the atomic ensemble onto a QHO. The measured fidelity, which is used for quantifying the quality of the simulation, is higher than its classical limit. The presented simulation scheme provides a new tool for studying the dynamic behaviors of QHOs. PMID:26961962
Thermal Protection System Mass Estimating Relationships for Blunt-Body, Earth Entry Spacecraft
NASA Technical Reports Server (NTRS)
Sepka, Steven A.; Samareh, Jamshid A.
2015-01-01
System analysis and design of any entry system must balance the level of fidelity for each discipline against the project timeline. One way to inject high-fidelity analysis earlier in the design effort is to develop surrogate models for the high-fidelity disciplines. Surrogate models for the Thermal Protection System (TPS) are formulated as Mass Estimating Relationships (MERs). The TPS MERs presented here predict the amount of TPS necessary for safe Earth entry for blunt-body spacecraft using simple correlations that closely match estimates from NASA's high-fidelity ablation modeling tool, the Fully Implicit Ablation and Thermal Analysis Program (FIAT). These MERs provide a first-order estimate for rapid feasibility studies. There are 840 different trajectories considered in this study, and each TPS MER has a peak heating limit. MERs for the vehicle forebody include the ablators Phenolic Impregnated Carbon Ablator (PICA) and Carbon Phenolic atop Advanced Carbon-Carbon. For the aftbody, the materials are Silicone Impregnated Reusable Ceramic Ablator (SIRCA), Acusil II, SLA-561V, and LI-900. The MERs are accurate to within 14% (at one standard deviation) of the FIAT prediction, and the most any MER underpredicts FIAT TPS thickness is 18.7%. This work focuses on the development of these MERs, the resulting equations, model limitations, and model accuracy.
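The MER idea — a closed-form correlation with a peak-heating validity limit standing in for a full FIAT run — can be illustrated schematically. The power-law form and the coefficients below are placeholders for illustration, not the published FIAT-derived fits:

```python
def tps_thickness_mer(heat_load, peak_heating, a, b, peak_heating_limit):
    """Illustrative Mass Estimating Relationship: a power-law fit of TPS
    thickness to total heat load, valid only while peak heating stays
    below the material's limit. The functional form and the
    coefficients a, b are hypothetical placeholders.
    """
    if peak_heating > peak_heating_limit:
        # Outside the fitted envelope the correlation is not trusted.
        raise ValueError("trajectory exceeds this MER's peak-heating limit")
    return a * heat_load ** b
```

A feasibility study would evaluate such a correlation per material and per trajectory in milliseconds, reserving FIAT itself for the final design points.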
Open Vehicle Sketch Pad Aircraft Modeling Strategies
NASA Technical Reports Server (NTRS)
Hahn, Andrew S.
2013-01-01
Geometric modeling of aircraft during the Conceptual design phase is very different from that needed for the Preliminary or Detailed design phases. The Conceptual design phase is characterized by the rapid, multi-disciplinary analysis of many design variables by a small engineering team. The designer must walk a line between fidelity and productivity, picking tools and methods with the appropriate balance of characteristics to achieve the goals of the study, while staying within the available resources. Identifying geometric details that are important, and those that are not, is critical to making modeling and methodology choices. This is true for both the low-order analysis methods traditionally used in Conceptual design as well as the highest-order analyses available. This paper will highlight some of Conceptual design's characteristics that drive the designer's choices as well as modeling examples for several aircraft configurations using the open source version of the Vehicle Sketch Pad (Open VSP) aircraft Conceptual design geometry modeler.
NASA Astrophysics Data System (ADS)
Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Subramanian, Aneesh; Weisheimer, Antje; Christensen, Hannah; Juricke, Stephan; Palmer, Tim
2016-04-01
The PRACE Climate SPHINX project investigates the sensitivity of climate simulations to model resolution and stochastic parameterization. The EC-Earth Earth-System Model is used to explore the impact of stochastic physics in 30-year climate integrations as a function of model resolution (from 80 km up to 16 km for the atmosphere). The experiments include more than 70 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), using RCP8.5 CMIP5 forcing. A total of 20 million core hours will have been used by the end of the project (March 2016), and about 150 TB of post-processed data will be available to the climate community. Preliminary results show a clear improvement in the representation of climate variability over the Euro-Atlantic sector following the resolution increase. More specifically, the well-known negative atmospheric blocking bias over Europe is definitely resolved. High-resolution runs also show improved fidelity in the representation of tropical variability - such as the MJO and its propagation - over the low-resolution simulations. It is shown that including stochastic parameterization in the low-resolution runs helps to improve some aspects of the MJO propagation further. These findings show the importance of representing the impact of small-scale processes on large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).
PARALLEL PERTURBATION MODEL FOR CYCLE TO CYCLE VARIABILITY PPM4CCV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin Mohammed; Som, Sibendu
This code consists of a Fortran 90 implementation of the parallel perturbation model to compute cyclic variability in spark ignition (SI) engines. Cycle-to-cycle variability (CCV) is known to be detrimental to SI engine operation, resulting in partial burn and knock and an overall reduction in the reliability of the engine. Numerical prediction of CCV in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flow field, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In the new technique, the strategy is to perform multiple parallel simulations, each of which encompasses 2-3 cycles, by effectively perturbing the simulation parameters such as the initial and boundary conditions. The PPM4CCV code is a pre-processing code and can be coupled with any engine CFD code. PPM4CCV was coupled with the Converge CFD code, and a tenfold speedup over conventional multi-cycle LES was demonstrated in predicting the CCV for a motored engine. Recently, the model is also being applied to fired engines, including port fuel injected (PFI) and direct injection spark ignition engines, and the preliminary results are very encouraging.
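The perturbation strategy amounts to a pre-processing step that fans one baseline initial condition out into many parallel short runs whose cycle statistics are then pooled. A minimal sketch, in which the parameter names and the uniform multiplicative perturbation are illustrative choices rather than PPM4CCV defaults:

```python
import random

def perturbed_ensemble(base_ic, n_runs, rel_amp=0.01, seed=0):
    """Fan one baseline initial-condition set out into n_runs perturbed
    copies, one per parallel short (2-3 cycle) simulation.

    Each scalar parameter is scaled by a factor drawn uniformly from
    [1 - rel_amp, 1 + rel_amp]; the RNG is seeded so the ensemble is
    reproducible across runs.
    """
    rng = random.Random(seed)
    return [{key: val * (1.0 + rng.uniform(-rel_amp, rel_amp))
             for key, val in base_ic.items()}
            for _ in range(n_runs)]
```

Each dictionary would then be written out as the initial/boundary condition input for one independent CFD job, replacing a single serial chain of hundreds of cycles.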
2012-01-01
Background Prior studies measuring fidelity of complex interventions have mainly evaluated adherence, and not taken factors affecting adherence into consideration. A need for studies that clarify the concept of fidelity and the function of factors moderating fidelity has been emphasized. The aim of the study was to systematically evaluate implementation fidelity and possible factors influencing fidelity of a complex care continuum intervention for frail elderly people. Methods The intervention was a systematization of the collaboration between a nurse with geriatric expertise situated at the emergency department, the hospital ward staff, and a multi-professional team with a case manager in the municipal care services for older people. Implementation was evaluated between September 2008 and May 2010 with observations of work practices, stakeholder interviews, and document analysis according to a modified version of The Conceptual Framework for Implementation Fidelity. Results A total of 16 of the 18 intervention components were to a great extent delivered as planned, while some new components were added to the model. No changes in the frequency or duration of the 18 components were observed, but the dose of the added components varied over time. Changes in fidelity were caused in a complex, interrelated fashion by all the moderating factors in the framework, i.e., context, staff and participant responsiveness, facilitation, recruitment, and complexity. Discussion The Conceptual Framework for Implementation Fidelity was empirically useful and included comprehensive measures of factors affecting fidelity. Future studies should focus on developing the framework with regard to how to investigate relationships between the moderating factors and fidelity over time. Trial registration ClinicalTrials.gov, NCT01260493. PMID:22436121
Berkel, Cady; Mauricio, Anne M; Sandler, Irwin N; Wolchik, Sharlene A; Gallo, Carlos G; Brown, C Hendricks
2017-12-14
This study tests a theoretical cascade model in which multiple dimensions of facilitator delivery predict indicators of participant responsiveness, which in turn lead to improvements in targeted program outcomes. An effectiveness trial of the 10-session New Beginnings Program for divorcing families was implemented in partnership with four county-level family courts. This study included 366 families assigned to the intervention condition who attended at least one session. Independent observers provided ratings of program delivery (i.e., fidelity to the curriculum and process quality). Facilitators reported on parent attendance and parents' competence in home practice of program skills. At pretest and posttest, children reported on parenting and parents reported child mental health. We hypothesized effects of quality on attendance, fidelity and attendance on home practice, and home practice on improvements in parenting and child mental health. Structural Equation Modeling with mediation and moderation analyses were used to test these associations. Results indicated quality was significantly associated with attendance, and attendance moderated the effect of fidelity on home practice. Home practice was a significant mediator of the links between fidelity and improvements in parent-child relationship quality and child externalizing and internalizing problems. Findings provide support for fidelity to the curriculum, process quality, attendance, and home practice as valid predictors of program outcomes for mothers and fathers. Future directions for assessing implementation in community settings are discussed.
Tidal Energy Resource Assessment for McMurdo Station, Antarctica
2016-12-01
highest power coefficient possible, only to provide a high-fidelity data set for a simple geometry turbine model at reasonably high blade chord Reynolds numbers. Tip speed ratio, λ, is defined as λ = ωR/U, where ω is the angular velocity of the blade, R is the blade radius, and U is the free-stream current velocity.
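The tip speed ratio follows the standard turbine definition λ = ωR/U, blade tip speed over free-stream velocity; as a one-line helper:

```python
def tip_speed_ratio(omega, radius, v_inf):
    """Tip speed ratio: blade tip speed (omega * R) divided by the
    free-stream current velocity U."""
    return omega * radius / v_inf
```

For example, a blade of radius 0.5 m spinning at 10 rad/s in a 2 m/s current operates at λ = 2.5.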
Fidelity assessment of a UH-60A simulation on the NASA Ames vertical motion simulator
NASA Technical Reports Server (NTRS)
Atencio, Adolph, Jr.
1993-01-01
Helicopter handling qualities research requires that a ground-based simulation be a high-fidelity representation of the actual helicopter, especially over the frequency range of the investigation. This experiment was performed to assess the current capability to simulate the UH-60A Black Hawk helicopter on the Vertical Motion Simulator (VMS) at NASA Ames, to develop a methodology for assessing the fidelity of a simulation, and to find the causes for lack of fidelity. The approach used was to compare the simulation to the flight vehicle for a series of tasks performed in flight and in the simulator. The results show that subjective handling qualities ratings from flight to simulator overlap, and the mathematical model matches the UH-60A helicopter very well over the range of frequencies critical to handling qualities evaluation. Pilot comments, however, indicate a need for improvement in the perceptual fidelity of the simulation in the areas of motion and visual cuing. The methodology used to make the fidelity assessment proved useful in showing differences in pilot work load and strategy, but additional work is needed to refine objective methods for determining causes of lack of fidelity.
Solar Sail Spaceflight Simulation
NASA Technical Reports Server (NTRS)
Lisano, Michael; Evans, James; Ellis, Jordan; Schimmels, John; Roberts, Timothy; Rios-Reyes, Leonel; Scheeres, Daniel; Bladt, Jeff; Lawrence, Dale; Piggott, Scott
2007-01-01
The Solar Sail Spaceflight Simulation Software (S5) toolkit provides solar-sail designers with an integrated environment for designing optimal solar-sail trajectories, and then studying the attitude dynamics/control, navigation, and trajectory control/correction of sails during realistic mission simulations. Unique features include a high-fidelity solar radiation pressure model suitable for arbitrarily-shaped solar sails, a solar-sail trajectory optimizer, capability to develop solar-sail navigation filter simulations, solar-sail attitude control models, and solar-sail high-fidelity force models.
NASA Technical Reports Server (NTRS)
Bonfils, Celine J. W.; Santer, Benjamin D.; Phillips, Thomas J.; Marvel, Kate; Leung, L. Ruby; Doutriaux, Charles; Capotondi, Antonietta
2015-01-01
El Niño-Southern Oscillation (ENSO) is an important driver of regional hydroclimate variability through far-reaching teleconnections. This study uses simulations performed with coupled general circulation models (CGCMs) to investigate how regional precipitation in the twenty-first century may be affected by changes in both ENSO-driven precipitation variability and slowly evolving mean rainfall. First, a dominant, time-invariant pattern of canonical ENSO variability (cENSO) is identified in observed SST data. Next, the fidelity with which 33 state-of-the-art CGCMs represent the spatial structure and temporal variability of this pattern (as well as its associated precipitation responses) is evaluated in simulations of twentieth-century climate change. Possible changes in both the temporal variability of this pattern and its associated precipitation teleconnections are investigated in twenty-first-century climate projections. Models with better representation of the observed structure of the cENSO pattern produce winter rainfall teleconnection patterns that are in better accord with twentieth-century observations and more stationary during the twenty-first century. Finally, the model-predicted twenty-first-century rainfall response to cENSO is decomposed into the sum of three terms: 1) the twenty-first-century change in the mean state of precipitation, 2) the historical precipitation response to the cENSO pattern, and 3) a future enhancement in the rainfall response to cENSO, which amplifies rainfall extremes. By examining the three terms jointly, this conceptual framework allows the identification of regions likely to experience future rainfall anomalies that are without precedent in the current climate.
Controlling the transmitted information of a multi-photon interacting with a single-Cooper pair box
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadry, Heba, E-mail: hkadry1@yahoo.com; Abdel-Aty, Abdel-Haleem; Zakaria, Nordin
2014-10-24
We study a model of a multi-photon interaction of a single Cooper pair box with a cavity field. The exchange of information using this system is studied, and we quantify the fidelity of the transmitted information. The effect of the system parameters (detuning parameter, field photons, state density, and mean photon number) on the fidelity of the transmitted information is investigated. We find that the fidelity of the transmitted information can be controlled using the system parameters.
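For pure states, the information-transfer fidelity discussed above reduces to the squared overlap |⟨ψ|φ⟩|² between the sent and received states. A minimal sketch of that quantity (illustrative only; the function name is ours, and this is not the authors' code):

```python
import numpy as np

def state_fidelity(psi, phi):
    """Fidelity |<psi|phi>|^2 between two pure states given as vectors."""
    psi = np.asarray(psi, dtype=complex)
    phi = np.asarray(phi, dtype=complex)
    psi = psi / np.linalg.norm(psi)  # normalize both states
    phi = phi / np.linalg.norm(phi)
    return abs(np.vdot(psi, phi)) ** 2
```

Identical states give fidelity 1 and orthogonal states give 0; the mixed-state generalization (Uhlmann fidelity) requires density matrices and is not shown here.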
NASA Technical Reports Server (NTRS)
Radovcich, N. A.; Gentile, D. P.
1989-01-01
A NASTRAN bulk dataset preprocessor was developed to facilitate the integration of filamentary composite laminate properties into composite structural resizing for stiffness requirements. The NASCOMP system generates delta stiffness and delta mass matrices for input to the flutter derivative program. The flutter baseline analysis, derivative calculations, and stiffness and mass matrix updates are controlled by engineer-defined processes under an operating system called CBUS. A multi-layered design variable grid system permits high-fidelity resizing without excessive computer cost. The NASCOMP system uses ply layup drawings for basic input. The aeroelastic resizing-for-stiffness capability was used during an actual design exercise.
Adaptive hybrid optimal quantum control for imprecisely characterized systems.
Egger, D J; Wilhelm, F K
2014-06-20
Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.
Implementation of a Smeared Crack Band Model in a Micromechanics Framework
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Bednarcyk, Brett A.; Waas, Anthony M.; Arnold, Steven M.
2012-01-01
The smeared crack band theory is implemented within the generalized method of cells and high-fidelity generalized method of cells micromechanics models to capture progressive failure within the constituents of a composite material while retaining objectivity with respect to the size of the discretization elements used in the model. A repeating unit cell containing 13 randomly arranged fibers is modeled and subjected to a combination of transverse tension/compression and transverse shear loading. The implementation is verified against experimental data (where available) and against an equivalent finite element model utilizing the same implementation of the crack band theory. To evaluate the performance of the crack band theory within a repeating unit cell that is more amenable to a multiscale implementation, a single fiber is modeled with the generalized method of cells and high-fidelity generalized method of cells using a relatively coarse subcell mesh, which is subjected to the same loading scenarios as the multiple-fiber repeating unit cell. The generalized method of cells and high-fidelity generalized method of cells models are validated against a very refined finite element model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.
2015-08-11
We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.
South Asian summer monsoon breaks: Process-based diagnostics in HIRHAM5
NASA Astrophysics Data System (ADS)
Hanf, Franziska S.; Annamalai, H.; Rinke, Annette; Dethloff, Klaus
2017-05-01
This study assesses the ability of a high-resolution downscaling simulation with the regional climate model (RCM) HIRHAM5 in capturing the monsoon basic state and boreal summer intraseasonal variability (BSISV) over South Asia, with a focus on moist and radiative processes during 1979-2012. A process-based vertically integrated moist static energy (MSE) budget is performed to understand the model's fidelity in representing the leading processes that govern monsoon breaks over continental India. In the climatology (June-September), HIRHAM5 simulates a dry bias over central India in association with descent throughout the free troposphere. Sources of the dry bias are interpreted as (i) a near-equatorial Rossby wave response forced by excess rainfall over the southern Bay of Bengal, which promotes anomalous descent to its northwest, and (ii) excessive rainfall over the near-equatorial Arabian Sea and Bay of Bengal, which anchors a "local Hadley-type" circulation with descent anomalies over continental India. Compared with observations, HIRHAM5 captures the leading processes that account for breaks, although with generally reduced amplitudes over central India. In the model too, anomalous dry advection and net radiative cooling are responsible for the initiation and maintenance of breaks, respectively. However, the weaker contributions of all adiabatic MSE budget terms and an inconsistent relationship between negative rainfall anomalies and radiative cooling reveal shortcomings in HIRHAM5's moisture-radiation interaction. Our study directly implies that process-based budget diagnostics, beyond simply checking the northward propagation feature, are necessary to examine an RCM's fidelity in simulating the BSISV.
NASA Technical Reports Server (NTRS)
Levison, William H.
1988-01-01
This study explored application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues. The model was applied to two data bases: (1) a NASA ground based simulation of an air-to-air tracking task in which nonvisual cueing devices were explored, and (2) a ground based and inflight study performed by the Calspan Corporation to explore the effects of simulator delay on attitude tracking performance. The model predicted the major performance trends obtained in both studies. A combined analytical and experimental procedure for exploring simulator fidelity issues is outlined.
Feasibility of training athletes for high-pressure situations using virtual reality.
Stinson, Cheryl; Bowman, Doug A
2014-04-01
Virtual reality (VR) has been successfully applied to a broad range of training domains; however, to date there is little research investigating its benefits for sport psychology training. We hypothesized that using high-fidelity VR systems to display realistic 3D sport environments could trigger anxiety, allowing resilience-training systems to prepare athletes for real-world, high-pressure situations. In this work we investigated the feasibility and usefulness of using VR for sport psychology training. We developed a virtual soccer goalkeeping application for the Virginia Tech Visionarium VisCube (a CAVE-like display system), in which users defend against simulated penalty kicks using their own bodies. Using the application, we ran a controlled, within-subjects experiment with three independent variables: known anxiety triggers, field of regard, and simulation fidelity. The results demonstrate that a VR sport-oriented system can induce increased anxiety (physiological and subjective measures) compared to a baseline condition. There were a number of main effects and interaction effects for all three independent variables in terms of the subjective measures of anxiety. Both known anxiety triggers and simulation fidelity had a direct relationship to anxiety, while field of regard had an inverse relationship. Overall, the results demonstrate great potential for VR sport psychology training systems; however, further research is needed to determine if training in a VR environment can lead to long-term reduction in sport-induced anxiety.
Development of Ku-band rendezvous radar tracking and acquisition simulation programs
NASA Technical Reports Server (NTRS)
1986-01-01
The fidelity of the Space Shuttle Radar tracking simulation model was improved. The data from the Shuttle Orbiter Radar Test and Evaluation (SORTE) program experiments performed at the White Sands Missile Range (WSMR) were reviewed and analyzed, and the selected flight rendezvous radar data were evaluated. Problems with the Inertial Line-of-Sight (ILOS) angle rate tracker were evaluated using the improved-fidelity angle rate tracker simulation model.
Aerodynamic Simulation of Ice Accretion on Airfoils
NASA Technical Reports Server (NTRS)
Broeren, Andy P.; Addy, Harold E., Jr.; Bragg, Michael B.; Busch, Greg T.; Montreuil, Emmanuel
2011-01-01
This report describes recent improvements in aerodynamic scaling and simulation of ice accretion on airfoils. Ice accretions were classified into four types on the basis of aerodynamic effects: roughness, horn, streamwise, and spanwise ridge. The NASA Icing Research Tunnel (IRT) was used to generate ice accretions within these four types using both subscale and full-scale models. Large-scale, pressurized wind-tunnel testing was performed using a 72-in.- (1.83-m-) chord, NACA 23012 airfoil model with high-fidelity, three-dimensional castings of the IRT ice accretions. Performance data were recorded over Reynolds numbers from 4.5×10^6 to 15.9×10^6 and Mach numbers from 0.10 to 0.28. Lower-fidelity ice-accretion simulation methods were developed and tested on an 18-in.- (0.46-m-) chord NACA 23012 airfoil model in a small-scale wind tunnel at a lower Reynolds number. The aerodynamic accuracy of the lower-fidelity, subscale ice simulations was validated against the full-scale results for a factor of 4 reduction in model scale and a factor of 8 reduction in Reynolds number. This research has defined the level of geometric fidelity required for artificial ice shapes to yield aerodynamic performance results to within a known level of uncertainty and has culminated in a proposed methodology for subscale iced-airfoil aerodynamic simulation.
Impact and Crashworthiness Characteristics of Venera Type Landers for Future Venus Missions
NASA Technical Reports Server (NTRS)
Schroeder, Kevin; Bayandor, Javid; Samareh, Jamshid
2016-01-01
This paper presents an in-depth investigation of the structural design of the Venera 9-14 landers. A complete reverse engineering of the Venera lander was required: the lander was broken down into its fundamental components and analyzed, providing insights into the hidden features of the design. A trade study was performed to find the sensitivity of the lander's overall mass to the variation of several key parameters. For the lander's legs, the location, length, configuration, and number were all parameterized. The size of the impact ring, the radius of the drag plate, and other design features were also parameterized, and all of these features were correlated to the change in mass of the lander. A multi-fidelity design tool was developed for further investigation of the parameterized lander. As a design was passed down from one level to the next, the fidelity, complexity, accuracy, and run time of the model increased. The low-fidelity model was a highly nonlinear analytical model developed to rapidly predict the mass of each design. The medium- and high-fidelity models utilized an explicit finite element framework to investigate the performance of various landers upon impact with the surface under a range of landing conditions. This methodology allowed a large variety of designs to be investigated by the analytical model, which identified designs with the optimum structural mass to payload ratio. As promising designs emerged, investigations with the higher-fidelity models focused on establishing their reliability and crashworthiness. The developed design tool efficiently modelled and tested the best concepts for any scenario based on critical Venusian mission requirements and constraints. Through this program, the strengths and weaknesses inherent in the Venera-type landers were thoroughly investigated. Key features identified for the design of robust landers will be used as foundations for the development of the next generation of landers for future exploration missions to Venus.
Measuring Implementation Fidelity in a Community-Based Parenting Intervention
Breitenstein, Susan M.; Fogg, Louis; Garvey, Christine; Hill, Carri; Resnick, Barbara; Gross, Deborah
2012-01-01
Background Establishing the feasibility and validity of implementation fidelity monitoring strategies is an important methodological step in implementing evidence-based interventions on a large scale. Objectives The objective of the study was to examine the reliability and validity of the Fidelity Checklist, a measure designed to assess group leader adherence and competence delivering a parent training intervention (the Chicago Parent Program) in child care centers serving low-income families. Method The sample included 9 parent groups (12 group sessions each), 12 group leaders, and 103 parents. Independent raters reviewed 106 audiotaped parent group sessions and coded group leaders’ fidelity on the Adherence and Competence Scales of the Fidelity Checklist. Group leaders completed self-report adherence checklists and a measure of parent engagement in the intervention. Parents completed measures of consumer satisfaction and child behavior. Results High interrater agreement (Adherence Scale = 94%, Competence Scale = 85%) and adequate intraclass correlation coefficients (Adherence Scale = .69, Competence Scale = .91) were achieved for the Fidelity Checklist. Group leader adherence changed over time, but competence remained stable. Agreement between group leader self-report and independent ratings on the Adherence Scale was 85%; disagreements were more frequently due to positive bias in group leader self-report. Positive correlations were found between group leader adherence and parent attendance and engagement in the intervention and between group leader competence and parent satisfaction. Although child behavior problems improved, improvements were not related to fidelity. Discussion The results suggest that the Fidelity Checklist is a feasible, reliable, and valid measure of group leader implementation fidelity in a group-based parenting intervention. 
Future research will be focused on testing the Fidelity Checklist with diverse and larger samples and generalizing to other group-based interventions using a similar intervention model. PMID:20404777
Butel, Jean; Braun, Kathryn L; Novotny, Rachel; Acosta, Mark; Castro, Rose; Fleming, Travis; Powers, Julianne; Nigg, Claudio R
2015-12-01
Addressing complex chronic disease prevention, like childhood obesity, requires a multi-level, multi-component, culturally relevant approach with broad reach. Models are lacking to guide fidelity monitoring across multiple levels, components, and sites engaged in such interventions. The aim of this study is to describe the fidelity-monitoring approach of the Children's Healthy Living (CHL) Program, a multi-level, multi-component intervention in five Pacific jurisdictions. A fidelity-monitoring rubric was developed. About halfway through the intervention, community partners were randomly selected and interviewed independently by local CHL staff and by Coordinating Center representatives to assess treatment fidelity. Ratings were compared and discussed by local and Coordinating Center staff. There was good agreement between the teams (Kappa = 0.50, p < 0.001), and intervention improvement opportunities were identified through data review and group discussion. Fidelity for the multi-level, multi-component, multi-site CHL intervention was successfully assessed, identifying adaptations as well as ways to improve intervention delivery prior to the end of the intervention.
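The agreement statistic reported above (Kappa = 0.50) is Cohen's kappa, which corrects raw percent agreement between two raters for the agreement expected by chance. A minimal sketch of the standard computation (illustrative; the function name is ours, and this is not the CHL study's analysis code):

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical codes on the same items."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)  # raw proportion of agreement
    # chance agreement: product of each rater's marginal category rates
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_chance) / (1.0 - p_chance)
```

The statistic is undefined when p_chance equals 1 (both raters use a single category); values near 0.5, as reported here, are conventionally read as "moderate" agreement.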
Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame
NASA Astrophysics Data System (ADS)
Lee, Myoungkyu; Oliver, Todd; Moser, Robert
2017-11-01
Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.
NASA Astrophysics Data System (ADS)
Kattke, K. J.; Braun, R. J.
2011-08-01
A novel, highly integrated tubular SOFC system intended for small-scale power is characterized through a series of sensitivity analyses and parametric studies using a previously developed high-fidelity simulation tool. The high-fidelity tubular SOFC system modeling tool is utilized to simulate system-wide performance and capture the thermofluidic coupling between system components. Stack performance prediction is based on 66 anode-supported tubular cells individually evaluated with a 1-D electrochemical cell model coupled to a 3-D computational fluid dynamics model of the cell surroundings. Radiation is the dominant stack-cooling mechanism, accounting for 66-92% of total heat loss at the outer surface of all cells at baseline conditions. An average temperature difference of nearly 125 °C provides a large driving force for radiation heat transfer from the stack to the cylindrical enclosure surrounding the tube bundle. Consequently, cell power and voltage disparities within the stack are largely a function of the radiation view factor from an individual tube to the surrounding stack can wall. The cells, which are connected in electrical series, vary in power from 7.6 to 10.8 W (with a standard deviation, σ = 1.2 W), and cell voltage varies from 0.52 to 0.73 V (with σ = 81 mV) at the simulation baseline conditions. It is observed that high cell voltage and power outputs directly correspond to tubular cells with the smallest radiation view factor to the enclosure wall, and vice versa for tubes exhibiting low performance. Results also reveal effective control variables and operating strategies, along with an improved understanding of the effect that design modifications have on system performance. By decreasing the air flowrate into the system by 10%, the stack can wall temperature increases by about 6%, which increases the minimum cell voltage to 0.62 V and reduces deviations in cell power and voltage by 31%.
A low baseline fuel utilization is increased by decreasing the fuel flowrate and by increasing the stack current demand. Simulation results reveal fuel flow to be a poor control variable because excessive tail-gas combustor temperatures limit fuel flow to below 110% of the baseline flowrate. Additionally, system efficiency becomes inversely proportional to fuel utilization over the practical fuel flow range. Stack current is found to be an effective control variable in this type of system because system efficiency becomes directly proportional to fuel utilization. Further, the integrated system acts to dampen temperature spikes when fuel utilization is altered by varying current demand. Radiation remains the dominant heat-transfer mechanism within the stack even if the stack surfaces are polished, lowering emissivities to 0.2. Furthermore, the sensitivity studies point to an optimal system insulation thickness that balances the overall system volume and total conductive heat loss.
Using Empirical Orthogonal Teleconnections to Analyze Interannual Precipitation Variability in China
NASA Astrophysics Data System (ADS)
Stephan, C.; Klingaman, N. P.; Vidale, P. L.; Turner, A. G.; Demory, M. E.; Guo, L.
2017-12-01
Interannual rainfall variability in China affects agriculture, infrastructure, and water resource management. A consistent and objective method, Empirical Orthogonal Teleconnection (EOT) analysis, is applied to precipitation observations over China in all seasons. Instead of maximizing the explained space-time variance, the method identifies regions in China that best explain the temporal variability in domain-averaged rainfall. It reproduces known teleconnections, including high positive correlations with ENSO in eastern China in winter, along the Yangtze River in summer, and in southeast China during spring. New findings include that variability along the southeast coast in winter, in the Yangtze valley in spring, and in eastern China in autumn is associated with extratropical Rossby wave trains. The same analysis is applied to six climate simulations of the Met Office Unified Model with and without air-sea coupling and at horizontal resolutions of 40, 90, and 200 km. All simulations reproduce the observed patterns of interannual rainfall variability in winter, spring, and autumn; the leading pattern in summer is present in all but one simulation. However, only in two simulations are all patterns associated with the observed physical mechanism. Coupled simulations capture more observed patterns of variability, and associate more of them with the correct physical mechanism, than atmosphere-only simulations at the same resolution. Finer resolution does not improve the fidelity of these patterns or their associated mechanisms. Evaluating climate models only by the geographical distribution of mean precipitation and its interannual variance is insufficient; attention must be paid to the associated mechanisms.
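The EOT procedure described above — find the grid-point series that best explains the domain-mean series, regress its linear signal out of every point, and repeat — can be sketched compactly. The following is a simplified illustration of that idea only (function name and interface are ours, not the authors' implementation):

```python
import numpy as np

def eot_leading_patterns(field, n_patterns=2):
    """Simplified EOT analysis on a (time, space) anomaly array.

    Each iteration picks the point whose series correlates most strongly
    with the domain-mean series, records its regression pattern, and
    removes that linear signal from the whole field.
    Returns a list of (base_point_index, regression_pattern, base_series).
    """
    X = np.array(field, dtype=float)
    results = []
    for _ in range(n_patterns):
        Xc = X - X.mean(axis=0)              # center each series in time
        dc = X.mean(axis=1)                  # domain-mean series
        dc = dc - dc.mean()
        denom = np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((dc ** 2).sum())
        denom[denom == 0] = np.nan
        r = (Xc * dc[:, None]).sum(axis=0) / denom
        base = int(np.nanargmax(r ** 2))     # point best explaining the mean
        bc = Xc[:, base]
        beta = (Xc * bc[:, None]).sum(axis=0) / (bc ** 2).sum()
        results.append((base, beta, X[:, base].copy()))
        X = X - np.outer(bc, beta)           # regress the base series out
    return results
```

Because the residual field is orthogonal to each extracted base series, successive patterns explain disjoint parts of the domain-mean variability, which is what makes the decomposition objective and order-dependent.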
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo
2015-01-01
Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.
NASA Astrophysics Data System (ADS)
Goupil, Ph.; Puyou, G.
2013-12-01
This paper presents a high-fidelity generic twin engine civil aircraft model developed by Airbus for advanced flight control system research. The main features of this benchmark are described to make the reader aware of the model complexity and representativeness. It is a complete representation including the nonlinear rigid-body aircraft model with a full set of control surfaces, actuator models, sensor models, flight control laws (FCL), and pilot inputs. Two applications of this benchmark in the framework of European projects are presented: FCL clearance using optimization and advanced fault detection and diagnosis (FDD).
NASA Technical Reports Server (NTRS)
Ellison, Donald H.; Englander, Jacob A.; Conway, Bruce A.
2017-01-01
The multiple gravity assist low-thrust (MGALT) trajectory model combines the medium-fidelity Sims-Flanagan bounded-impulse transcription with a patched-conics flyby model and is an important tool for preliminary trajectory design. While this model features fast state propagation via Kepler's equation and provides a pleasingly accurate estimation of the total mass budget for the eventual flight-suitable integrated trajectory, it does suffer from one major drawback, namely its temporal spacing of the control nodes. We introduce a variant of the MGALT transcription that utilizes the generalized anomaly from the universal formulation of Kepler's equation as a decision variable in addition to the trajectory phase propagation time. This results in two improvements over the traditional model. The first is that the maneuver locations are equally spaced in generalized anomaly about the orbit rather than in time. The second is that the Kepler propagator now has the generalized anomaly as its independent variable instead of time and thus becomes an iteration-free propagation method. The new algorithm is outlined, including the impact that this has on the computation of Jacobian entries for numerical optimization, and a motivating application problem is presented that illustrates the improvements that this model has over the traditional MGALT transcription.
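The iteration-free propagation noted above follows from the direction in which the universal Kepler's equation is evaluated: time of flight is an explicit function of the generalized anomaly χ, so no Newton iteration is needed when χ is the decision variable, whereas time-driven propagation must invert this relation iteratively. A sketch using the standard Stumpff-function form of the equation (conventional astrodynamics formulas; not the MGALT code itself):

```python
import math

def stumpff_C(z):
    """Stumpff function C(z), with series limit 1/2 near z = 0."""
    if z > 1e-8:
        return (1.0 - math.cos(math.sqrt(z))) / z
    if z < -1e-8:
        return (math.cosh(math.sqrt(-z)) - 1.0) / (-z)
    return 0.5

def stumpff_S(z):
    """Stumpff function S(z), with series limit 1/6 near z = 0."""
    if z > 1e-8:
        sz = math.sqrt(z)
        return (sz - math.sin(sz)) / sz ** 3
    if z < -1e-8:
        sz = math.sqrt(-z)
        return (math.sinh(sz) - sz) / sz ** 3
    return 1.0 / 6.0

def kepler_dt_from_chi(chi, r0, vr0, alpha, mu):
    """Universal Kepler's equation evaluated forward: time of flight as an
    explicit function of the generalized anomaly chi.

    r0: initial radius, vr0: initial radial velocity,
    alpha = 1/a (reciprocal semimajor axis), mu: gravitational parameter.
    """
    z = alpha * chi ** 2
    sqrt_mu = math.sqrt(mu)
    return (r0 * vr0 / sqrt_mu * chi ** 2 * stumpff_C(z)
            + (1.0 - alpha * r0) * chi ** 3 * stumpff_S(z)
            + r0 * chi) / sqrt_mu
```

For a circular orbit in canonical units (μ = 1, r0 = 1, vr0 = 0, α = 1) the relation collapses to Δt = χ, which makes the forward evaluation easy to sanity-check.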
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
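The "tightest prediction" idea above — minimize the width of the predicted range subject to every observation falling inside it — can be illustrated with a much-simplified stand-in for the paper's formulations: a polynomial mean with a single constant half-width, posed as a linear program. This is a sketch under those simplifying assumptions, not the RPM machinery of the paper; the function name is invented.

```python
import numpy as np
from scipy.optimize import linprog

def tightest_band(x, y, degree=1):
    """Fit a polynomial mean m(x) = p . phi(x) and the smallest constant
    half-width w such that every observation satisfies |y - m(x)| <= w.
    This minimax fit is a linear program: minimize w subject to
    -w <= y_i - phi_i . p <= w for all samples i."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    Phi = np.vander(x, degree + 1)        # polynomial basis, one row per sample
    n, k = Phi.shape
    # Decision vector: [p (k coefficients), w]. Objective: minimize w.
    c = np.zeros(k + 1); c[-1] = 1.0
    A_ub = np.block([[-Phi, -np.ones((n, 1))],   #  y_i - Phi_i p <= w
                     [ Phi, -np.ones((n, 1))]])  #  Phi_i p - y_i <= w
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * k + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    p, w = res.x[:k], res.x[-1]
    return p, w
```

For three points (0, 0), (0.5, 0.6), (1, 1) and a linear mean, the optimal band equioscillates with half-width 0.05.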
Schmied, Emily; Parada, Humberto; Horton, Lucy; Ibarra, Leticia; Ayala, Guadalupe
2015-10-01
Entre Familia: Reflejos de Salud was a successful family-based randomized controlled trial designed to improve dietary behaviors and intake among U.S. Latino families, specifically fruit and vegetable intake. The novel intervention design merged a community health worker (promotora) model with an entertainment-education component. This process evaluation examined intervention implementation and assessed relationships between implementation factors and dietary change. Participants included 180 mothers randomized to an intervention condition. Process evaluation measures were obtained from participant interviews and promotora notes and included fidelity, dose delivered (i.e., minutes of promotora in-person contact with families, number of promotora home visits), and dose received (i.e., participant use of and satisfaction with intervention materials). Outcome variables included changes in vegetable intake and the use of behavioral strategies to increase dietary fiber and decrease dietary fat intake. Participant satisfaction was high, and fidelity was achieved; 87.5% of families received the planned number of promotora home visits. In the multivariable model, satisfaction with intervention materials predicted more frequent use of strategies to increase dietary fiber (p ≤ .01). Trends suggested that keeping families on the prescribed intervention timeline and obtaining support from other social network members through sharing of program materials may improve dietary changes. Study findings elucidate the relationship between specific intervention processes and dietary changes. © 2015 Society for Public Health Education.
Limited data tomographic image reconstruction via dual formulation of total variation minimization
NASA Astrophysics Data System (ADS)
Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong
2011-03-01
X-ray mammography is the primary imaging modality for breast cancer screening. For dense breasts, however, the mammogram is usually difficult to read due to the tissue overlap problem caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low-dose projections over a limited angle range, may be an alternative modality for breast imaging, since it allows visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on a statistical model of X-ray tomography. The objective function comprises a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be calculated easily using simple operations in terms of auxiliary variables. After each descent step, the data fidelity term is renewed. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include TV regularization in the statistical reconstruction method, which results in fast and robust estimation for low-dose projections over the limited angle range. Initial tests with an experimental DBT system confirmed our finding.
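The paper's reconstruction couples the dual-TV step with a statistical tomography model; as a minimal stand-in, the Chambolle-style projection algorithm for the dual of TV denoising shows the core idea — the dual variable p is updated with simple gradient/divergence operations and no matrix inversion. This sketch handles denoising only, not the tomographic data term.

```python
import numpy as np

def grad(u):
    """Forward-difference image gradient (zero at the far boundary)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise_dual(f, lam=0.2, n_iter=100, tau=0.125):
    """Projection algorithm on the dual of
    min_u ||u - f||^2 / (2 * lam) + TV(u).
    Each iteration uses only grad/div and pointwise operations; the primal
    solution is recovered as u = f - lam * div(p)."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)
```

The step size tau = 1/8 is the classical choice that guarantees convergence of this scheme.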
The human factors of workstation telepresence
NASA Technical Reports Server (NTRS)
Smith, Thomas J.; Smith, Karl U.
1990-01-01
The term workstation telepresence has been introduced to describe human-telerobot compliance, which enables the human operator to effectively project his/her body image and behavioral skills to the control of the telerobot itself. Major human-factors considerations for establishing high fidelity workstation telepresence during human-telerobot operation are discussed. Telerobot workstation telepresence is defined by the proficiency and skill with which the operator is able to control sensory feedback from direct interaction with the workstation itself, and from workstation-mediated interaction with the telerobot. Numerous conditions influencing such control have been identified. This raises the question as to what specific factors most critically influence the realization of high fidelity workstation telepresence. The thesis advanced here is that perturbations in sensory feedback represent a major source of variability in human performance during interactive telerobot operation. Perturbed sensory feedback research over the past three decades has established that spatial transformations or temporal delays in sensory feedback engender substantial decrements in interactive task performance, which training does not completely overcome. A recently developed social cybernetic model of human-computer interaction can be used to guide this approach, based on computer-mediated tracking and control of sensory feedback. How the social cybernetic model can be employed for evaluating the various modes, patterns, and integrations of interpersonal, team, and human-computer interactions which play a central role in workstation telepresence is discussed.
Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation
NASA Astrophysics Data System (ADS)
Hatten, Noble; Russell, Ryan P.
2017-12-01
A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
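The stage-parallel structure mentioned above is easiest to see in a single fixed-step Gauss-Legendre IRK step: within each fixed-point sweep, the stage derivative evaluations are mutually independent and could be farmed out to threads. This is a minimal 2-stage (order 4) sketch, not the authors' variable-step, many-stage propagator.

```python
import numpy as np

# Butcher tableau for the 2-stage Gauss-Legendre IRK method (order 4).
SQ3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - SQ3 / 6.0],
              [0.25 + SQ3 / 6.0, 0.25]])
B = np.array([0.5, 0.5])
C = np.array([0.5 - SQ3 / 6.0, 0.5 + SQ3 / 6.0])

def glirk2_step(f, t, y, h, n_fixed_point=50):
    """One Gauss-Legendre IRK step using fixed-point iteration on the
    stage derivatives K. Within each sweep, the calls to f(...) for the
    different stages are independent of one another, which is what makes
    stage-parallel implementations possible."""
    y = np.atleast_1d(np.asarray(y, float))
    K = np.tile(f(t, y), (2, 1))          # initial guess: explicit Euler slope
    for _ in range(n_fixed_point):
        K = np.array([f(t + C[i] * h, y + h * (A[i] @ K))
                      for i in range(2)])  # independent per stage i
    return y + h * (B @ K)
```

For the test equation y' = -y with h = 0.1, the result matches exp(-h) to roughly the method's (2,2)-Padé accuracy.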
A real time Pegasus propulsion system model for VSTOL piloted simulation evaluation
NASA Technical Reports Server (NTRS)
Mihaloew, J. R.; Roth, S. P.; Creekmore, R.
1981-01-01
A real time propulsion system modeling technique suitable for use in man-in-the-loop simulator studies was developed. This technique provides the system accuracy, stability, and transient response required for integrated aircraft and propulsion control system studies. A Pegasus-Harrier propulsion system was selected as a baseline for developing mathematical modeling and simulation techniques for VSTOL. Initially, static and dynamic propulsion system characteristics were modeled in detail to form a nonlinear aerothermodynamic digital computer simulation of a Pegasus engine. From this high fidelity simulation, a real time propulsion model was formulated by applying a piecewise-linear state variable methodology. A hydromechanical and water injection control system was also simulated. The real time dynamic model includes the detail and flexibility required for the evaluation of critical control parameters and propulsion component limits over a limited flight envelope. The model was programmed for interfacing with a Harrier aircraft simulation. Typical propulsion system simulation results are presented.
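The piecewise-linear state-variable idea — re-linearized dynamics scheduled across operating points and stepped at a fixed real-time rate — can be sketched with a deliberately tiny example. All numbers here are invented for illustration; the actual Pegasus model has many states and a full breakpoint schedule.

```python
import numpy as np

# Hypothetical breakpoint table: the linearized engine time constant
# varies with operating point (spool speed fraction). Values illustrative.
SPEED_BKPTS = np.array([0.0, 0.5, 1.0])
TAU_BKPTS   = np.array([1.2, 0.7, 0.4])   # seconds

def tau_at(x):
    """Piecewise-linear interpolation of the linearized time constant."""
    return np.interp(x, SPEED_BKPTS, TAU_BKPTS)

def step_engine(x, u_cmd, dt):
    """One fixed-rate integration step of the piecewise-linear model
    xdot = (u_cmd - x) / tau(x): a first-order lag whose pole is
    re-evaluated at the current operating point each frame."""
    return x + dt * (u_cmd - x) / tau_at(x)

def run(x0, u_cmd, dt, n):
    x = x0
    for _ in range(n):
        x = step_engine(x, u_cmd, dt)
    return x
```

Stepped at 100 Hz, the state settles on the commanded value, faster at high power where the scheduled time constant is shorter.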
High-Fidelity Roadway Modeling and Simulation
NASA Technical Reports Server (NTRS)
Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit
2010-01-01
Roads are an essential feature in our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated by professional artists manually using modeling software tools such as Maya and 3ds Max. This approach requires both highly specialized and sophisticated skills and massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation put emphasis on the visual effects of the generated roads, not their geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals; as a result, the generated roads can support not only general applications such as games and simulations in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment. The proposed method then generates in real time 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shape files into 3D roads, along with the civil engineering road design principles involved. The proposed method can be used in many applications that have stringent requirements on high-precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.
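One elementary step in the shapefile-polyline-to-3D-road conversion is offsetting the 2D centerline into edge lines and lifting them onto the terrain. The sketch below shows only that step, with no superelevation, crown, or curve fitting; the function name and the callable terrain interface are assumptions for illustration.

```python
import numpy as np

def road_edges_3d(centerline_xy, width, elevation):
    """Offset a 2D road centerline into left/right edge polylines and lift
    them to 3D. `elevation` is any callable (x, y) -> z standing in for
    terrain data; real road generation would add cross-slope and
    design-rule checks on top of this geometry."""
    c = np.asarray(centerline_xy, float)
    # Unit tangents per vertex (central differences; one-sided at ends).
    d = np.gradient(c, axis=0)
    t = d / np.linalg.norm(d, axis=1, keepdims=True)
    n = np.column_stack([-t[:, 1], t[:, 0]])     # left-pointing unit normals
    left = c + 0.5 * width * n
    right = c - 0.5 * width * n
    z = lambda pts: np.array([elevation(x, y) for x, y in pts])
    return (np.column_stack([left, z(left)]),
            np.column_stack([right, z(right)]))
```

For a straight centerline along the x-axis with a 4 m width and flat terrain, the edges sit at y = +2 and y = -2 at zero elevation.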
A Comparison of Metallic, Composite and Nanocomposite Optimal Transonic Transport Wings
NASA Technical Reports Server (NTRS)
Kennedy, Graeme J.; Kenway, Gaetan K. W.; Martins, Joaquim R. R.
2014-01-01
Current and future composite material technologies have the potential to greatly improve the performance of large transport aircraft. However, the coupling between aerodynamics and structures makes it challenging to design optimal flexible wings, and the transonic flight regime requires high-fidelity computational models. We address these challenges by solving a series of high-fidelity aerostructural optimization problems that explore the design space for the wing of a large transport aircraft. We consider three different materials: aluminum, carbon-fiber-reinforced composites, and a hypothetical composite based on carbon nanotubes. The design variables consist of aerodynamic shape (including span), structural sizing, and, in the case of composites, ply-angle fractions. Pareto fronts with respect to structural weight and fuel burn are generated. The wing performance in each case is optimized subject to stress and buckling constraints. We found that composite wings consistently resulted in lower fuel burn and lower structural weight, and that the carbon nanotube composite did not yield the increase in performance one would expect from a material with such outstanding properties. This indicates that there might be diminishing returns when it comes to the application of advanced materials to wing design, requiring further investigation.
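The Pareto fronts above trade structural weight against fuel burn. As a stand-in for the high-fidelity aerostructural analyses, a toy weighted-sum sweep over two invented convex objectives shows how such a front is traced; this is only a sketch of the scalarization idea, not the study's optimization setup.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy stand-ins for two competing objectives (think fuel burn vs.
# structural weight); the real study evaluates these with high-fidelity
# aerostructural analyses instead.
f_fuel = lambda x: (x - 1.0) ** 2
f_weight = lambda x: (x + 1.0) ** 2

def pareto_front(n_points=11):
    """Sweep the weighted-sum scalarization alpha*f1 + (1-alpha)*f2 to
    trace a Pareto front. The sweep recovers the whole front here because
    both toy objectives are convex."""
    front = []
    for alpha in np.linspace(0.0, 1.0, n_points):
        obj = lambda x: alpha * f_fuel(x) + (1.0 - alpha) * f_weight(x)
        x_star = minimize_scalar(obj, bounds=(-2.0, 2.0), method="bounded").x
        front.append((f_fuel(x_star), f_weight(x_star)))
    return np.array(front)
```

As the weight on fuel burn grows, the front moves monotonically: fuel burn falls while structural weight rises, the signature tradeoff shape.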
Intervention Fidelity in Family-Based Prevention Counseling for Adolescent Problem Behaviors
ERIC Educational Resources Information Center
Hogue, Aaron; Liddle, Howard A.; Singer, Alisa; Leckrone, Jodi
2005-01-01
This study examined fidelity in multidimensional family prevention (MDFP), a family-based prevention counseling model for adolescents at high risk for substance abuse and related behavior problems, in comparison to two empirically based treatments for adolescent drug abuse: multidimensional family therapy (MDFT) and cognitive-behavioral therapy…
A "Common Factors" Approach to Developing Culturally Tailored HIV Prevention Interventions
ERIC Educational Resources Information Center
Owczarzak, Jill; Phillips, Sarah D.; Filippova, Olga; Alpatova, Polina; Mazhnaya, Alyona; Zub, Tatyana; Aleksanyan, Ruzanna
2016-01-01
The current dominant model of HIV prevention intervention dissemination involves packaging interventions developed in one context, training providers to implement that specific intervention, and evaluating the extent to which providers implement it with fidelity. Research shows that providers rarely implement these programs with fidelity due to…
ARC integration into the NEAMS Workbench
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stauff, N.; Gaughan, N.; Kim, T.
2017-01-01
One of the objectives of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Integration Product Line (IPL) is to facilitate the deployment of the high-fidelity codes developed within the program. The Workbench initiative was launched in FY-2017 by the IPL to facilitate the transition from conventional tools to high fidelity tools. The Workbench provides a common user interface for model creation, real-time validation, execution, output processing, and visualization for integrated codes.
Ellis, Deborah A; Berio, Heidi; Carcone, April Idalski; Naar-King, Sylvie
2012-01-01
Investigate effect of baseline motivation for change on treatment fidelity, therapeutic alliance, treatment dose, and treatment outcome in a randomized controlled trial of family therapy for youth with poorly controlled diabetes. Seventy-four adolescents and caregivers completed measures of motivation for change. Measures of fidelity, alliance, dose, and youth health status were collected. Structural equation modeling was used to test the direct and indirect effects of motivation on treatment outcomes. Parent motivation was significantly related to alliance and fidelity. Only alliance was significantly related to posttreatment metabolic control. In adolescent models, only motivation was significantly related to alliance. In both models, motivation had a significant indirect effect on metabolic control through alliance. Findings demonstrate the importance of parent and youth initial motivational status and treatment alliance to treatment outcome among youth with poorly controlled diabetes. Additional research on treatment techniques that promote motivation for change is needed.
Paracrine communication maximizes cellular response fidelity in wound signaling
Handly, L Naomi; Pilko, Anna; Wollman, Roy
2015-01-01
Population averaging due to paracrine communication can arbitrarily reduce cellular response variability. Yet, variability is ubiquitously observed, suggesting limits to paracrine averaging. It remains unclear whether and how biological systems may be affected by such limits of paracrine signaling. To address this question, we quantify the signal and noise of Ca2+ and ERK spatial gradients in response to an in vitro wound within a novel microfluidics-based device. We find that while paracrine communication reduces gradient noise, it also reduces the gradient magnitude. Accordingly, we predict the existence of a maximum gradient signal to noise ratio. Direct in vitro measurement of paracrine communication verifies these predictions and reveals that cells utilize optimal levels of paracrine signaling to maximize the accuracy of gradient-based positional information. Our results demonstrate the limits of population averaging and show the inherent tradeoff in utilizing paracrine communication to regulate cellular response fidelity. DOI: http://dx.doi.org/10.7554/eLife.09652.001 PMID:26448485
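The central tradeoff — neighbor averaging suppresses response noise but also flattens the spatial gradient — can be reproduced in a crude simulation. This is an invented stand-in model (a noisy step "wound" signal pooled over a neighborhood), not the paper's Ca2+/ERK system or its microfluidic geometry.

```python
import numpy as np

def response_stats(k, sigma=0.5, n_cells=200, n_reps=2000, seed=0):
    """Simulate a step-like wound signal sensed with noise, then averaged
    over 2k+1 neighboring cells (a crude stand-in for paracrine
    communication). Returns the maximum spatial gradient of the mean
    response and the per-cell response noise (std)."""
    rng = np.random.default_rng(seed)
    signal = np.where(np.arange(n_cells) < n_cells // 2, 1.0, 0.0)
    sensed = signal + sigma * rng.standard_normal((n_reps, n_cells))
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    # Pool each cell's value with its neighbors (paracrine averaging).
    pooled = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, sensed)
    grad = np.max(np.abs(np.diff(pooled.mean(axis=0))))
    noise = pooled.std(axis=0)[n_cells // 4]   # a cell far from the edge
    return grad, noise
```

Widening the pooling neighborhood lowers the noise roughly as 1/sqrt(2k+1) but also lowers the gradient magnitude as the step is smeared — the two competing effects whose ratio the paper shows is maximized at an intermediate communication level.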
Clark, Daniel E.; Koenen, Kiana K. G.; Whitney, Jillian J.; MacKenzie, Kenneth G.; DeStefano, Stephen
2016-01-01
While the breeding ecology of gulls (Laridae) has been well studied, their movements and spatial organization during the non-breeding season are poorly understood. The seasonal movements, winter-site fidelity, and site persistence of Ring-billed (Larus delawarensis) and Herring (L. argentatus) gulls at wintering areas were studied from 2008 to 2012. Satellite transmitters were deployed on Ring-billed Gulls (n = 21) and Herring Gulls (n = 14). Ten Ring-billed and six Herring gulls were tracked over multiple winters, and > 300 wing-tagged Ring-billed Gulls were followed to determine winter-site fidelity and persistence. Home range overlap for individuals between years ranged from 0 to 1.0 (95% minimum convex polygon) and from 0.31 to 0.79 (kernel utilization distributions). Ring-billed and Herring gulls remained at local wintering sites during the non-breeding season for 20–167 days and 74–161 days, respectively. The probability of a tagged Ring-billed Gull returning to the same site in subsequent winters was high; conversely, there was a low probability of a Ring-billed Gull returning to a different site. Ring-billed and Herring gulls exhibited high winter-site fidelity but variable site persistence during the winter season, leading to a high probability of encountering the same individuals in subsequent winters.
NASA Astrophysics Data System (ADS)
Doyle, Thomas K.; Haberlin, Damien; Clohessy, Jim; Bennison, Ashley; Jessopp, Mark
2017-04-01
For many marine migratory fish, comparatively little is known about the movement of individuals rather than the population. Yet, such individual-based movement data is vitally important to understand variability in migratory strategies and fidelity to foraging locations. A case in point is the economically important European sea bass (Dicentrarchus labrax L.) that inhabits coastal waters during the summer months before migrating offshore to spawn and overwinter. Beyond this broad generalisation we have very limited information on the movements of individuals at coastal foraging grounds. We used acoustic telemetry to track the summer movements and seasonal migrations of individual sea bass in a large tidally and estuarine influenced coastal environment. We found that the vast majority of tagged sea bass displayed long-term residency (mean, 167 days) and inter-annual fidelity (93% return rate) to specific areas. We describe individual fish home ranges of 3 km or less, and while fish clearly had core resident areas, there was movement of fish between closely located receivers. The combination of inter-annual fidelity to localised foraging areas makes sea bass very susceptible to local depletion; however, the designation of protected areas for sea bass may go a long way to ensuring the sustainability of this species.
Ma, H. -Y.; Chuang, C. C.; Klein, S. A.; ...
2015-11-06
Here, we present an improved procedure of generating initial conditions (ICs) for climate model hindcast experiments with specified sea surface temperature and sea ice. The motivation is to minimize errors in the ICs and lead to a better evaluation of atmospheric parameterizations' performance in the hindcast mode. We apply state variables (horizontal velocities, temperature and specific humidity) from the operational analysis/reanalysis for the atmospheric initial states. Without a data assimilation system, we apply a two-step process to obtain other necessary variables to initialize both the atmospheric (e.g., aerosols and clouds) and land models (e.g., soil moisture). First, we nudge only the model horizontal velocities towards operational analysis/reanalysis values, given a 6-hour relaxation time scale, to obtain all necessary variables. Compared to the original strategy in which horizontal velocities, temperature and specific humidity are nudged, the revised approach produces a better representation of initial aerosols and cloud fields which are more consistent and closer to observations and model's preferred climatology. Second, we obtain land ICs from an offline land model simulation forced with observed precipitation, winds, and surface fluxes. This approach produces more realistic soil moisture in the land ICs. With this refined procedure, the simulated precipitation, clouds, radiation, and surface air temperature over land are improved in the Day 2 mean hindcasts. Following this procedure, we propose a "Core" integration suite which provides an easily repeatable test allowing model developers to rapidly assess the impacts of various parameterization changes on the fidelity of modelled cloud-associated processes relative to observations.
NASA Astrophysics Data System (ADS)
Ma, H.-Y.; Chuang, C. C.; Klein, S. A.; Lo, M.-H.; Zhang, Y.; Xie, S.; Zheng, X.; Ma, P.-L.; Zhang, Y.; Phillips, T. J.
2015-12-01
We present an improved procedure of generating initial conditions (ICs) for climate model hindcast experiments with specified sea surface temperature and sea ice. The motivation is to minimize errors in the ICs and lead to a better evaluation of atmospheric parameterizations' performance in the hindcast mode. We apply state variables (horizontal velocities, temperature, and specific humidity) from the operational analysis/reanalysis for the atmospheric initial states. Without a data assimilation system, we apply a two-step process to obtain other necessary variables to initialize both the atmospheric (e.g., aerosols and clouds) and land models (e.g., soil moisture). First, we nudge only the model horizontal velocities toward operational analysis/reanalysis values, given a 6 h relaxation time scale, to obtain all necessary variables. Compared to the original strategy in which horizontal velocities, temperature, and specific humidity are nudged, the revised approach produces a better representation of initial aerosols and cloud fields which are more consistent and closer to observations and model's preferred climatology. Second, we obtain land ICs from an off-line land model simulation forced with observed precipitation, winds, and surface fluxes. This approach produces more realistic soil moisture in the land ICs. With this refined procedure, the simulated precipitation, clouds, radiation, and surface air temperature over land are improved in the Day 2 mean hindcasts. Following this procedure, we propose a "Core" integration suite which provides an easily repeatable test allowing model developers to rapidly assess the impacts of various parameterization changes on the fidelity of modeled cloud-associated processes relative to observations.
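The nudging (Newtonian relaxation) step described above adds a tendency that relaxes a model state toward the analysis with a 6 h time scale. A toy scalar integration makes the mechanism concrete; the drifting "model tendency" here is hypothetical, chosen only to give the nudging something to correct.

```python
import numpy as np

TAU = 6 * 3600.0          # 6 h relaxation time scale, in seconds

def integrate(u0, u_analysis, hours, dt=600.0, nudge=True):
    """Forward-Euler integration of du/dt = model_tendency + nudging,
    where the nudging term is (u_analysis - u) / TAU."""
    u = u0
    for _ in range(int(hours * 3600 / dt)):
        tendency = 1e-5 * (5.0 - u)        # hypothetical model bias toward 5 m/s
        if nudge:
            tendency += (u_analysis - u) / TAU
        u += dt * tendency
    return u
```

Over a 72 h spin-up toward a 10 m/s analysis wind, the nudged run ends close to the analysis while the free run drifts toward the model's biased equilibrium, which is exactly why nudged states make cleaner hindcast ICs.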
Klingaman, Nicholas P.; Woolnough, Steven J.; Jiang, Xianan; ...
2015-04-10
Here, many theories for the Madden-Julian oscillation (MJO) focus on diabatic processes, particularly the evolution of vertical heating and moistening. Poor MJO performance in weather and climate models is often blamed on biases in these processes and their interactions with the large-scale circulation. We introduce one of the three components of a model evaluation project, which aims to connect MJO fidelity in models to their representations of several physical processes, focusing on diabatic heating and moistening. This component consists of 20 day hindcasts, initialized daily during two MJO events in winter 2009–2010. The 13 models exhibit a range of skill: several have accurate forecasts to 20 days lead, while others perform similarly to statistical models (8–11 days). Models that maintain the observed MJO amplitude accurately predict propagation, but not vice versa. We find no link between hindcast fidelity and the precipitation-moisture relationship, in contrast to other recent studies. There is also no relationship between models' performance and the evolution of their diabatic heating profiles with rain rate. A more robust association emerges between models' fidelity and net moistening: the highest-skill models show a clear transition from low-level moistening for light rainfall to midlevel moistening at moderate rainfall and upper level moistening for heavy rainfall. The midlevel moistening, arising from both dynamics and physics, may be most important. Accurately representing many processes may be necessary but not sufficient for capturing the MJO, which suggests that models fail to predict the MJO for a broad range of reasons and limits the possibility of finding a panacea.
Relationship Among Signal Fidelity, Hearing Loss, and Working Memory for Digital Noise Suppression.
Arehart, Kathryn; Souza, Pamela; Kates, James; Lunner, Thomas; Pedersen, Michael Syskind
2015-01-01
This study considered speech modified by additive babble combined with noise-suppression processing. The purpose was to determine the relative importance of the signal modifications, individual peripheral hearing loss, and individual cognitive capacity on speech intelligibility and speech quality. The participant group consisted of 31 individuals with moderate high-frequency hearing loss ranging in age from 51 to 89 years (mean = 69.6 years). Speech intelligibility and speech quality were measured using low-context sentences presented in babble at several signal-to-noise ratios. Speech stimuli were processed with a binary mask noise-suppression strategy with systematic manipulations of two parameters (error rate and attenuation values). The cumulative effects of signal modification produced by babble and signal processing were quantified using an envelope-distortion metric. Working memory capacity was assessed with a reading span test. Analysis of variance was used to determine the effects of signal processing parameters on perceptual scores. Hierarchical linear modeling was used to determine the role of degree of hearing loss and working memory capacity in individual listener response to the processed noisy speech. The model also considered improvements in envelope fidelity caused by the binary mask and the degradations to envelope caused by error and noise. The participants showed significant benefits in terms of intelligibility scores and quality ratings for noisy speech processed by the ideal binary mask noise-suppression strategy. This benefit was observed across a range of signal-to-noise ratios and persisted when up to a 30% error rate was introduced into the processing. Average intelligibility scores and average quality ratings were well predicted by an objective metric of envelope fidelity. 
Degree of hearing loss and working memory capacity were significant factors in explaining individual listener's intelligibility scores for binary mask processing applied to speech in babble. Degree of hearing loss and working memory capacity did not predict listeners' quality ratings. The results indicate that envelope fidelity is a primary factor in determining the combined effects of noise and binary mask processing for intelligibility and quality of speech presented in babble noise. Degree of hearing loss and working memory capacity are significant factors in explaining variability in listeners' speech intelligibility scores but not in quality ratings.
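The binary-mask processing and error-rate manipulation described above can be sketched on a grid of time-frequency cells: keep cells where the local SNR exceeds 0 dB, attenuate the rest, and flip a fraction of the mask decisions to model estimation errors. The power grids, attenuation value, and function name below are illustrative assumptions, not the study's stimuli or processing chain.

```python
import numpy as np

def apply_binary_mask(speech_pow, noise_pow, error_rate=0.0,
                      atten_db=-10.0, seed=0):
    """Ideal-binary-mask style suppression on time-frequency power grids:
    keep cells where speech power exceeds noise power, attenuate the rest,
    and optionally flip a fraction of mask decisions to model errors.
    Returns the output SNR in dB."""
    rng = np.random.default_rng(seed)
    mask = speech_pow > noise_pow                  # ideal 0 dB threshold
    flip = rng.random(mask.shape) < error_rate
    mask = np.where(flip, ~mask, mask)
    gain_pow = np.where(mask, 1.0, 10.0 ** (atten_db / 10.0))
    return 10 * np.log10(np.sum(speech_pow * gain_pow)
                         / np.sum(noise_pow * gain_pow))

# Hypothetical T-F power grids: sparse speech energy in steady noise.
rng = np.random.default_rng(1)
speech = rng.exponential(1.0, (50, 64)) * (rng.random((50, 64)) < 0.2)
noise = np.full((50, 64), 0.3)
```

The ideal mask raises the output SNR above the unprocessed mixture because it selectively attenuates noise-dominated cells; injecting mask errors erodes that benefit, mirroring the study's error-rate manipulation.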
Sobriety Treatment and Recovery Teams: Implementation Fidelity and Related Outcomes.
Huebner, Ruth A; Posze, Lynn; Willauer, Tina M; Hall, Martin T
2015-01-01
Although integrated programs between child welfare and substance abuse treatment are recommended for families with co-occurring child maltreatment and substance use disorders, implementing integrated service delivery strategies with fidelity is a challenging process. This study of the first five years of the Sobriety Treatment and Recovery Team (START) program examines implementation fidelity using a model proposed by Carroll et al. (2007). The study describes the process of strengthening moderators of implementation fidelity, trends in adherence to START service delivery standards, and trends in parent and child outcomes. Qualitative and quantitative measures were used to prospectively study three START sites serving 341 families with 550 parents and 717 children. To achieve implementation fidelity to service delivery standards required a pre-service year and two full years of operation, persistent leadership, and facilitative actions that challenged the existing paradigm. Over four years of service delivery, the time from the child protective services report to completion of five drug treatment sessions was reduced by an average of 75 days. This trend was associated with an increase in parent retention, parental sobriety, and parent retention of child custody. Conclusions/Importance: Understanding the implementation processes necessary to establish complex integrated programs may support realistic allocation of resources. Although implementation fidelity is a moderator of program outcome, complex inter-agency interventions may benefit from innovative measures of fidelity that promote improvement without extensive cost and data collection burden. The implementation framework applied in this study was useful in examining implementation processes, fidelity, and related outcomes.
How much rainfall sustained a Green Sahara during the mid-Holocene?
NASA Astrophysics Data System (ADS)
Hopcroft, Peter; Valdes, Paul; Harper, Anna
2016-04-01
The present-day Sahara desert has periodically transformed into an area of lakes and vegetation during the Quaternary in response to orbitally induced changes in the monsoon circulation. Coupled atmosphere-ocean general circulation model simulations of the mid-Holocene generally underestimate the required monsoon shift, casting doubt on the fidelity of these models. However, the climatic regime that characterised this period remains unclear. To address this, we applied an ensemble of dynamic vegetation model simulations using two different models: JULES (Joint UK Land Environment Simulator), a comprehensive land surface model, and LPJ (Lund-Potsdam-Jena), a widely used dynamic vegetation model. The simulations are forced with a number of idealized climate scenarios, in which an observational climatology is progressively altered with imposed anomalies of precipitation and other related variables, including cloud cover and humidity. The applied anomalies are based on an ensemble of general circulation model simulations, and include seasonal variations but are spatially uniform across the region. When perturbing precipitation alone, a significant increase of at least 700 mm/year is required to produce model simulations with non-negligible vegetation coverage in the Sahara region. Changes in related variables including cloud cover, surface radiation fluxes, and humidity are found to be important in the models, as they modify the water balance and so affect plant growth. Including anomalies in all of these variables together reduces the precipitation change required for a Green Sahara compared to the case of increasing precipitation alone. We assess whether the precipitation changes implied by these vegetation model simulations are consistent with reconstructions for the mid-Holocene from pollen samples. Further, Earth System models predict precipitation increases that are significantly smaller than those inferred from these vegetation model simulations. Understanding this difference presents an ongoing challenge.
Enforcing elemental mass and energy balances for reduced order models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J.; Agarwal, K.; Sharma, P.
2012-01-01
Development of economically feasible gasification and carbon capture, utilization and storage (CCUS) technologies requires a variety of software tools to optimize the designs of not only the key devices involved (e.g., gasifier, CO2 adsorber) but also the entire power generation system. High-fidelity models such as Computational Fluid Dynamics (CFD) models are capable of accurately simulating the detailed flow dynamics, heat transfer, and chemistry inside the key devices. However, the integration of CFD models within steady-state process simulators, and subsequent optimization of the integrated system, still presents significant challenges due to the scale differences in both time and length, as well as the high computational cost. A reduced order model (ROM) generated from a high-fidelity model can serve as a bridge between the models of different scales. While high-fidelity models are built upon the principles of mass, momentum, and energy conservation, ROMs are usually developed from regression-type equations, and hence their predictions may violate the mass and energy conservation laws. A high-fidelity model may also exhibit mass and energy balance errors if it is not tightly converged. Conservation of mass and energy is important when a ROM is integrated into a flowsheet for process simulation of the entire chemical or power generation system, especially when recycle streams are connected to the modeled device. As part of the Carbon Capture Simulation Initiative (CCSI) project supported by the U.S. Department of Energy, we developed a software framework for generating ROMs from CFD simulations and integrating them with Process Modeling Environments (PMEs) for system-wide optimization. This paper presents a method to correct the results of a high-fidelity model or a ROM such that elemental mass and energy are conserved exactly.
Correction factors for the flow rates of individual species in the product streams are solved for using a minimization algorithm based on the Lagrangian multiplier method. Enthalpies of product streams are also modified to enforce the energy balance. The approach is illustrated for two ROMs, one based on a CFD model of an entrained-flow gasifier and the other based on a CFD model of a multiphase CO2 adsorber.
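The balance-correction step described above lends itself to a small sketch: find per-species correction factors, as close to unity as possible, that force the product flows to satisfy the elemental balances. The species list, composition matrix, and flow values below are hypothetical illustrations, not the CCSI models, and the generic constrained solver stands in for the paper's Lagrangian multiplier algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical gasifier product stream: molar flows (mol/s) of CO, CO2, H2, H2O
species = ["CO", "CO2", "H2", "H2O"]
flow_rom = np.array([1.00, 0.50, 1.20, 0.80])   # ROM-predicted flows (slightly unbalanced)

# Elemental composition matrix E[element, species] for C, O, H
E = np.array([
    [1, 1, 0, 0],   # C atoms per molecule
    [1, 2, 0, 1],   # O atoms per molecule
    [0, 0, 2, 2],   # H atoms per molecule
])
elem_in = np.array([1.48, 2.85, 4.05])  # elemental feed rates (mol/s) to conserve

# Minimize squared deviation of the correction factors f from 1,
# subject to the elemental balance E @ (f * flow_rom) == elem_in.
def objective(f):
    return np.sum((f - 1.0) ** 2)

cons = {"type": "eq", "fun": lambda f: E @ (f * flow_rom) - elem_in}
res = minimize(objective, x0=np.ones(len(species)), constraints=cons)
flow_corrected = res.x * flow_rom

print("correction factors:", res.x)
print("balance residuals: ", E @ flow_corrected - elem_in)  # ~0 after correction
```

With three elemental constraints and four species flows, the problem is underdetermined, so the objective selects the balanced solution closest to the ROM prediction.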
Application of dynamic flux balance analysis to an industrial Escherichia coli fermentation.
Meadows, Adam L; Karnik, Rahi; Lam, Harry; Forestell, Sean; Snedecor, Brad
2010-03-01
We have developed a reactor-scale model of Escherichia coli metabolism and growth in a 1000 L process for the production of a recombinant therapeutic protein. The model consists of two distinct parts: (1) a dynamic, process-specific portion that describes the time evolution of 37 process variables of relevance and (2) a flux-balance-based, 123-reaction metabolic model of E. coli metabolism. This model combines several previously reported modeling approaches, including a growth rate-dependent biomass composition, a maximum-growth-rate objective function, and dynamic flux balancing. In addition, we introduce concentration-dependent boundary conditions on transport fluxes, dynamic maintenance demands, and a state-dependent cellular objective. This formulation was able to describe specific runs with high fidelity over process conditions including rich media, simultaneous acetate and glucose consumption, glucose minimal media, and phosphate-depleted media. Furthermore, the model accurately describes the effect of process perturbations (such as glucose overbatching and insufficient aeration) on growth, metabolism, and titer.
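The dynamic flux balancing described above can be sketched in miniature: at each time step an inner linear program chooses fluxes subject to a concentration-dependent uptake bound, and an outer loop integrates biomass and substrate forward in time. The two-reaction "network" and all kinetic parameters below are hypothetical stand-ins for the paper's 123-reaction model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy dynamic flux balance analysis (DFBA): glucose uptake v_glc and growth
# v_mu stand in for a full metabolic network. A Michaelis-Menten-style bound
# couples the LP to the extracellular glucose concentration.
vmax, Km, yield_x = 10.0, 0.5, 0.1   # hypothetical kinetic and yield parameters
dt, X, G = 0.1, 0.05, 10.0           # time step (h), biomass (g/L), glucose (mmol/L)

traj = []
for _ in range(50):
    uptake_max = vmax * G / (Km + G)     # concentration-dependent boundary condition
    # LP over x = [v_glc, v_mu]: maximize v_mu subject to v_mu <= yield_x * v_glc
    # and 0 <= v_glc <= uptake_max. linprog minimizes, so negate the objective.
    res = linprog(c=[0.0, -1.0],
                  A_ub=[[-yield_x, 1.0]], b_ub=[0.0],
                  bounds=[(0.0, uptake_max), (0.0, None)])
    v_glc, v_mu = res.x
    X += v_mu * X * dt                   # integrate biomass growth
    G = max(G - v_glc * X * dt, 0.0)     # integrate substrate depletion
    traj.append((X, G))

print(f"final biomass {X:.3f} g/L, glucose {G:.3f} mmol/L")
```

Biomass grows at the LP-optimal rate until glucose is exhausted, after which the uptake bound collapses to zero and growth stops, which is the basic mechanism a full DFBA model exploits.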
Micromechanics Modeling of Composites Subjected to Multiaxial Progressive Damage in the Constituents
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.
2010-01-01
The high-fidelity generalized method of cells composite micromechanics model is extended to include constituent-scale progressive damage via a proposed damage model. The damage model assumes that all material nonlinearity is due to damage in the form of reduced stiffness, and it uses six scalar damage variables (three for tension and three for compression) to track the damage. Damage strains are introduced that account for interaction among the strain components and that also allow the development of the damage evolution equations based on the constituent material uniaxial stress-strain response. Local final-failure criteria are also proposed based on mode-specific strain energy release rates and total dissipated strain energy. The coupled micromechanics-damage model described herein is applied to a unidirectional E-glass/epoxy composite and a proprietary polymer matrix composite. Results illustrate the capability of the coupled model to capture the vastly different character of the monolithic (neat) resin matrix and the composite in response to far-field tension, compression, and shear loading.
Synthetic Air Data Estimation: A case study of model-aided estimation
NASA Astrophysics Data System (ADS)
Lie, F. Adhika Pradipta
A method for estimating airspeed, angle of attack, and sideslip without using a conventional pitot-static air data system is presented. The method relies on measurements from GPS, an inertial measurement unit (IMU), and a low-fidelity model of the aircraft's dynamics, which are fused using two cascaded extended Kalman filters. In the cascaded architecture, the first filter uses information from the IMU and GPS to estimate the aircraft's absolute velocity and attitude. These estimates are used as the measurement updates for the second filter, where they are fused with the aircraft dynamics model to generate estimates of airspeed, angle of attack, and sideslip. Methods for dealing with the time and inter-state correlation in the measurements coming from the first filter are discussed. Simulation and flight test results of the method are presented. Simulation results using a high-fidelity nonlinear model show that airspeed, angle of attack, and sideslip angle estimation errors are less than 0.5 m/s, 0.1 deg, and 0.2 deg RMS, respectively. Factors that affect the accuracy, including the implications and impact of using a low-fidelity aircraft model, are discussed. It is shown using flight tests that a single linearized aircraft model can be used in lieu of a high-fidelity, nonlinear model to provide reasonably accurate estimates of airspeed (less than 2 m/s error), angle of attack (less than 3 deg error), and sideslip angle (less than 5 deg error). This performance is shown to be relatively insensitive to off-trim attitudes but very sensitive to off-trim velocity.
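As a minimal illustration of what the second filter ultimately produces, the standard flight-mechanics relations recover airspeed, angle of attack, and sideslip from body-axis relative velocity components. The Kalman filtering and wind estimation that produce those components are omitted here, and the numerical inputs are hypothetical.

```python
import math

def synthetic_air_data(u, v, w):
    """Airspeed, angle of attack, and sideslip from body-axis relative
    velocity components (m/s), using the standard flight-mechanics
    definitions: V = |v_rel|, alpha = atan2(w, u), beta = asin(v / V)."""
    V = math.sqrt(u * u + v * v + w * w)   # true airspeed
    alpha = math.atan2(w, u)               # angle of attack (rad)
    beta = math.asin(v / V)                # sideslip angle (rad)
    return V, alpha, beta

# Hypothetical trim-like condition: mostly forward flight, small v and w.
V, alpha, beta = synthetic_air_data(u=60.0, v=1.0, w=3.0)
print(f"V = {V:.2f} m/s, alpha = {math.degrees(alpha):.2f} deg, "
      f"beta = {math.degrees(beta):.2f} deg")
```

In the paper's architecture these quantities are filter states rather than direct computations, but the relations above define what the estimator is reconstructing.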
Li, Bo; Li, Sheng-Hao; Zhou, Huan-Qiang
2009-06-01
A systematic analysis is performed for quantum phase transitions in a two-dimensional anisotropic spin-1/2 antiferromagnetic XYX model in an external magnetic field. With the help of an innovative tensor network algorithm, we compute the fidelity per lattice site to demonstrate that the field-induced quantum phase transition is unambiguously characterized by a pinch point on the fidelity surface, marking a continuous phase transition. We also compute an entanglement estimator, defined as a ratio between the one-tangle and the sum of squared concurrences, to identify both the factorizing field and the critical point, resulting in a quantitative agreement with quantum Monte Carlo simulation. In addition, the local order parameter is "derived" from the tensor network representation of the system's ground-state wave functions.
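The fidelity per lattice site invoked above is conventionally defined as the scaling limit of the ground-state overlap. The following is the standard tensor-network-literature definition with assumed notation, not an equation reproduced from the paper:

```latex
% For ground states |\psi_1(\lambda_1)\rangle and |\psi_2(\lambda_2)\rangle
% of an N-site lattice, the fidelity F and fidelity per lattice site d are
\[
  F(\lambda_1,\lambda_2)
    = \left| \langle \psi_1(\lambda_1) \mid \psi_2(\lambda_2) \rangle \right|,
  \qquad
  \ln d(\lambda_1,\lambda_2)
    = \lim_{N \to \infty} \frac{\ln F(\lambda_1,\lambda_2)}{N}.
\]
% A pinch point on the surface d(\lambda_1,\lambda_2) marks the transition.
```

Because F itself vanishes exponentially with system size for distinct states, the per-site quantity d is the well-defined object whose surface exhibits the pinch point described in the abstract.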
Strengthening organizations to implement evidence-based clinical practices.
VanDeusen Lukas, Carol; Engle, Ryann L; Holmes, Sally K; Parker, Victoria A; Petzel, Robert A; Nealon Seibert, Marjorie; Shwartz, Michael; Sullivan, Jennifer L
2010-01-01
Despite recognition that implementation of evidence-based clinical practices (EBPs) usually depends on the structure and processes of the larger health care organizational context, the dynamics of implementation are not well understood. This project's aim was to deepen that understanding by implementing and evaluating an organizational model hypothesized to strengthen the ability of health care organizations to facilitate EBPs. The conceptual model posits that implementation of EBPs will be enhanced through the presence of three interacting components: active leadership commitment to quality, robust clinical process redesign incorporating EBPs into routine operations, and use of management structures and processes to support and align redesign. In a mixed-methods longitudinal comparative case study design, seven medical centers in one network in the Department of Veterans Affairs participated in an intervention to implement the organizational model over 3 years. The network was selected randomly from three interested in using the model. The target EBP was hand-hygiene compliance. Measures included ratings of implementation fidelity, observed hand-hygiene compliance, and factors affecting model implementation drawn from interviews. Analyses support the hypothesis that greater fidelity to the organizational model was associated with higher compliance with hand-hygiene guidelines. High-fidelity sites showed larger effect sizes for improvement in hand-hygiene compliance than lower-fidelity sites. Adherence to the organizational model was in turn affected by factors in three categories: urgency to improve, organizational environment, and improvement climate. Implementation of EBPs, particularly those that cut across multiple processes of care, is a complex process with many possibilities for failure.
The results provide the basis for a refined understanding of relationships among components of the organizational model and factors in the organizational context affecting them. This understanding suggests practical lessons for future implementation efforts and contributes to theoretical understanding of the dynamics of the implementation of EBPs.
Bonfils, Celine J. W.; Santer, Benjamin D.; Phillips, Thomas J.; ...
2015-12-18
The El Niño–Southern Oscillation (ENSO) is an important driver of regional hydroclimate variability through far-reaching teleconnections. This study uses simulations performed with coupled general circulation models (CGCMs) to investigate how regional precipitation in the twenty-first century may be affected by changes in both ENSO-driven precipitation variability and slowly evolving mean rainfall. First, a dominant, time-invariant pattern of canonical ENSO variability (cENSO) is identified in observed SST data. Next, the fidelity with which 33 state-of-the-art CGCMs represent the spatial structure and temporal variability of this pattern (as well as its associated precipitation responses) is evaluated in simulations of twentieth-century climate change. Possible changes in both the temporal variability of this pattern and its associated precipitation teleconnections are investigated in twenty-first-century climate projections. Models with better representation of the observed structure of the cENSO pattern produce winter rainfall teleconnection patterns that are in better accord with twentieth-century observations and more stationary during the twenty-first century. Finally, the model-predicted twenty-first-century rainfall response to cENSO is decomposed into the sum of three terms: 1) the twenty-first-century change in the mean state of precipitation, 2) the historical precipitation response to the cENSO pattern, and 3) a future enhancement in the rainfall response to cENSO, which amplifies rainfall extremes. By examining the three terms jointly, this conceptual framework allows the identification of regions likely to experience future rainfall anomalies that are without precedent in the current climate.
75 FR 13147 - Integrity Life Insurance Company, et al.;
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-18
... Management, at (202) 551-6795. SUPPLEMENTARY INFORMATION: The following is a summary of the application. The... Insurance Products Fund III, Variable Insurance Products Fund II, nor Fidelity Management and Research... both--of stocks or both--of domestic and domestic and foreign issuers foreign issuers using fundamental...
78 FR 38413 - American Family Life Insurance Company, et al.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-26
...] American Family Life Insurance Company, et al. June 20, 2013. AGENCY: Securities and Exchange Commission...: American Family Life Insurance Company (the ``Company''), American Family Variable Account I (the ``Life... Products Fund (``Fidelity Fund''), currently held by the Life Account and the Annuity Account (each an...
Technology Implementation in Education--Identifying Barriers to Fidelity
ERIC Educational Resources Information Center
Dennis, William J.; Johnson, Daniel L.; Monroe, Arla K.
2012-01-01
This report describes a problem-based learning project focused on determining the barriers to the implementation of technological innovations. Research findings offered evidence that properly executed technology implementation is an instructional variable related to student achievement; yet, school district leaders are faced with the problem of…
The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models
NASA Technical Reports Server (NTRS)
Penn, John M.
2016-01-01
The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both the Linux and Mac OS X operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
Modeling Urban Scenarios & Experiments: Fort Indiantown Gap Data Collections Summary and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Daniel E.; Bandstra, Mark S.; Davidson, Gregory G.
This report summarizes experimental radiation detector, contextual sensor, weather, and global positioning system (GPS) data collected to inform and validate a comprehensive, operational radiation transport modeling framework to evaluate radiation detector system and algorithm performance. This framework will be used to study the influence of systematic effects (such as geometry, background activity, background variability, environmental shielding, etc.) on detector responses and algorithm performance using synthetic time series data. This work consists of performing data collection campaigns at a canonical, controlled environment for complete radiological characterization to help construct and benchmark a high-fidelity model with quantified system geometries, detector response functions, and source terms for background and threat objects. These data also provide an archival benchmark dataset that can be used by the radiation detection community. The data reported here span four data collection campaigns conducted between May 2015 and September 2016.
ERIC Educational Resources Information Center
Shapley, Kelly S.; Sheehan, Daniel; Maloney, Catherine; Caranikas-Walker, Fanny
2010-01-01
In a pilot study of the Technology Immersion model, high-need middle schools were "immersed" in technology by providing a laptop for each student and teacher, wireless Internet access, curricular and assessment resources, professional development, and technical and pedagogical support. This article examines the fidelity of model…
Nursing Simulation: A Review of the Past 40 Years
ERIC Educational Resources Information Center
Nehring, Wendy M.; Lashley, Felissa R.
2009-01-01
Simulation, in its many forms, has been a part of nursing education and practice for many years. The use of games, computer-assisted instruction, standardized patients, virtual reality, and low-fidelity to high-fidelity mannequins has appeared in the past 40 years, whereas anatomical models, partial task trainers, and role playing were used…
Features that contribute to the usefulness of low-fidelity models for surgical skills training.
Langebæk, R; Berendt, M; Pedersen, L T; Jensen, A L; Eika, B
2012-04-07
For practical, ethical and economic reasons, veterinary surgical education is becoming increasingly dependent on models for training. The limited availability and high cost of commercially produced surgical models has increased the need for useful, low-cost alternatives. For this reason, a number of models were developed to be used in a basic surgical skills course for veterinary students. The models were low fidelity, having limited resemblance to real animals. The aim of the present study was to describe the students' learning experience with the models and to report their perception of the usefulness of the models in applying the trained skills to live animal surgery. One hundred and forty-six veterinary fourth-year students evaluated the models on a four-point Likert scale. Of these, 26 additionally participated in individual semistructured interviews. The survey results showed that 75 per cent of the students rated the models 'useful'/'very useful'. Interviews revealed that tactile, dimensional, visual, situational and emotional features are important to students' perception of a successful translation of skills from models to live animal. In conclusion, low-fidelity models are useful educational tools in preparation for live animal surgery. However, there are specific features to take into account when developing models in order for students to perceive them as useful.
Spawning site fidelity of wild and hatchery lake trout (Salvelinus namaycush) in northern Lake Huron
Binder, Thomas; Riley, Stephen C.; Holbrook, Christopher; Hansen, Michael J.; Bergstedt, Roger A.; Bronte, Charles R.; He, Ji; Krueger, Charles C.
2016-01-01
Fidelity to high-quality spawning sites helps ensure that adults repeatedly spawn at sites that maximize reproductive success. Fidelity is also an important behavioural characteristic to consider when hatchery-reared individuals are stocked for species restoration, because artificial rearing environments may interfere with cues that guide appropriate spawning site selection. Acoustic telemetry was used in conjunction with Cormack–Jolly–Seber capture–recapture models to compare degree of spawning site fidelity of wild and hatchery-reared lake trout (Salvelinus namaycush) in northern Lake Huron. Annual survival was estimated to be between 77% and 81% and did not differ among wild and hatchery males and females. Site fidelity estimates were high in both wild and hatchery-reared lake trout (ranging from 0.78 to 0.94, depending on group and time filter), but were slightly lower in hatchery-reared fish than in wild fish. The ecological implication of the small difference in site fidelity between wild and hatchery-reared lake trout is unclear, but similarities in estimates suggest that many hatchery-reared fish use similar spawning sites to wild fish and that most return to those sites annually for spawning.
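The Cormack–Jolly–Seber machinery used above builds a likelihood from individual encounter histories. As a toy illustration, the probability of a single history under constant survival phi and detection p (not the authors' group- and time-varying parameterization) can be computed as:

```python
def cjs_history_prob(history, phi, p):
    """Probability of one capture history (string of '0'/'1'; the first entry
    is the release occasion) under a CJS model with constant survival phi and
    detection p. The trailing chi term covers 'died or never detected again'
    after the last sighting."""
    last = max(i for i, c in enumerate(history) if c == "1")
    prob = 1.0
    for i in range(1, last + 1):
        # Survive each interval, then be detected ('1') or missed ('0').
        prob *= phi * (p if history[i] == "1" else (1 - p))
    # chi: probability of never being observed after occasion `last`.
    chi = 1.0
    for _ in range(len(history) - 1 - last):
        chi = (1 - phi) + phi * (1 - p) * chi
    return prob * chi

print(cjs_history_prob("101", phi=0.8, p=0.9))
```

For history "101" (released, missed once, resighted) this gives phi * (1 - p) * phi * p = 0.0576 with phi = 0.8 and p = 0.9; a full CJS fit maximizes the product of such terms over all tagged fish.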
Burgermaster, Marissa; Gray, Heewon Lee; Tipton, Elizabeth; Contento, Isobel; Koch, Pamela
2017-01-01
Childhood obesity is a complex, worldwide problem. Significant resources are invested in its prevention, and high-quality evaluations of these efforts are important. Conducting trials in school settings is complicated, making process evaluations useful for explaining results. Intervention fidelity has been demonstrated to influence outcomes, but others have suggested that other aspects of implementation, including participant responsiveness, should be examined more systematically. During Food, Health & Choices (FHC), a school-based childhood obesity prevention trial designed to test a curriculum and wellness policy taught by trained FHC instructors to fifth grade students in 20 schools during 2012-2013, we assessed relationships among facilitator behaviors (i.e., fidelity and teacher interest); participant behaviors (i.e., student satisfaction and recall); and program outcomes (i.e., energy balance-related behaviors) using hierarchical linear models, controlling for student, class, and school characteristics. We found positive relationships between student satisfaction and recall and program outcomes, but not fidelity and program outcomes. We also found relationships between teacher interest and fidelity when teachers participated in implementation. Finally, we found a significant interaction between fidelity and satisfaction on behavioral outcomes. These findings suggest that individual students in the same class responded differently to the same intervention. They also suggest the importance of teacher buy-in for successful intervention implementation. Future studies should examine how facilitator and participant behaviors together are related to both outcomes and implementation. Assessing multiple aspects of implementation using models that account for contextual influences on behavioral outcomes is an important step forward for prevention intervention process evaluations.
Robitaille, Arnaud; Perron, Roger; Germain, Jean-François; Tanoubi, Issam; Georgescu, Mihai
2015-04-01
Transcutaneous cardiac pacing (TCP) is a potentially lifesaving technique that is part of the recommended treatment for symptomatic bradycardia. TCP, however, is used infrequently, and its successful application is not straightforward. Simulation could, therefore, play an important role in the teaching and assessment of TCP competence. However, even the highest-fidelity mannequins available on the market have important shortcomings, which limit the potential of simulation. Six criteria defining clinical competency in TCP were established and used as a starting point in the creation of an improved TCP simulator. The goal was a model that could be used to assess experienced clinicians, an objective that justifies the additional effort required by the increased fidelity. The proposed 2-mannequin model (TMM) combines a highly modified Human Patient Simulator with a SimMan 3G, the latter being used solely to provide the electrocardiography (ECG) tracing. The TMM improves the potential of simulation to assess experienced clinicians (1) by reproducing key features of TCP, like using the same multifunctional pacing electrodes used clinically, allowing dual ECG monitoring, and responding with upper body twitching when stimulated, but equally importantly (2) by reproducing key pitfalls of the technique, like allowing pacing electrode misplacement and reproducing false signs of ventricular capture, commonly, but erroneously, used clinically to establish that effective pacing has been achieved (like body twitching, electrical artifact on the ECG, and electrical capture without ventricular capture). The proposed TMM uses a novel combination of 2 high-fidelity mannequins to improve TCP simulation until upgraded mannequins become commercially available.
Effects of sea ice on winter site fidelity of Pacific common eiders (Somateria mollissima v-nigrum)
Petersen, Margaret R.; Douglas, David C.; Wilson, Heather M.; McCloskey, Sarah E.
2012-01-01
In northern marine habitats, the presence or absence of sea ice results in variability in the distribution of many species and the quality and availability of pelagic winter habitat. To understand the effects of ice on intra- and inter-annual winter site fidelity and movements in a northern sea-duck species, we marked 25 adult Pacific Common Eiders (Somateria mollissima v-nigrum) on their nesting area at Cape Espenberg, Alaska, with satellite transmitters and monitored their movements to their wintering areas in the northern Bering Sea for a 2-year period. We examined changes in winter fidelity in relation to home-range characteristics and ice. Characteristics of polynyas (areas with persistent open water during winter) varied substantially and likely had an effect on the size of winter ranges and movements within polynyas. Movements within polynyas were correlated with changes in weather that affected ice conditions. Ninety-five percent of individuals were found within their 95% utilization distribution (UD) of the previous year, and 90% were found within their 50% UD. Spatial distributions of winter locations between years changed for 32% of the individuals; however, we do not consider these subtle movements biologically significant. Although ice conditions varied between polynyas within and between years, the Common Eiders monitored in our study showed a high degree of fidelity to their winter areas. This observation is counterintuitive, given the requirement that resources are predictable for site fidelity to occur; however, ice may not have been severe enough to restrict access to other resources and, subsequently, force birds to move.
Sebire, Simon J; Kesten, Joanna M; Edwards, Mark J; May, Thomas; Banfield, Kathryn; Tomkinson, Keeley; Blair, Peter S; Bird, Emma L; Powell, Jane E; Jago, Russell
2016-05-01
To report the theory-based process evaluation of the Bristol Girls' Dance Project, a cluster-randomised controlled trial to increase adolescent girls' physical activity. A mixed-method process evaluation of the intervention's self-determination theory components comprising lesson observations, post-intervention interviews and focus groups. Four intervention dance lessons per dance instructor were observed, audio recorded and rated to estimate the use of need-supportive teaching strategies. Intervention participants (n = 281) reported their dance instructors' provision of autonomy-support. Semi-structured interviews with the dance instructors (n = 10) explored fidelity to the theory and focus groups were conducted with participants (n = 59) in each school to explore their receipt of the intervention and views on the dance instructors' motivating style. Although instructors accepted the theory-based approach, intervention fidelity was variable. Relatedness support was the most commonly observed need-supportive teaching behaviour, provision of structure was moderate and autonomy-support was comparatively low. The qualitative findings identified how instructors supported competence and developed trusting relationships with participants. Fidelity was challenged where autonomy provision was limited to option choices rather than input into the pace or direction of lessons and where controlling teaching styles were adopted, often to manage disruptive behaviour. The successes and challenges to achieving theoretical fidelity in the Bristol Girls' Dance Project may help explain the intervention effects and can more broadly inform the design of theory-based complex interventions aimed at increasing young people's physical activity in after-school settings.
Muntinga, Maaike E; Van Leeuwen, Karen M; Schellevis, François G; Nijpels, Giel; Jansen, Aaltje P D
2015-01-22
Implementation fidelity, the degree to which a care program is implemented as intended, can influence program impact. Since results of trials that aim to implement comprehensive care programs for frail, older people have been conflicting, assessing implementation fidelity alongside these trials is essential to differentiate between flaws inherent to the program and implementation issues. This study demonstrates how a theory-based assessment of fidelity can increase insight into the implementation process of a complex intervention in primary elderly care. The Geriatric Care Model was implemented among 35 primary care practices in the Netherlands. During home visits, practice nurses conducted a comprehensive geriatric assessment and wrote a tailored care plan. Multidisciplinary team consultations were organized with the aim to enhance the coordination between professionals caring for a single patient with complex needs. To assess fidelity, we identified 5 key intervention components and formulated corresponding research questions using Carroll's framework for fidelity. Adherence (coverage, frequency, duration, content) was assessed per intervention component during and at the end of the intervention period. Two moderating factors (participant responsiveness and facilitation strategies) were assessed at the end of the intervention. Adherence to the geriatric assessments and care plans was high, but decreased over time. Adherence to multidisciplinary consultations was initially poor, but increased over time. We found that individual differences in adherence between practice nurses and primary care physicians were moderate, while differences in participant responsiveness (satisfaction, involvement) were more distinct. Nurses deviated from protocol due to contextual factors and personal work routines. Adherence to the Geriatric Care Model was high for most of the essential intervention components. Study limitations include the limited number of assessed moderating factors. 
We argue that a longitudinal investigation of adherence per intervention component is essential for a complete understanding of the implementation process, but that such investigations may be complicated by practical and methodological challenges. Trial registration: The Netherlands National Trial Register (NTR), 2160.
A quasi-3D wire approach to model pulmonary airflow in human airways.
Kannan, Ravishekar; Chen, Z J; Singh, Narender; Przekwas, Andrzej; Delvadia, Renishkumar; Tian, Geng; Walenga, Ross
2017-07-01
Models of airflow in the human airways are either 0-dimensional compartmental models or full 3-dimensional (3D) computational fluid dynamics (CFD) models. In the former, airways are treated as compartments, and the computations are performed under several simplifying assumptions, yielding a low-fidelity solution. The CFD method offers extremely high fidelity, since the solution is obtained by solving the conservation equations in a physiologically consistent geometry. However, CFD models (1) require millions of degrees of freedom to accurately describe the geometry and to reduce discretization errors, (2) have convergence problems, and (3) require several days to simulate a few breathing cycles. In this paper, we present a novel, fast-running, and robust quasi-3D wire model of airflow in the human lung airway. The wire mesh is obtained by contracting the high-fidelity lung airway surface mesh to a system of connected wires with well-defined radii. The conservation equations are then solved in each wire. These wire meshes have around O(1000) degrees of freedom and hence run 3000 to 25 000 times faster than their CFD counterparts. The 3D spatial nature is also preserved, since the wires are contracted from the actual lung STL surface. Pressure readings from the two approaches showed only minor differences (maximum error = 15%). In general, this formulation is fast and robust, accommodates geometric changes, and delivers high-fidelity solutions. Hence, this approach has great potential for more complicated problems, including modeling of constricted/diseased lung sections and calibrating lung flow resistances through parameter inversion. Copyright © 2016 John Wiley & Sons, Ltd.
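The abstract's own solver is not given, but the core idea of "conservation equations on a wire network" can be sketched as a steady laminar pipe-network solve: each wire gets a Poiseuille conductance, and mass conservation at interior nodes yields a linear system for nodal pressures. The network, dimensions, and viscosity below are illustrative assumptions, not the paper's model.

```python
import numpy as np

MU = 1.8e-5  # dynamic viscosity of air [Pa*s] (assumed value)

def solve_wire_network(n_nodes, wires, fixed_p):
    """Steady pressures on a wire network.
    wires: list of (node_i, node_j, length_m, radius_m)
    fixed_p: {node: pressure_Pa} Dirichlet boundary conditions."""
    A = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    for i, j, L, r in wires:
        g = np.pi * r**4 / (8.0 * MU * L)   # Poiseuille conductance
        for m, n in ((i, j), (j, i)):       # symmetric Laplacian assembly
            A[m, m] += g
            A[m, n] -= g
    for node, p in fixed_p.items():         # overwrite boundary rows
        A[node, :] = 0.0
        A[node, node] = 1.0
        b[node] = p
    return np.linalg.solve(A, b)

# Trachea splitting into two symmetric daughter branches (made-up sizes)
wires = [(0, 1, 0.10, 0.009), (1, 2, 0.05, 0.006), (1, 3, 0.05, 0.006)]
p = solve_wire_network(4, wires, {0: 100.0, 2: 0.0, 3: 0.0})
```

The bifurcation pressure `p[1]` falls between the inlet and outlet values, as flow conservation at the junction requires.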
Hirschvogel, Marc; Bassilious, Marina; Jagschies, Lasse; Wildhirt, Stephen M; Gee, Michael W
2016-10-15
A model for patient-specific cardiac mechanics simulation is introduced, incorporating a 3-dimensional finite element model of the ventricular part of the heart, which is coupled to a reduced-order 0-dimensional closed-loop vascular system, heart valve, and atrial chamber model. The ventricles are modeled by a nonlinear orthotropic passive material law. The electrical activation is mimicked by a prescribed parameterized active stress acting along a generic muscle fiber orientation. Our activation function is constructed such that the start of ventricular contraction and relaxation as well as the active stress curve's slope are parameterized. The imaging-based patient-specific ventricular model is prestressed to low end-diastolic pressure to account for the imaged, stressed configuration. Visco-elastic Robin boundary conditions are applied to the heart base and the epicardium to account for the surrounding tissue in which the heart is embedded. We treat the 3D solid-0D fluid interaction as a strongly coupled monolithic problem, which is consistently linearized with respect to 3D solid and 0D fluid model variables to allow for a Newton-type solution procedure. The resulting coupled linear system of equations is solved iteratively in every Newton step using 2 × 2 physics-based block preconditioning. Furthermore, we present novel efficient strategies for calibrating active contractile and vascular resistance parameters to experimental left ventricular pressure and stroke volume data gained in porcine experiments. Two exemplary states of cardiovascular condition are considered, namely, after application of vasodilatory beta blockers (BETA) and after injection of vasoconstrictive phenylephrine (PHEN). The parameter calibration to the specific individual and cardiovascular state at hand is performed using a 2-stage nonlinear multilevel method that uses a low-fidelity heart model to compute a parameter correction for the high-fidelity model optimization problem. 
We discuss 2 different low-fidelity model choices with respect to their ability to augment the parameter optimization. Because the periodic state conditions on the model (active stress, vascular pressures, and fluxes) are a priori unknown and also dependent on the parameters to be calibrated (and vice versa), we perform parameter calibration and periodic state condition estimation simultaneously. After a couple of heart beats, the calibration algorithm converges to a settled, periodic state because of conservation of blood volume within the closed-loop circulatory system. The proposed model and multilevel calibration method are cost-efficient and allow for an efficient determination of a patient-specific in silico heart model that reproduces physiological observations very well. Such an individual and state accurate model is an important predictive tool in intervention planning, assist device engineering and other medical applications. Copyright © 2016 John Wiley & Sons, Ltd.
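The reduced-order 0D circulation component described above can be illustrated with a minimal two-element Windkessel: a resistance R and compliance C driven by a prescribed flow waveform, integrated with forward Euler. The ODE `C dp/dt = q - p/R` and the parameter values are a textbook sketch, not the paper's calibrated porcine closed-loop model.

```python
import numpy as np

def windkessel(q, dt, R=1.0, C=1.5, p0=80.0):
    """Two-element Windkessel: C dp/dt = q(t) - p/R (illustrative units)."""
    p = np.empty_like(q)
    p[0] = p0
    for n in range(len(q) - 1):
        p[n + 1] = p[n] + dt * (q[n] - p[n] / R) / C  # forward Euler step
    return p

t = np.linspace(0.0, 5.0, 5001)                       # 5 s at 1 ms steps
q = 90.0 + 60.0 * np.maximum(np.sin(2 * np.pi * t), 0.0)  # pulsatile inflow
p = windkessel(q, t[1] - t[0])
```

After the initial transient decays (time constant RC), the pressure settles into a periodic state whose mean is roughly R times the mean inflow, which is the same periodic-state behavior the calibration algorithm above exploits.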
Microwave fidelity studies by varying antenna coupling
NASA Astrophysics Data System (ADS)
Köber, B.; Kuhl, U.; Stöckmann, H.-J.; Gorin, T.; Savin, D. V.; Seligman, T. H.
2010-09-01
The fidelity decay in a microwave billiard is considered, where the coupling to an attached antenna is varied. The resulting quantity, coupling fidelity, is experimentally studied for three different terminators of the varied antenna: a hard-wall reflection, an open-wall reflection, and a 50 Ω load, corresponding to a totally open channel. The model description in terms of an effective Hamiltonian with a complex coupling constant is given. Quantitative agreement is found with the theory obtained from a modified VWZ approach [J. J. M. Verbaarschot et al., Phys. Rep. 129, 367 (1985), doi:10.1016/0370-1573(85)90070-5].
Simulation System Fidelity Assessment at the Vertical Motion Simulator
NASA Technical Reports Server (NTRS)
Beard, Steven D.; Reardon, Scott E.; Tobias, Eric L.; Aponso, Bimal L.
2013-01-01
Fidelity is a word that is often used but rarely understood when talking about ground-based simulation. Assessing the cueing fidelity of a ground-based flight simulator requires a comparison to actual flight data, either directly or indirectly. Two experiments were conducted at the Vertical Motion Simulator using the GenHel UH-60A Black Hawk helicopter math model, which was directly compared to flight data. Prior to the experiments, the simulator's motion and visual system frequency responses were measured, the aircraft math model was adjusted to account for the simulator motion system delays, and the motion system gains and washouts were tuned for the individual tasks. The tuned motion system fidelity was then assessed against the modified Sinacori criteria. The first experiment showed handling qualities ratings (HQRs) similar to actual flight for the bob-up and sidestep maneuvers. The second experiment showed equivalent HQRs between flight and simulation for the ADS-33 slalom maneuver for the two pilot participants. The ADS-33 vertical maneuver HQRs were mixed, with one pilot rating the flight and simulation the same while the second pilot rated the simulation worse. In addition to recording HQRs in the second experiment, an experimental Simulation Fidelity Rating (SFR) scale developed by the University of Liverpool was tested for applicability to engineering simulators. A discussion of the SFR scale for use on the Vertical Motion Simulator is included in this paper.
Adaptation by Design: A Context-Sensitive, Dialogic Approach to Interventions
ERIC Educational Resources Information Center
Kirshner, Ben; Polman, Joseph L.
2013-01-01
Applied researchers, whether working with the framework of design-based research or intervention science, face a similar implementation challenge: practitioners who enact their programs typically do so in varied, context-specific ways. Although this variability is often seen as a problem for those who privilege fidelity and standardization, we…
NASA Astrophysics Data System (ADS)
Abdo Yassin, Fuad; Wheater, Howard; Razavi, Saman; Sapriza, Gonzalo; Davison, Bruce; Pietroniro, Alain
2015-04-01
The credible identification of vertical and horizontal hydrological components and their associated parameters is very challenging (if not impossible) by only constraining the model to streamflow data, especially in regions where the vertical processes significantly dominate the horizontal processes. The prairie areas of the Saskatchewan River basin, a major water system in Canada, demonstrate such behavior, where the hydrologic connectivity and vertical fluxes are mainly controlled by the amount of surface and sub-surface water storages. In this study, we develop a framework for distributed hydrologic model identification and calibration that jointly constrains the model response (i.e., streamflows) as well as a set of model state variables (i.e., water storages) to observations. This framework is set up in the form of multi-objective optimization, where multiple performance criteria are defined and used to simultaneously evaluate the fidelity of the model to streamflow observations and observed (estimated) changes of water storage in the gridded landscape over daily and monthly time scales. The time series of estimated changes in total water storage (including soil, canopy, snow and pond storages) used in this study were derived from an experimental study enhanced by the information obtained from the GRACE satellite. We test this framework on the calibration of a Land Surface Scheme-Hydrology model, called MESH (Modélisation Environmentale Communautaire - Surface and Hydrology), for the Saskatchewan River basin. Pareto Archived Dynamically Dimensioned Search (PA-DDS) is used as the multi-objective optimization engine. The significance of using the developed framework is demonstrated in comparison with the results obtained through a conventional calibration approach to streamflow observations. 
The approach of incorporating water storage data into the model identification process can more effectively constrain the posterior parameter space, more comprehensively evaluate model fidelity, and yield more credible predictions.
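The multi-objective calibration described above (simultaneously minimizing streamflow and storage errors with PA-DDS) rests on the notion of Pareto non-domination. A minimal non-domination filter over hypothetical error scores, not the PA-DDS algorithm itself, can be sketched as:

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated points; each row holds objective values
    to be minimized (e.g., streamflow error, storage error)."""
    s = np.asarray(scores, float)
    keep = []
    for i, p in enumerate(s):
        # p is dominated if some point is <= in all objectives and < in one
        dominated = np.any(np.all(s <= p, axis=1) & np.any(s < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (streamflow error, storage error) pairs for 4 parameter sets
pts = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (4.0, 4.0)]
front = pareto_front(pts)   # (4.0, 4.0) is dominated by (2.0, 2.0)
```

A multi-objective search engine such as PA-DDS maintains exactly such an archive of non-dominated parameter sets instead of a single best solution.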
Bartoli, Carlo R.; Rogers, Benjamin D.; Ionan, Constantine E.; Koenig, Steven C.; Pantalos, George M.
2013-01-01
OBJECTIVE Counterpulsation with an intra-aortic balloon pump (IABP) has not achieved the same success or clinical use in pediatric patients as in adults. In a pediatric animal model, IABP efficacy was investigated to determine whether IABP timing with a high-fidelity blood pressure signal may improve counterpulsation therapy versus a low-fidelity signal. METHODS In Yorkshire piglets (n = 19, 13.0 ± 0.5 kg) with coronary ligation-induced acute ischemic left ventricular failure, pediatric IABPs (5 or 7 cc) were placed in the descending thoracic aorta. Inflation and deflation were timed with traditional criteria from low-fidelity (fluid-filled) and high-fidelity (micromanometer) blood pressure signals during 1:1 support. Aortic, carotid, and coronary hemodynamics were measured with pressure and flow transducers. Myocardial oxygen consumption was calculated from coronary sinus and arterial blood samples. Left ventricular myocardial blood flow and end-organ blood flow were measured with microspheres. RESULTS Despite significant suprasystolic diastolic augmentation and afterload reduction at heart rates of 105 ± 3 bpm, left ventricular myocardial blood flow, myocardial oxygen consumption, the myocardial oxygen supply/demand relationship, cardiac output, and end-organ blood flow did not change. Statistically significant end-diastolic coronary, carotid, and aortic flow reversal occurred with IABP deflation. Inflation and deflation timed with a high-fidelity versus low-fidelity signal did not attenuate systemic flow reversal or improve the myocardial oxygen supply/demand relationship. CONCLUSIONS Systemic end-diastolic flow reversal limited counterpulsation efficacy in a pediatric model of acute left ventricular failure. Adjustment of IABP inflation and deflation timing with traditional criteria and a high-fidelity blood pressure waveform did not improve IABP efficacy or attenuate flow reversal. 
End-diastolic flow reversal may limit the efficacy of IABP counterpulsation therapy in pediatric patients with traditional timing criteria. Investigation of alternative deflation timing strategies is warranted. PMID:24139614
Bartoli, Carlo R; Rogers, Benjamin D; Ionan, Constantine E; Pantalos, George M
2014-05-01
Counterpulsation with an intra-aortic balloon pump (IABP) has not achieved the same success or clinical use in pediatric patients as in adults. In a pediatric animal model, IABP efficacy was investigated to determine whether IABP timing with a high-fidelity blood pressure signal may improve counterpulsation therapy versus a low-fidelity signal. In Yorkshire piglets (n = 19; weight, 13.0 ± 0.5 kg) with coronary ligation-induced acute ischemic left ventricular failure, pediatric IABPs (5 or 7 mL) were placed in the descending thoracic aorta. Inflation and deflation were timed with traditional criteria from low-fidelity (fluid-filled) and high-fidelity (micromanometer) blood pressure signals during 1:1 support. Aortic, carotid, and coronary hemodynamics were measured with pressure and flow transducers. Myocardial oxygen consumption was calculated from coronary sinus and arterial blood samples. Left ventricular myocardial blood flow and end-organ blood flow were measured with microspheres. Despite significant suprasystolic diastolic augmentation and afterload reduction at heart rates of 105 ± 3 beats per minute, left ventricular myocardial blood flow, myocardial oxygen consumption, the myocardial oxygen supply/demand relationship, cardiac output, and end-organ blood flow did not change. Statistically significant end-diastolic coronary, carotid, and aortic flow reversal occurred with IABP deflation. Inflation and deflation timed with a high-fidelity versus low-fidelity signal did not attenuate systemic flow reversal or improve the myocardial oxygen supply/demand relationship. Systemic end-diastolic flow reversal limited counterpulsation efficacy in a pediatric model of acute left ventricular failure. Adjustment of IABP inflation and deflation timing with traditional criteria and a high-fidelity blood pressure waveform did not improve IABP efficacy or attenuate flow reversal. 
End-diastolic flow reversal may limit the efficacy of IABP counterpulsation therapy in pediatric patients with traditional timing criteria. Investigation of alternative deflation timing strategies is warranted. Copyright © 2014 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
Hayden, Todd A.; Binder, Thomas; Holbrook, Christopher; Vandergoot, Christopher; Fielder, David G.; Cooke, Steven J.; Dettmers, John M.; Krueger, Charles C.
2018-01-01
Fidelity to spawning habitats can maximise reproductive success of fish by synchronising movements to sites of previous recruitment. To determine the role of reproductive fidelity in structuring walleye Sander vitreus populations in the Laurentian Great Lakes, we used acoustic telemetry combined with Cormack–Jolly–Seber capture–recapture models to estimate spawning site fidelity and apparent annual survival for the Tittabawassee River in Lake Huron and Maumee River in Lake Erie. Walleye in spawning condition were tagged from the Tittabawassee River in Lake Huron and Maumee River in Lake Erie in 2011–2012. Site fidelity and apparent annual survival were estimated from return of individuals to the stream where tagged. Site fidelity estimates were higher in the Tittabawassee River (95%) than the Maumee River (70%) and were not related to sex or fish length at tagging. Apparent annual survival of walleye tagged in the Tittabawassee did not differ among spawning seasons but was higher for female than male walleye and decreased linearly as fish length increased. Apparent annual survival of walleye tagged in the Maumee River did not differ among spawning seasons but was higher for female walleye than male walleye and increased linearly as fish length increased. Greater fidelity of walleye tagged in the Tittabawassee River than walleye tagged in the Maumee River may be related to the close proximity to the Maumee River of other spawning aggregations and multiple spawning sites in Lake Erie. As spawning site fidelity increases, management actions to conserve population structure require an increasing focus on individual stocks.
Fidelity Failures in Brief Strategic Family Therapy for Adolescent Drug Abuse: A Clinical Analysis.
Lebensohn-Chialvo, Florencia; Rohrbaugh, Michael J; Hasler, Brant P
2018-04-30
As evidence-based family treatments for adolescent substance use and conduct problems gain traction, cutting edge research moves beyond randomized efficacy trials to address questions such as how these treatments work and how best to disseminate them to community settings. A key factor in effective dissemination is treatment fidelity, which refers to implementing an intervention in a manner consistent with an established manual. While most fidelity research is quantitative, this study offers a qualitative clinical analysis of fidelity failures in a large, multisite effectiveness trial of Brief Strategic Family Therapy (BSFT) for adolescent drug abuse, where BSFT developers trained community therapists to administer this intervention in their own agencies. Using case notes and video recordings of therapy sessions, an independent expert panel first rated 103 cases on quantitative fidelity scales grounded in the BSFT manual and the broader structural-strategic framework that informs BSFT intervention. Because fidelity was generally low, the panel reviewed all cases qualitatively to identify emergent types or categories of fidelity failure. Ten categories of failures emerged, characterized by therapist omissions (e.g., failure to engage key family members, failure to think in threes) and commissions (e.g., off-model, nonsystemic formulations/interventions). Of these, "failure to think in threes" appeared basic and particularly problematic, reflecting the central place of this idea in structural theory and therapy. Although subject to possible bias, our observations highlight likely stumbling blocks in exporting a complex family treatment like BSFT to community settings. These findings also underscore the importance of treatment fidelity in family therapy research. © 2018 Family Process Institute.
Low-fidelity bench models for basic surgical skills training during undergraduate medical education.
Denadai, Rafael; Saad-Hossne, Rogério; Todelo, Andréia Padilha; Kirylko, Larissa; Souto, Luís Ricardo Martinhão
2014-01-01
The reduction in the number of medical students choosing general surgery as a career is remarkable. In this context, new approaches in the field of surgical education should be developed to counter this lack of interest. In this study, a surgical training program based on learning with low-fidelity bench models is designed as a complementary alternative to the various methodologies for teaching basic surgical skills during medical education, and to develop personal interest in surgery as a career choice.
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and method noise, the proposed algorithm outperforms state-of-the-art methods.
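The Poisson data-fidelity term mentioned above is the same one that drives classical Richardson-Lucy deconvolution, which is the maximum-likelihood estimate under Poisson noise without the paper's dictionary and TV priors. A self-contained sketch of that baseline (toy image and kernel, not the paper's method):

```python
import numpy as np

def conv_same(x, k):
    """'same'-size 2D correlation with zero padding (odd-sized kernel k)."""
    kr, kc = k.shape
    pr, pc = kr // 2, kc // 2
    xp = np.pad(x, ((pr, pr), (pc, pc)))
    out = np.zeros_like(x)
    for i in range(kr):
        for j in range(kc):
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def richardson_lucy(f, psf, n_iter=50, eps=1e-12):
    """f: blurred (Poisson-noisy) image; psf: normalized blur kernel."""
    u = np.full_like(f, f.mean())               # flat positive start
    psf_flip = psf[::-1, ::-1]                  # adjoint of the blur
    for _ in range(n_iter):
        est = conv_same(u, psf) + eps           # model of the observed data
        u = u * conv_same(f / est, psf_flip)    # multiplicative Poisson update
    return u

true = np.zeros((16, 16)); true[8, 8] = 10.0    # a single bright point
psf = np.ones((3, 3)) / 9.0                     # uniform blur kernel
f = conv_same(true, psf)                        # noiseless blur for the demo
u = richardson_lucy(f, psf)
```

The multiplicative update keeps the iterate nonnegative, a property Poisson-noise models require, and progressively re-concentrates the blurred peak.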
Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling
NASA Astrophysics Data System (ADS)
Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison
2017-11-01
Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
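The variance-reduction idea behind such multi-fidelity estimators can be sketched as a two-fidelity control variate: a few expensive "3D" evaluations plus many cheap "0D" evaluations of a correlated surrogate. The toy models below are stand-ins (assumed functions), not SimVascular or DAKOTA output.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_hi(x):            # stand-in expensive model's quantity of interest
    return np.sin(x) + 0.1 * x**2

def q_lo(x):            # stand-in cheap surrogate, correlated with q_hi
    return x + 0.1 * x**2

x_hi = rng.normal(0.0, 0.3, 50)      # few expensive samples
x_lo = rng.normal(0.0, 0.3, 5000)    # many cheap samples
yh, yl = q_hi(x_hi), q_lo(x_hi)      # both models on the shared samples

# Optimal control-variate weight: cov(hi, lo) / var(lo)
alpha = np.cov(yh, yl)[0, 1] / np.var(yl, ddof=1)

# Corrected estimate of E[q_hi]
est = yh.mean() + alpha * (q_lo(x_lo).mean() - yl.mean())
```

The stronger the correlation between fidelities, the larger the variance reduction for a fixed number of high-fidelity runs, which is exactly the trade-off the study's estimators exploit.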
Betzler, Benjamin R.; Chandler, David; Davidson, Eva E.; ...
2017-05-08
A high-fidelity model of the High Flux Isotope Reactor (HFIR) with a low-enriched uranium (LEU) fuel design and a representative experiment loading has been developed to serve as a new reference model for LEU conversion studies. With the exception of the fuel elements, this HFIR LEU model is completely consistent with the current highly enriched uranium HFIR model. Results obtained with the new LEU model provide a baseline for analysis of alternate LEU fuel designs and further optimization studies. The newly developed HFIR LEU model has an explicit representation of the HFIR-specific involute fuel plate geometry, including the within-plate fuel meat contouring, and a detailed geometry model of the fuel element side plates. Such high-fidelity models are necessary to accurately account for the self-shielding from 238U and the depletion of absorber materials present in the side plates. In addition, a method was developed to account for fuel swelling in the high-density LEU fuel plates during the depletion simulation. Calculated time-dependent metrics for the HFIR LEU model include fission rate and cumulative fission density distributions, flux and reaction rates for relevant experiment locations, point kinetics data, and reactivity coefficients.
High-Fidelity Micromechanics Model Developed for the Response of Multiphase Materials
NASA Technical Reports Server (NTRS)
Aboudi, Jacob; Pindera, Marek-Jerzy; Arnold, Steven M.
2002-01-01
A new high-fidelity micromechanics model has been developed under funding from the NASA Glenn Research Center for predicting the response of multiphase materials with arbitrary periodic microstructures. The model's analytical framework is based on the homogenization technique, but the method of solution for the local displacement and stress fields borrows concepts previously employed in constructing the higher order theory for functionally graded materials. The resulting closed-form macroscopic and microscopic constitutive equations, valid for both uniaxial and multiaxial loading of periodic materials with elastic and inelastic constitutive phases, can be incorporated into a structural analysis computer code. Consequently, this model now provides an alternative, accurate method.
An Investigation of State-Space Model Fidelity for SSME Data
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2008-01-01
In previous studies, a variety of unsupervised anomaly detection techniques were applied to SSME (Space Shuttle Main Engine) data. The observed results indicated that the identification of certain anomalies was specific to the algorithmic method under consideration. This is why one of the follow-on goals of those investigations was to build an architecture that supports the best capabilities of all algorithms. We appeal to that goal here by investigating a cascaded, serial architecture for the best-performing and most suitable candidates from previous studies. As a precursor to a formal ROC (Receiver Operating Characteristic) curve analysis for validation of the resulting anomaly detection algorithms, our primary focus here is to investigate model fidelity as measured by variants of the AIC (Akaike Information Criterion) for state-space models. We show that placing constraints on a state-space model during or after training introduces a modest level of suboptimality. Furthermore, we compare the fidelity of all candidate models, including those embodying the cascaded, serial architecture. We make recommendations on the most suitable candidates for application to subsequent anomaly detection studies, as measured by AIC-based criteria.
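The AIC-based model comparison idea can be made concrete with a small worked example. For a least-squares fit with Gaussian errors, the log-likelihood reduces so that AIC = 2k + n ln(RSS/n) up to a constant; below, two polynomial orders are scored on toy data (an illustration of the criterion, not the SSME models).

```python
import numpy as np

def aic_ls(rss, n, k):
    """AIC for a least-squares fit: 2k + n*ln(RSS/n), constant dropped."""
    return 2 * k + n * np.log(rss / n)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
y = 2.0 * t + 0.05 * rng.normal(size=t.size)   # truly linear signal + noise

scores = {}
for deg in (1, 5):                             # candidate model orders
    coef = np.polyfit(t, y, deg)
    rss = np.sum((np.polyval(coef, t) - y) ** 2)
    scores[deg] = aic_ls(rss, t.size, deg + 1)  # k = number of coefficients
```

The higher-order fit always achieves a lower RSS, but the 2k penalty makes AIC trade that gain against model complexity, which is how a "modest level of suboptimality" from constraining a model shows up as a quantifiable AIC difference.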
A Reduced-Order Model For Zero-Mass Synthetic Jet Actuators
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.; Vatsa, Veer S.
2007-01-01
Accurate details of the general performance of fluid actuators are desirable over a range of flow conditions, within some predetermined error tolerance. Designers typically model actuators with different levels of fidelity depending on the acceptable level of error in each circumstance. Crude properties of the actuator (e.g., peak mass rate and frequency) may be sufficient for some designs, while detailed information is needed for other applications (e.g., multiple actuator interactions). This work addresses two primary objectives. The first is to develop a systematic methodology for approximating realistic 3-D fluid actuators using quasi-1-D reduced-order models. Near full fidelity can be achieved with this approach at a fraction of the cost of full simulation and only a modest increase in cost relative to most actuator models used today. The second objective, which is a direct consequence of the first, is to determine the approximate magnitude of errors committed by actuator model approximations of various fidelities. This objective attempts to identify which model (ranging from simple orifice exit boundary conditions to full numerical simulations of the actuator) is appropriate for a given error tolerance.
Automatic 3D high-fidelity traffic interchange modeling using 2D road GIS data
NASA Astrophysics Data System (ADS)
Wang, Jie; Shen, Yuzhong
2011-03-01
3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. Very few existing methods automatically generate high-fidelity 3D road networks, especially for roads existing in the real world. A real road network contains various elements such as road segments, road intersections, and traffic interchanges. Among them, traffic interchanges present the greatest modeling challenges due to their complexity and the lack of height (vertical position) information for traffic interchanges in existing road GIS data. This paper proposes a novel approach that can automatically produce high-fidelity 3D road network models, including traffic interchange models, from real 2D road GIS data that mainly contain road centerline information. The proposed method consists of several steps. The raw road GIS data are first preprocessed to extract road network topology, merge redundant links, and classify road types. Overlapped points in the interchanges are then detected, and their elevations are determined based on a set of level-estimation rules. Parametric representations of the road centerlines are then generated through link segmentation and fitting; these have the advantage of supporting arbitrary levels of detail with reduced memory usage. Finally, a set of civil engineering rules for road design (e.g., cross slope, superelevation) is selected and used to generate realistic road surfaces. In addition to traffic interchange modeling, the proposed method also applies to other, more general road elements. Preliminary results show that the proposed method is highly effective and useful in many applications.
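The parametric centerline representation with arbitrary levels of detail can be illustrated by parameterizing a polyline of 2D GIS points by cumulative arc length and resampling it at any resolution. The data format and point values are assumed for illustration; the paper's actual fitting uses segmented parametric curves.

```python
import numpy as np

def resample_centerline(pts, n_out):
    """Resample an (n, 2) polyline of road-centerline points at n_out
    points, evenly spaced in arc length."""
    d = np.hypot(*np.diff(pts, axis=0).T)        # per-segment lengths
    s = np.concatenate(([0.0], np.cumsum(d)))    # arc-length parameter
    s_new = np.linspace(0.0, s[-1], n_out)
    return np.column_stack(
        [np.interp(s_new, s, pts[:, k]) for k in (0, 1)])

# Hypothetical raw centerline vertices from a GIS layer
raw = np.array([[0, 0], [1, 0.2], [2, 0.1], [3, 0.5], [4, 0.4]], float)
coarse = resample_centerline(raw, 3)    # low level of detail
fine = resample_centerline(raw, 50)     # high level of detail
```

Because the curve is stored once as a parameterization, any level of detail can be generated on demand instead of storing multiple fixed-resolution meshes.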
Development of Predictive Energy Management Strategies for Hybrid Electric Vehicles
NASA Astrophysics Data System (ADS)
Baker, David
Studies have shown that obtaining and utilizing information about the future state of vehicles can improve vehicle fuel economy (FE). However, there has been a lack of research into the impact of real-world prediction error on FE improvements, and into whether near-term technologies can be utilized to improve FE. This study investigates the effect of real-world prediction error on FE. First, a speed prediction method is developed and trained with real-world driving data gathered only from the subject vehicle (a local data collection method). This speed prediction method informs a predictive powertrain controller that determines the optimal engine operation for various prediction durations. The optimal engine operation is input into a high-fidelity model of the FE of a Toyota Prius. A tradeoff analysis between prediction duration and prediction fidelity was completed to determine which prediction duration resulted in the largest FE improvement. Results demonstrate that 60-90 second predictions resulted in the highest FE improvement over the baseline, achieving up to a 4.8% FE increase. A second speed prediction method utilizing simulated vehicle-to-vehicle (V2V) communication was developed to determine whether near-term technologies could further improve prediction fidelity. This prediction method produced lower variation in speed prediction error and was able to realize a larger FE improvement than the local prediction method for longer prediction durations, achieving up to a 6% FE improvement. This study concludes that speed prediction and prediction-informed optimal vehicle energy management can produce FE improvements despite real-world prediction error and drive cycle variability, as up to 85% of the FE benefit of perfect speed prediction was achieved with the proposed prediction methods.
High-Fidelity Modeling for Health Monitoring in Honeycomb Sandwich Structures
NASA Technical Reports Server (NTRS)
Luchinsky, Dimitry G.; Hafiychuk, Vasyl; Smelyanskiy, Vadim; Tyson, Richard W.; Walker, James L.; Miller, Jimmy L.
2011-01-01
A high-fidelity model of a sandwich composite structure with realistic geometry is reported. The model includes two composite facesheets, a honeycomb core, piezoelectric actuators/sensors, adhesive layers, and the impactor. The novel feature of the model is that it captures the impact event and wave propagation in the structure both before and after the impact. Results of modeling wave propagation, impact, and damage detection in sandwich honeycomb plates using a piezoelectric actuator/sensor scheme are reported, and the simulation results are compared with experimental results. It is shown that the model is suitable for analyzing the physics of failure due to impact and for testing structural health monitoring schemes based on guided wave propagation.
NASA Astrophysics Data System (ADS)
Pinson, Robin Marie
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission, the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn, for use during the burn. Compared to a planetary powered landing problem, the challenges that arise in designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low-thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model or by a Newtonian model. The trajectory design algorithm needs to be robust and efficient, to guarantee that a trajectory is produced and that the calculations are completed in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum-propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable enough to allow autonomy onboard the spacecraft, without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant-optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which determines the trajectory. The propellant-optimal problem was formulated as a second-order cone program, a subset of convex optimization, through relaxation techniques: introduction of a slack variable, a change of variables, and incorporation of the successive solution method.
Convex optimization solvers, especially for second-order cone programs, are robust and reliable, and they are guaranteed to find the global minimum provided one exists. In addition, an outer optimization loop using Brent's method determines the optimal flight time corresponding to the minimum propellant usage over all flight times. Inclusion of additional trajectory constraints, namely solely vertical motion near the landing site and a glide-slope constraint, was evaluated. Through a theoretical proof involving the Minimum Principle from Optimal Control Theory and the Karush-Kuhn-Tucker conditions, it was shown that the relaxed problem is identical to the original problem at the minimum point. Therefore, the optimal solution of the relaxed problem is an optimal solution of the original problem, a property referred to as lossless convexification. A key finding is that this holds for all levels of gravity model fidelity. The designed thrust magnitude profiles were the bang-bang profiles predicted by Optimal Control Theory. The first high-fidelity gravity model employed was the 2x2 spherical harmonics model, assuming a perfect triaxial ellipsoid and placement of the coordinate frame at the asteroid's center of mass, aligned with the semi-major axes. The spherical harmonics model is not valid inside the Brillouin sphere, which becomes relevant for irregularly shaped asteroids. A higher-fidelity model was therefore implemented, combining the 4x4 spherical harmonics gravity model with the interior spherical Bessel gravity model. All gravitational terms in the equations of motion are evaluated with the position vector from the previous iteration, creating the successive solution method. The methodology's success was shown by applying the algorithm to three triaxial ellipsoidal asteroids at four different rotation speeds using the 2x2 gravity model. Finally, the algorithm was tested on the irregularly shaped asteroid Castalia.
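The outer loop over flight time described above can be sketched with a simple 1-D minimizer. This illustrative fragment replaces the inner second-order cone program with a hypothetical unimodal stand-in cost function, and uses golden-section search in place of Brent's method (Brent's method adds parabolic interpolation for faster convergence, but plays the same role):

```python
import math

def propellant_cost(tf):
    # Stand-in for the inner SOCP solve: in the real algorithm this would
    # return the minimum propellant for a fixed flight time tf. Here it is
    # a hypothetical unimodal function with a minimum near tf = 300 s.
    return (tf - 300.0) ** 2 / 1.0e4 + 50.0

def golden_section(f, a, b, tol=1e-6):
    # 1-D minimization of a unimodal f on [a, b] by interval reduction.
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

# Optimal flight time over a bracketing interval of candidate durations
tf_opt = golden_section(propellant_cost, 100.0, 600.0)
```

Each evaluation of the objective in the real algorithm is a full SOCP solve, which is why a derivative-free scalar method such as Brent's is a natural choice for the outer loop.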
Kumar, Arunaz; Gilmour, Carole; Nestel, Debra; Aldridge, Robyn; McLelland, Gayle; Wallace, Euan
2014-12-01
Core clinical skills acquisition is an essential component of undergraduate medical and midwifery education. Although interprofessional education is an increasingly common format for learning efficient teamwork in clinical medicine, its value in undergraduate education is less clear. We present a collaborative effort by the medical and midwifery schools of Monash University, Melbourne, to develop an educational package centred on a core skills-based workshop using low-fidelity simulation models in an interprofessional setting. Detailed feedback on the package was positive with respect to the relevance of the teaching content, how well the topic was taught using the task trainers and simulation models, the pitch of the teaching, and the confidence participants gained in performing the skill on a real patient after attending the workshop. Overall, interprofessional core skills training using low-fidelity simulation models, introduced at an undergraduate level in medicine and midwifery, was well accepted. © 2014 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
Developing Capture Mechanisms and High-Fidelity Dynamic Models for the MXER Tether System
NASA Technical Reports Server (NTRS)
Canfield, Steven L.
2007-01-01
A team consisting of collaborators from Tennessee Technological University (TTU), Marshall Space Flight Center, BD Systems, and the University of Delaware (herein called the TTU team) conducted specific research and development activities in MXER tether systems during the base period of May 15, 2004 through September 30, 2006 under contract number NNM04AB13C. The team addressed two primary topics related to the MXER tether system: 1) Development of validated high-fidelity dynamic models of an elastic rotating tether and 2) development of feasible mechanisms to enable reliable rendezvous and capture. This contractor report will describe in detail the activities that were performed during the base period of this cycle-2 MXER tether activity and will summarize the results of this funded activity. The primary deliverables of this project were the quad trap, a robust capture mechanism proposed, developed, tested, and demonstrated with a high degree of feasibility and the detailed development of a validated high-fidelity elastic tether dynamic model provided through multiple formulations.
Models and methods for assessing the value of HVDC and MVDC technologies in modern power grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Elizondo, Marcelo A.; O'Brien, James G.
This report reflects the results of U.S. Department of Energy’s (DOE) Grid Modernization project 0074 “Models and methods for assessing the value of HVDC [high-voltage direct current] and MTDC [multi-terminal direct current] technologies in modern power grids.” The work was done by the Pacific Northwest National Laboratory (PNNL) and Oak Ridge National Laboratory (ORNL) in cooperation with the Mid-Continent Independent System Operator (MISO) and Siemens. The main motivation of this study was to show the benefit of using direct current (DC) systems larger than those in existence today as they overlap with the alternating current (AC) systems. Proper use of their flexibility in terms of active/reactive power control and fast response can provide much-needed services to the grid while also moving large blocks of energy to take advantage of cost diversity. Ultimately, the project’s success will enable decision-makers and investors to make well-informed decisions regarding this use of DC systems. This project showed the technical feasibility of an HVDC macrogrid for frequency control and congestion relief in addition to bulk power transfers. Industry-established models for commonly used technologies were employed, along with high-fidelity models for recently developed HVDC converter technologies such as the modular multilevel converter (MMC), a type of voltage source converter (VSC). Models for General Electric Positive Sequence Load Flow (GE PSLF) and Siemens Power System Simulator (PSS/E), widely used analysis programs, were adapted for the first time to include simultaneously both the Western Electricity Coordinating Council (WECC) and Eastern Interconnection (EI) systems, the two largest North American interconnections. The high-fidelity models and their controls were developed in detail for the MMC system and extended to HVDC systems in point-to-point and three-node multi-terminal configurations.
Using a continental-level mixed AC-DC grid model with HVDC macrogrid power flow and transient stability, the results showed that the HVDC macrogrid relieved congestion and mitigated loop flows in AC networks, and provided up to 24% improvement in frequency response. These are realistic studies, based on the 2025 heavy summer case and the EI multi-regional modeling working group (MMWG) 2026 summer peak case. This work developed high-fidelity models and simulation algorithms to understand the dynamics of MMCs. The developed models and simulation algorithms are up to 25 times faster than existing algorithms. Models and control algorithms for the high-fidelity models were designed and tested for point-to-point and multi-terminal configurations. The multi-terminal configuration was tested connecting simplified models of the EI, the Western Interconnection (WI), and the Electric Reliability Council of Texas (ERCOT). The developed models showed up to 45% improvement in frequency response when all three asynchronous U.S. interconnections were connected using fast, advanced DC technologies such as the multi-terminal MMC-DC system. Future work will look into developing high-fidelity models of other advanced DC technologies, combining high-fidelity models with the continental-level model, and incorporating additional services. More scenarios involving large-scale HVDC and MTDC will be evaluated.
Seasonal and diel movements of white sturgeon in the lower Columbia River
Parsley, M.J.; Popoff, N.D.; Van Der Leeuw, B. K.; Wright, C.D.
2008-01-01
Continuous monitoring of the movements and depths used by white sturgeon Acipenser transmontanus with acoustic telemetry technologies in the lower Columbia River provided information on diel and seasonal migrations, local movements, and site fidelity. White sturgeon moved to shallower water at night and showed greater activity, inferred from rates of movement, than during daytime. The extent of local movement within a season was variable among fish; some fish readily moved among habitats while the movements of others were more constrained. White sturgeon were absent from the study area (river kilometers 45-52) during winter and returned from upstream during the spring, confirming an upstream seasonal migration in the fall and downstream migration in spring. The return of individual fish and reoccupation of areas previously inhabited showed that some white sturgeon exhibit site fidelity. This work shows that studies seeking to characterize habitat for white sturgeon need to be cognizant of diel migrations and site fidelity. We urge caution in the use of limited fish location data to describe habitats if diel activities and fine-scale movements are not known.
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
2002-01-01
A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.
NASA Technical Reports Server (NTRS)
Toups, Zachary O.; Hamilton, William A.; Kerne, Andruid
2012-01-01
Team coordination is essential across domains, enabling efficiency and safety. As technology improves, our temptation is to simulate with ever-higher fidelity, by making simulators re-create reality through their physical interfaces, functionality, and by making participants believe they are undertaking the simulated task. However, high-fidelity simulations often miss salient human-human work practices. We introduce the concept of zero-fidelity simulation (ZFS), a move away from literal high-fidelity mimesis of the concrete environment. ZFS alternatively models cooperation and communication as the basis of simulation. The ZFS Team Coordination Game (TeC) is developed from observation of fire emergency response work practice. We identify ways in which team members are mutually dependent on one another for information, and use these as the basis for the ZFS game design. The design creates a need for cooperation by restricting individual activity and requiring communication. The present research analyzes the design of interdependence in the validated ZFS TeC game. We successfully simulate interdependence between roles in emergency response without simulating the concrete environment.
Kautz, Tiffany F; Guerbois, Mathilde; Khanipov, Kamil; Yun, Ruimei; Warmbrod, Kelsey L; Fofanov, Yuriy; Weaver, Scott C; Forrester, Naomi L
2018-01-01
During RNA virus replication, there is the potential to incorporate mutations that affect virulence or pathogenesis. For live-attenuated vaccines, this has implications for stability, as replication may result in mutations that either restore the wild-type phenotype via reversion or compensate for the attenuating mutations by increasing virulence (pseudoreversion). Recent studies have demonstrated that altering the mutation rate of an RNA virus is an effective attenuation tool. To validate the safety of low-fidelity mutations for increasing vaccine attenuation, several mutations in the RNA-dependent RNA polymerase (RdRp) were tested in the live-attenuated Venezuelan equine encephalitis virus vaccine strain, TC-83. Next-generation sequencing after passage in the presence of mutagens revealed one mutant containing three mutations in the RdRp, TC-83 3x, to have decreased replication fidelity, while a second mutant, TC-83 4x, displayed no change in fidelity but shared many phenotypic characteristics with TC-83 3x. Both mutants exhibited increased, albeit inconsistent, attenuation in an infant mouse model, as well as increased immunogenicity and complete protection against lethal challenge in an adult murine model compared with the parent TC-83. During serial passaging in a highly permissive model, the mutants increased in virulence but remained less virulent than the parent TC-83. These results suggest that the incorporation of low-fidelity mutations into the RdRp of live-attenuated vaccines for RNA viruses can confer increased immunogenicity while showing some evidence of increased attenuation. However, while in theory such constructs may result in more effective vaccines, the instability of the vaccine phenotype decreases the likelihood of this being an effective vaccine strategy. PMID:29593882
Beck, Alison Kate; Baker, Amanda; Britton, Ben; Wratten, Chris; Bauer, Judith; Wolfenden, Luke; Carter, Gregory
2015-10-15
The confidence with which researchers can comment on intervention efficacy relies on evaluation and consideration of intervention fidelity. Accordingly, there have been calls to increase the transparency with which fidelity methodology is reported. Despite this, consideration and/or reporting of fidelity methods remains poor. We seek to address this gap by describing the methodology for promoting and facilitating the evaluation of intervention fidelity in The EAT (Eating As Treatment) project: a multi-site stepped wedge randomised controlled trial of a dietitian delivered behaviour change counselling intervention to improve nutrition (primary outcome) in head and neck cancer patients undergoing radiotherapy. In accordance with recommendations from the National Institutes of Health Behaviour Change Consortium Treatment Fidelity Workgroup, we sought to maximise fidelity in this stepped wedge randomised controlled trial via strategies implemented from study design through to provider training, intervention delivery and receipt. As the EAT intervention is designed to be incorporated into standard dietetic consultations, we also address unique challenges for translational research. We offer a strong model for improving the quality of translational findings via real world application of National Institutes of Health Behaviour Change Consortium recommendations. Greater transparency in the reporting of behaviour change research is an important step in improving the progress and quality of behaviour change research. ACTRN12613000320752 (Date of registration 21 March 2013).
NOYMER, ANDREW
2009-01-01
This paper describes two related epidemic models of rumor transmission in an age-structured population. Rumors share with communicable disease certain basic aspects, which means that formal models of epidemics may be applied to the transmission of rumors. The results show that rumors may become entrenched very quickly and persist for a long time, even when skeptics are modeled to take an active role in trying to convince others that the rumor is false. This is a macrophenomenon, because individuals eventually cease to believe the rumor but are replaced by new recruits. This replacement of former believers by new ones is an aspect of all the models, but the approach to stability is quicker, and involves a smaller chance of extinction, in the model where skeptics actively try to counter the rumor, as opposed to the model where interest is naturally lost by believers. Skeptics hurt their own cause. The result shows that including age, or a variable for which age is a proxy (e.g., experience), can improve model fidelity and yield important insights. PMID:20351799
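The rumor-as-epidemic idea can be illustrated with the classical Daley-Kendall compartment structure (ignorants, spreaders, stiflers), a simpler cousin of the age-structured models in this paper; the sketch below omits age structure and the active-skeptic dynamics, and uses forward-Euler integration with hypothetical rate constants:

```python
def daley_kendall(x, y, z, beta=1.0, gamma=1.0, dt=0.01, steps=5000):
    """Forward-Euler integration of the Daley-Kendall rumor model.

    x: ignorants (never heard the rumor)
    y: spreaders (believe and transmit it)
    z: stiflers (heard it but no longer spread it)
    Spreading: an ignorant meeting a spreader becomes a spreader (rate beta);
    a spreader meeting another spreader or a stifler becomes a stifler
    (rate gamma). Rate constants here are illustrative, not fitted.
    """
    for _ in range(steps):
        dx = -beta * x * y
        dz = gamma * y * (y + z)
        dy = -dx - dz          # total population is conserved
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z

x, y, z = daley_kendall(0.99, 0.01, 0.0)
```

As in the paper's models, the spreader class eventually empties while a residual fraction of the population never hears the rumor at all; entrenchment arises when new recruits replace former believers.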
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
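The ‘splitting’ step can be made concrete: given a partition of the state indices (in the paper, obtained offline by recursive k-means clustering of snapshot data), a basis vector is split into children with disjoint support that sum back to the parent. A minimal illustration with hypothetical data:

```python
def split_basis_vector(v, partition):
    """Split basis vector v into children with disjoint support.

    partition -- list of index sets covering range(len(v)) exactly once,
                 e.g. produced offline by k-means clustering of the states.
    Each child keeps v's entries on its index set and is zero elsewhere,
    so the children have disjoint support and sum to the parent vector.
    """
    return [[v[i] if i in idx_set else 0.0 for i in range(len(v))]
            for idx_set in partition]

# Hypothetical 4-state example split into two clusters
children = split_basis_vector([1.0, 2.0, 3.0, 4.0], [{0, 1}, {2, 3}])
```

Replacing a parent with its children enlarges the reduced basis online; in the fully refined limit the basis spans the full-order state space, which is why any prescribed error tolerance can be met.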
A Hybrid Process Fidelity Assessment in a Home-based Randomized Clinical Trial
WILDE, MARY H.; LIEBEL, DIANNE; FAIRBANKS, EILEEN; WILSON, PAULA; LASH, MARGARET; SHAH, SHIVANI; McDONALD, MARGARET V.; BRASCH, JUDITH; ZHANG, FENG; SCHEID, EILEEN; McMAHON, JAMES M.
2016-01-01
A process fidelity assessment was conducted as a nested study within a home-based randomized clinical trial teaching self-management to 101 long-term indwelling urinary catheter users in the treatment group. Our hybrid model combined external assessments (outside observations and tape recordings) with internal evaluation methods (through study nurse forms and notes) for a comprehensive process fidelity assessment. Barriers, patient-related issues, and nurse perspectives were identified demonstrating the complexity in home care intervention research. The complementary and synergistic approaches provided in depth information about the context of the delivery and the impact of the intervention on study outcomes. PMID:25894688
High-Fidelity Quantum Logic Gates Using Trapped-Ion Hyperfine Qubits.
Ballance, C J; Harty, T P; Linke, N M; Sepiol, M A; Lucas, D M
2016-08-05
We demonstrate laser-driven two-qubit and single-qubit logic gates with respective fidelities 99.9(1)% and 99.9934(3)%, significantly above the ≈99% minimum threshold level required for fault-tolerant quantum computation, using qubits stored in hyperfine ground states of calcium-43 ions held in a room-temperature trap. We study the speed-fidelity trade-off for the two-qubit gate, for gate times between 3.8 μs and 520 μs, and develop a theoretical error model which is consistent with the data and which allows us to identify the principal technical sources of infidelity.
ERIC Educational Resources Information Center
Cevallos, Pedro F., Jr.
2009-01-01
This dissertation was a single case study with Green Dot Public Schools (GDPS) describing their rapid scale-up process. Specifically, it investigates the phenomenon of the inherent tension between maintaining the fidelity of the original model school's design, culture and values with local adaptation of the brand by stakeholders at the expansion…
ERIC Educational Resources Information Center
Schechter, Rachel L.; Kazakoff, Elizabeth R.; Bundschuh, Kristine; Prescott, Jen Elise; Macaruso, Paul
2017-01-01
The number of K-12 classrooms adopting blended learning models is rapidly increasing and represents a cultural shift in teaching and learning; however, fidelity of implementation of these new blended learning programs varies widely. This study aimed to examine the role of teacher engagement in student motivation and achievement in a blended…
High-Fidelity Multidisciplinary Design Optimization of Aircraft Configurations
NASA Technical Reports Server (NTRS)
Martins, Joaquim R. R. A.; Kenway, Gaetan K. W.; Burdette, David; Jonsson, Eirikur; Kennedy, Graeme J.
2017-01-01
To evaluate new airframe technologies we need design tools based on high-fidelity models that consider multidisciplinary interactions early in the design process. The overarching goal of this NRA is to develop tools that enable high-fidelity multidisciplinary design optimization of aircraft configurations, and to apply these tools to the design of high aspect ratio flexible wings. We developed a geometry engine that is capable of quickly generating conventional and unconventional aircraft configurations, including the internal structure. This geometry engine features adjoint derivative computation for efficient gradient-based optimization. We also added overset capability to a computational fluid dynamics solver, complete with an adjoint implementation and semiautomatic mesh generation. We also developed an approach to constraining buffet and started the development of an approach for constraining flutter. On the applications side, we developed a new common high-fidelity model for aeroelastic studies of high aspect ratio wings. We performed optimal design trade-offs between fuel burn and aircraft weight for metal, conventional composite, and carbon nanotube composite wings. We also assessed a continuous morphing trailing edge technology applied to high aspect ratio wings. This research resulted in the publication of 26 manuscripts so far, and the developed methodologies were used in two other NRAs.
NASA Technical Reports Server (NTRS)
Kim, Won S.; Bejczy, Antal K.
1993-01-01
A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds communication time delay has been demonstrated on a large laboratory scale in May 1993, involving the Jet Propulsion Laboratory as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing set-up. The technique is based on a high-fidelity calibration procedure that enables a high-fidelity overlay of 3-D graphics robot arm and object models over given 2-D TV camera images of robot arm and objects. To generate robot arm motions, the operator can confidently interact in real time with the graphics models of the robot arm and objects overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved by this technique for a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin at the insertion of a 45 cm long tool in the servicing task.
Gautestad, Arild O; Mysterud, Atle
2013-01-01
The Lévy flight foraging hypothesis predicts a transition from scale-free Lévy walk (LW) to scale-specific Brownian motion (BM) as an animal moves from a resource-poor towards a resource-rich environment. However, the LW-BM continuum implies a premise of memory-less search, which contradicts the cognitive capacity of vertebrates. We describe methods to test whether apparent support for LW-BM transitions may rather be a statistical artifact of movement under varying intensity of site fidelity. A higher frequency of returns to previously visited patches (stronger site fidelity) may erroneously be interpreted as a switch from LW towards BM. Simulations of scale-free, memory-enhanced space use illustrate how the ratio between return events and scale-free exploratory movement translates to varying strength of site fidelity. An expanded analysis of GPS data from 18 female red deer, Cervus elaphus, strengthens previous empirical support for memory-enhanced and scale-free space use in a northern forest ecosystem. A statistical mechanical model architecture that describes foraging under environment-dependent variation of site fidelity may allow for higher realism of optimal search models and movement ecology in general, in particular for vertebrates with high cognitive capacity.
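The LW-BM contrast above comes down to the step-length distribution: scale-free power-law steps versus steps clustered around a characteristic scale. A small sketch using inverse-CDF sampling; the exponent and scale parameters are illustrative, not fitted to the red deer data:

```python
import math
import random

def levy_steps(n, mu=2.0, l0=1.0, seed=0):
    # Power-law step lengths p(l) ~ l^(-mu) for l >= l0: scale-free, so
    # occasional very long relocations dominate the walk.
    rng = random.Random(seed)
    return [l0 * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0)) for _ in range(n)]

def brownian_steps(n, sigma=1.0, seed=0):
    # Rayleigh-distributed step lengths of a 2-D Gaussian (Brownian) walk:
    # steps cluster around a characteristic scale set by sigma.
    rng = random.Random(seed)
    return [sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
            for _ in range(n)]

lw = levy_steps(10_000)
bm = brownian_steps(10_000)
```

The heavy tail of the LW sample is exactly what frequent site-fidelity returns can mask: truncating long exploratory moves with returns makes a scale-free walker's apparent step distribution resemble BM.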
Kalichman, Seth C; Hudd, Katie; Diberto, Giorgio
2010-08-01
Evidence-based interventions are often disseminated in public health education with little known about their operational fidelity. This study examined the delivery of intervention components (operational fidelity) of a widely disseminated HIV prevention program designed for people living with HIV/AIDS named Healthy Relationships. Two hundred ninety-nine agencies that had been trained in the intervention by the Centers for Disease Control and Prevention were contacted, and 122 (41%) completed confidential interviews. Among the 93 agencies that implemented the program, 39 (40%) adapted at least one core element activity, and 21 (23%) dropped an activity. Most adaptations were intended to improve the community fit of the intervention. Agencies believed that funders demand that they implement the intervention with fidelity. Models of technology transfer that emphasize behavior change processes rather than specific curriculum content may advance prevention program dissemination.
NASA Astrophysics Data System (ADS)
Mera, Bruno; Vlachou, Chrysoula; Paunković, Nikola; Vieira, Vítor R.; Viyuela, Oscar
2018-03-01
We study finite-temperature dynamical quantum phase transitions (DQPTs) by means of the fidelity- and interferometric Loschmidt echo (LE) induced metrics. We analyze the associated dynamical susceptibilities (Riemannian metrics) and derive analytic expressions for the case of two-band Hamiltonians. At zero temperature the two quantities are identical; at finite temperatures, however, they behave very differently. Under the fidelity LE, the zero-temperature DQPTs are gradually washed away with temperature, while the interferometric counterpart exhibits finite-temperature phase transitions. We analyze the physical differences between the two finite-temperature LE generalizations and argue that, while the interferometric one is more sensitive and can therefore provide more information when applied to genuine quantum (microscopic) systems, the fidelity-based counterpart is a more suitable quantity for analyzing many-body macroscopic systems. Finally, we apply the previous results to two representative models of topological insulators in one and two dimensions.
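For orientation, the standard zero-temperature Loschmidt echo underlying both finite-temperature generalizations is the return probability of the initial state after a quench, with DQPTs defined as nonanalyticities of its rate function (textbook definitions, not specific to this paper):

```latex
\mathcal{L}(t) \;=\; \bigl|\langle \psi_0 \,|\, e^{-iHt} \,|\, \psi_0 \rangle\bigr|^2,
\qquad
\lambda(t) \;=\; -\lim_{N \to \infty} \frac{1}{N} \ln \mathcal{L}(t),
```

where $|\psi_0\rangle$ is the pre-quench ground state, $H$ the post-quench Hamiltonian, and $N$ the system size; the fidelity and interferometric constructions extend $\mathcal{L}(t)$ to thermal states and coincide with it as $T \to 0$.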
Implementation fidelity of Multidimensional Family Therapy in an international trial.
Rowe, Cynthia; Rigter, Henk; Henderson, Craig; Gantner, Andreas; Mos, Kees; Nielsen, Philip; Phan, Olivier
2013-04-01
Implementation fidelity, a critical aspect of clinical trials research that establishes adequate delivery of the treatment as prescribed in treatment manuals and protocols, is also essential to the successful implementation of effective programs into new practice settings. Although infrequently studied in the drug abuse field, stronger implementation fidelity has been linked to better outcomes in practice but appears to be more difficult to achieve with greater distance from model developers. In the INternational CAnnabis Need for Treatment (INCANT) multi-national randomized clinical trial, investigators tested the effectiveness of Multidimensional Family Therapy (MDFT) in comparison to individual psychotherapy (IP) in Brussels, Berlin, Paris, The Hague, and Geneva with 450 adolescents with a cannabis use disorder and their parents. This study reports on the implementation fidelity of MDFT across these five Western European sites in terms of treatment adherence, dose and program differentiation, and discusses possible implications for international implementation efforts. Copyright © 2013 Elsevier Inc. All rights reserved.
Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions
NASA Astrophysics Data System (ADS)
van Hooidonk, R.; Huber, M.
2012-03-01
Future widespread coral bleaching and subsequent mortality has been projected using sea surface temperature (SST) data derived from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. Such weaknesses most likely reduce the accuracy of predicting coral bleaching, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends, and their propagation in predictions. To analyze the relative importance of various types of model errors and biases in predicting coral bleaching, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from the 20th-century simulations of 24 GCMs included in the Intergovernmental Panel on Climate Change (IPCC) 4th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate accuracy using an objective measure of forecast quality, the Peirce skill score (PSS). Major findings are that: (1) predictions are most sensitive to the seasonal cycle and inter-annual variability in the ENSO 24-60 months frequency band and (2) because models tend to understate the seasonal cycle at reef locations, they systematically underestimate future bleaching. The methodology we describe can be used to improve the accuracy of bleaching predictions by characterizing the errors and uncertainties involved in the predictions.
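The Peirce skill score used here is computed from a 2x2 forecast contingency table as hit rate minus false-alarm rate. A minimal sketch (the event counts below are made up for illustration):

```python
def peirce_skill_score(hits, false_alarms, misses, correct_negatives):
    """Peirce skill score (also called the true skill statistic):
    hit rate minus false-alarm rate. Ranges from -1 to 1;
    0 means no skill over a random forecast."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate

# A forecast that catches 18 of 24 observed bleaching events,
# with 10 false alarms out of 60 non-events:
pss = peirce_skill_score(hits=18, false_alarms=10, misses=6,
                         correct_negatives=50)
# 18/24 - 10/60 = 0.75 - 0.167 ~= 0.58
```

Because both rates are conditioned on what was observed, the PSS is insensitive to the base rate of bleaching events, which makes it suitable for comparing skill across reef locations with different event frequencies.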
Exploring Speech Recognition Technology: Children with Learning and Emotional/Behavioral Disorders.
ERIC Educational Resources Information Center
Faris-Cole, Debra; Lewis, Rena
2001-01-01
Intermediate grade students with disabilities in written expression and emotional/behavioral disorders were trained to use discrete or continuous speech input devices for written work. The study found extreme variability in the fidelity of the two devices, PowerSecretary and Dragon NaturallySpeaking, ranging from 49 to 87 percent. Both devices…
NASA Astrophysics Data System (ADS)
Kerr, Andrew D.
Determining optimal imaging settings and best practices for the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant contemporary literature on the use of consumer-grade DSLR cameras for remote sensing and on the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial rather than an airborne collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the intended application will in part dictate the lowest usable f-stop and allow the user to select a more optimal shutter speed and ISO.
The single most important capture variable is exposure bias (EV): a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occur at exposure biases of about -0.7 to -0.3 EV. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
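The apparent-image-motion blur trade-off described above can be sketched with first-order kinematics: blur in pixels is ground speed times shutter time divided by ground sample distance. This is a generic approximation, not the authors' specific AIM model:

```python
def motion_blur_pixels(ground_speed_m_s, shutter_s, gsd_m):
    """Apparent-image-motion blur, in pixels, for a platform moving at
    ground_speed_m_s over a scene imaged at ground sample distance gsd_m.
    First-order kinematic estimate; ignores vibration and rotation."""
    return ground_speed_m_s * shutter_s / gsd_m

def max_shutter_for_blur(ground_speed_m_s, gsd_m, max_blur_px=0.5):
    """Longest shutter time that keeps AIM blur under max_blur_px pixels."""
    return max_blur_px * gsd_m / ground_speed_m_s

# e.g. a 20 m/s platform, 2 cm GSD, 1/1000 s shutter -> 1 pixel of blur
blur = motion_blur_pixels(20.0, 1 / 1000, 0.02)
```

Once the shutter ceiling is fixed this way, the remaining exposure must come from aperture and ISO, which is why the acceptable f-stop range and the ISO 100 gain finding constrain each other.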
Knowlden, Adam P; Sharma, Manoj
2014-09-01
Family-and-home-based interventions are an important vehicle for preventing childhood obesity. Systematic process evaluations have not been routinely conducted in assessment of these interventions. The purpose of this study was to plan and conduct a process evaluation of the Enabling Mothers to Prevent Pediatric Obesity Through Web-Based Learning and Reciprocal Determinism (EMPOWER) randomized controlled trial. The trial was composed of two web-based, mother-centered interventions for prevention of obesity in children between 4 and 6 years of age. Process evaluation used the components of program fidelity, dose delivered, dose received, context, reach, and recruitment. Categorical process evaluation data (program fidelity, dose delivered, dose received, and context) were assessed using Program Implementation Index (PII) values. Continuous process evaluation variables (dose satisfaction and recruitment) were assessed using ANOVA tests to evaluate mean differences between groups (experimental and control) and sessions (sessions 1 through 5). Process evaluation results found that both groups (experimental and control) were equivalent, and interventions were administered as planned. Analysis of web-based intervention process objectives requires tailoring of process evaluation models for online delivery. Dissemination of process evaluation results can advance best practices for implementing effective online health promotion programs. © 2014 Society for Public Health Education.
Wolfe, Barrett W; Lowe, Christopher G
2015-08-01
White croaker (Genyonemus lineatus family: Sciaenidae) are a schooling, benthic foraging fish historically associated with soft sediment and wastewater outfalls in southern California. While they are often used as an indicator species due to their high organochlorine contaminant loads, little is known of their movements in relation to contaminated habitats. A Vemco Positioning System acoustic telemetry array was used to collect fine-scale movement data and characterize the site fidelity, area use, and dispersal of 83 white croaker on the Palos Verdes Shelf Superfund Site, California over 27 months. White croaker generally demonstrated low residency and recurrence to the Palos Verdes Shelf, and were observed to be largely nomadic. However, individual behavior was highly variable. Although the entire monitored shelf was visited by tagged white croaker, habitats within 0-200 m of wastewater outfalls and between 25 and 35 m depth were used most frequently. Approximately half of white croaker migrated into Los Angeles and Long Beach Harbors, areas where they may be targeted by subsistence fishers. A model framework for incorporating fish movement data into contaminant exposure estimates was developed to better understand organochlorine contaminant exposure for planning future remediation and monitoring. Copyright © 2015 Elsevier Ltd. All rights reserved.
Simulation Assisted Risk Assessment: Blast Overpressure Modeling
NASA Technical Reports Server (NTRS)
Lawrence, Scott L.; Gee, Ken; Mathias, Donovan; Olsen, Michael
2006-01-01
A probabilistic risk assessment (PRA) approach has been developed and applied to the risk analysis of capsule abort during ascent. The PRA is used to assist in the identification of modeling and simulation applications that can significantly impact the understanding of crew risk during this potentially dangerous maneuver. The PRA approach is also being used to identify the appropriate level of fidelity for the modeling of those critical failure modes. The Apollo launch escape system (LES) was chosen as a test problem for application of this approach. Failure modes that have been modeled and/or simulated to date include explosive overpressure-based failure, explosive fragment-based failure, land landing failures (range limits exceeded either near launch or Mode III trajectories ending on the African continent), capsule-booster re-contact during separation, and failure due to plume-induced instability. These failure modes have been investigated using analysis tools in a variety of technical disciplines at various levels of fidelity. The current paper focuses on the development and application of a blast overpressure model for the prediction of structural failure due to overpressure, including the application of high-fidelity analysis to predict near-field and headwinds effects.
Toward fidelity between specification and implementation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing
1994-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
Point-of-care ultrasound education: the increasing role of simulation and multimedia resources.
Lewiss, Resa E; Hoffmann, Beatrice; Beaulieu, Yanick; Phelan, Mary Beth
2014-01-01
This article reviews the current technology, literature, teaching models, and methods associated with simulation-based point-of-care ultrasound training. Patient simulation appears particularly well suited for learning point-of-care ultrasound, which is a required core competency for emergency medicine and other specialties. Work hour limitations have reduced the opportunities for clinical practice, and simulation enables practicing a skill multiple times before it may be used on patients. Ultrasound simulators can be categorized into 2 groups: low and high fidelity. Low-fidelity simulators are usually static simulators, meaning that they have nonchanging anatomic examples for sonographic practice. Advantages are that the model may be reused over time, and some simulators can be homemade. High-fidelity simulators are usually high-tech and frequently consist of many computer-generated cases of virtual sonographic anatomy that can be scanned with a mock probe. This type of equipment is produced commercially and is more expensive. High-fidelity simulators provide students with an active and safe learning environment and make a reproducible standardized assessment of many different ultrasound cases possible. The advantages and disadvantages of using low- versus high-fidelity simulators are reviewed. An additional concept used in simulation-based ultrasound training is blended learning. Blended learning may include face-to-face or online learning often in combination with a learning management system. Increasingly, with simulation and Web-based learning technologies, tools are now available to medical educators for the standardization of both ultrasound skills training and competency assessment.
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Reed, John A.; Ryder, Robert; Veres, Joseph P.
2004-01-01
A Zero-D cycle simulation of the GE90-94B high bypass turbofan engine has been achieved utilizing mini-maps generated from a high-fidelity simulation. The simulation utilizes the Numerical Propulsion System Simulation (NPSS) thermodynamic cycle modeling system coupled to a high-fidelity full-engine model represented by a set of coupled 3D computational fluid dynamic (CFD) component models. Boundary conditions from the balanced, steady state cycle model are used to define component boundary conditions in the full-engine model. Operating characteristics of the 3D component models are integrated into the cycle model via partial performance maps generated from the CFD flow solutions using one-dimensional mean line turbomachinery programs. This paper highlights the generation of the high-pressure compressor, booster, and fan partial performance maps, as well as turbine maps for the high pressure and low pressure turbine. These are actually "mini-maps" in the sense that they are developed only for a narrow operating range of the component. Results are compared between actual cycle data at a take-off condition and the comparable condition utilizing these mini-maps. The mini-maps are also presented with comparison to actual component data where possible.
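A mini-map of this kind is a component performance table restricted to a narrow operating band, which the cycle model queries by interpolation. A hedged sketch with made-up compressor numbers (the actual NPSS map formats and the GE90 data are not reproduced here), using bilinear interpolation in corrected speed and corrected flow:

```python
import numpy as np

# Hypothetical compressor "mini-map": pressure ratio tabulated over a
# narrow band of corrected speed (rows) and corrected mass flow (columns),
# both as fractions of the design point. Values are illustrative only.
speeds = np.array([0.95, 1.00, 1.05])
flows = np.array([0.90, 0.95, 1.00, 1.05])
pr_map = np.array([
    [1.52, 1.50, 1.47, 1.43],
    [1.60, 1.58, 1.55, 1.51],
    [1.68, 1.66, 1.63, 1.59],
])

def lookup_pr(speed, flow):
    """Bilinear interpolation on the mini-map; only meaningful inside the
    narrow operating band the map was generated for."""
    i = np.clip(np.searchsorted(speeds, speed) - 1, 0, len(speeds) - 2)
    j = np.clip(np.searchsorted(flows, flow) - 1, 0, len(flows) - 2)
    ts = (speed - speeds[i]) / (speeds[i + 1] - speeds[i])
    tf = (flow - flows[j]) / (flows[j + 1] - flows[j])
    top = (1 - tf) * pr_map[i, j] + tf * pr_map[i, j + 1]
    bot = (1 - tf) * pr_map[i + 1, j] + tf * pr_map[i + 1, j + 1]
    return (1 - ts) * top + ts * bot

pr = lookup_pr(1.00, 0.95)   # lands on a grid point -> 1.58
```

In the paper's workflow the table entries come from mean-line post-processing of 3D CFD solutions rather than test data, but the cycle-model side of the coupling is the same kind of table lookup.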
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutland, Christopher J.
2009-04-26
The Terascale High-Fidelity Simulations of Turbulent Combustion (TSTC) project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of the approach is direct numerical simulation (DNS) featuring the highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. Under this component of the TSTC program the simulation code named S3D, developed and shared with coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for turbulent liquid fuel spray dynamics. Major accomplishments include improved fundamental understanding of mixing and auto-ignition in multi-phase turbulent reactant mixtures and turbulent fuel injection spray jets.
NASA Technical Reports Server (NTRS)
Carr, Peter C.; Mckissick, Burnell T.
1988-01-01
A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.
Schmidt, Barbara; Watt, Kerrianne; McDermott, Robyn; Mills, Jane
2017-07-17
Better systems of care are required to address chronic disease in Indigenous people to ensure they can access all their care needs. Health research has produced evidence about effective models of care and chronic disease strategies to address Indigenous health, however the transfer of research findings into routine clinical practice has proven challenging. Complex interventions, such as those related to chronic disease, have many components that are often poorly implemented and hence rarely achieve implementation fidelity. Implementation fidelity is "the degree to which programs are implemented as intended by the program developer". Knowing if an intervention was implemented as planned is fundamental to knowing what has contributed to the success of an intervention. The aim of this study is to adapt the implementation fidelity framework developed by Keith et al. and apply it to the intervention implemented in phase 1 of the Getting Better at Chronic Care in North Queensland study. The objectives are to quantify the level of implementation fidelity achieved during phase 1 of the study, measure the association between implementation fidelity and health outcomes and to explore the features of the primary health care system that contributed to improved health outcomes. A convergent parallel mixed methods study design will be used to develop a process for assessing implementation fidelity. Information collected via a questionnaire and routine data generated during phase 1 of the study will be used to explain the context for the intervention in each site and develop an implementation fidelity score for each component of the intervention. A weighting will be applied to each component of the intervention to calculate the overall implementation score for each participating community. Statistical analysis will assess the level of association between implementation fidelity scores and health outcomes. 
Health services research seeks to find solutions to social and technical problems to improve health outcomes. The development of a tool and methodology for assessing implementation fidelity in the Indigenous primary health care context will help address some of the barriers to the translation of research into practice. ACTRN12610000812099 : 29.9.2010.
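The weighted overall score described in the protocol can be sketched as a weighted average of per-component fidelity scores. The study's actual components, scoring rubric, and weights are not specified in the abstract, so everything below is illustrative:

```python
def implementation_fidelity_score(component_scores, weights):
    """Weighted overall implementation-fidelity score for one site.
    component_scores: fraction of each intervention component delivered
    as planned (0-1). weights: relative importance of each component.
    Illustrative only; the study's weighting scheme may differ."""
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(component_scores, weights)) / total_weight

# e.g. three components, the first weighted double:
score = implementation_fidelity_score([0.9, 0.5, 1.0], [2, 1, 1])
# (0.9*2 + 0.5 + 1.0) / 4 = 0.825
```

A per-site score of this form is what the protocol proposes to correlate with health outcomes in the statistical analysis.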
Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Gumbert, Clyde
2017-01-01
The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design under uncertainty efforts. In this study, multifidelity is used to describe both the fidelity of the modeling of the physical systems and the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. The results of both showed that the multifidelity modeling approach was able to reproduce the output uncertainty predicted by the high-fidelity model at a significant reduction in computational cost.
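Non-intrusive point-collocation polynomial chaos fits the chaos coefficients by least squares on sampled model evaluations; a common multifidelity variant corrects a cheap-model expansion with a low-order expansion of the discrepancy at a few expensive samples. A sketch with toy one-dimensional models (not the paper's airfoil or aircraft problems, and the paper's exact correction scheme may differ):

```python
import numpy as np

def legendre_basis(x, order):
    """Legendre polynomials P_0..P_order evaluated at points x in [-1, 1],
    built with the three-term recurrence."""
    x = np.asarray(x, dtype=float)
    cols = [np.ones_like(x), x]
    for n in range(1, order):
        cols.append(((2 * n + 1) * x * cols[n] - n * cols[n - 1]) / (n + 1))
    return np.column_stack(cols[: order + 1])

def pce_fit(x, y, order):
    """Point collocation: least-squares fit of the chaos coefficients."""
    return np.linalg.lstsq(legendre_basis(x, order), y, rcond=None)[0]

def pce_eval(coeffs, x):
    return legendre_basis(x, len(coeffs) - 1) @ coeffs

# Hypothetical models of a single uncertain input xi ~ U(-1, 1):
low_fi = lambda xi: np.sin(2 * xi)                    # cheap model
high_fi = lambda xi: np.sin(2 * xi) + 0.1 * xi ** 2   # expensive model

x_lo = np.linspace(-1, 1, 40)            # many cheap evaluations
c_lo = pce_fit(x_lo, low_fi(x_lo), order=6)
x_hi = np.linspace(-1, 1, 5)             # few expensive evaluations
c_corr = pce_fit(x_hi, high_fi(x_hi) - pce_eval(c_lo, x_hi), order=2)

def multifid_surrogate(xi):
    """Low-fidelity expansion plus an additive low-order correction."""
    return pce_eval(c_lo, xi) + pce_eval(c_corr, xi)
```

Output statistics (mean, variance, percentiles) are then taken from the surrogate at negligible cost, which is the source of the computational savings reported in the abstract.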
Tunability of the circadian action of tetrachromatic solid-state light sources
NASA Astrophysics Data System (ADS)
Žukauskas, A.; Vaicekauskas, R.
2015-01-01
An approach to the optimization of the spectral power distribution of solid-state light sources with the tunable non-image forming photobiological effect on the human circadian rhythm is proposed. For tetrachromatic clusters of model narrow-band (direct-emission) light-emitting diodes (LEDs), the limiting tunability of the circadian action factor (CAF), which is the ratio of the circadian efficacy to luminous efficacy of radiation, was established as a function of constraining color fidelity and luminous efficacy of radiation. For constant correlated color temperatures (CCTs), the CAF of the LED clusters can be tuned above and below that of the corresponding blackbody radiators, whereas for variable CCT, the clusters can have circadian tunability covering that of a temperature-tunable blackbody radiator.
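The circadian action factor is a ratio of two weighted spectral integrals: the SPD weighted by a circadian sensitivity function over the SPD weighted by the photopic luminosity function. A sketch with Gaussian stand-ins for both the LED spectra and the weighting functions (real tabulated V(lambda) and circadian sensitivity data should be used in practice, and the normalization here is arbitrary):

```python
import numpy as np

wl = np.arange(380.0, 781.0)  # wavelength grid in nm, 1 nm steps

def gaussian(peak, fwhm):
    sigma = fwhm / 2.3548  # FWHM -> standard deviation
    return np.exp(-0.5 * ((wl - peak) / sigma) ** 2)

# Assumed Gaussian stand-ins for the photopic luminosity function
# V(lambda), peaking near 555 nm, and a circadian sensitivity function
# c(lambda), peaking near 460 nm.
V = gaussian(555.0, 100.0)
c = gaussian(460.0, 90.0)

def circadian_action_factor(spd):
    """Ratio of circadian to luminous weighting of a spectral power
    distribution (uncalibrated; uniform 1 nm grid, so sums stand in
    for the integrals)."""
    return (spd * c).sum() / (spd * V).sum()

# Tetrachromatic LED cluster: four narrow-band emitters with mixing weights.
peaks = [450.0, 520.0, 590.0, 640.0]

def cluster_spd(weights):
    return sum(w * gaussian(p, 25.0) for w, p in zip(weights, peaks))

# Raising the blue channel's share raises the circadian action.
low_blue = circadian_action_factor(cluster_spd([0.1, 1.0, 1.0, 1.0]))
high_blue = circadian_action_factor(cluster_spd([1.0, 1.0, 1.0, 1.0]))
```

The optimization the paper describes searches over such mixing weights while holding CCT constant and constraining color fidelity and luminous efficacy, which is what bounds the achievable CAF tuning range.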
The neural dynamics of task context in free recall.
Polyn, Sean M; Kragel, James E; Morton, Neal W; McCluey, Joshua D; Cohen, Zachary D
2012-03-01
Multivariate pattern analysis (MVPA) is a powerful tool for relating theories of cognitive function to the neural dynamics observed while people engage in cognitive tasks. Here, we use the Context Maintenance and Retrieval model of free recall (CMR; Polyn et al., 2009a) to interpret variability in the strength of task-specific patterns of distributed neural activity as participants study and recall lists of words. The CMR model describes how temporal and source-related (here, encoding task) information combine in a contextual representation that is responsible for guiding memory search. Each studied word in the free-recall paradigm is associated with one of two encoding tasks (size and animacy) that have distinct neural representations during encoding. We find evidence for the context retrieval hypothesis central to the CMR model: Task-specific patterns of neural activity are reactivated during memory search, as the participant recalls an item previously associated with a particular task. Furthermore, we find that the fidelity of these task representations during study is related to task-shifting, the serial position of the studied item, and variability in the magnitude of the recency effect across participants. The CMR model suggests that these effects may be related to a central parameter of the model that controls the rate that an internal contextual representation integrates information from the surrounding environment. Copyright © 2011 Elsevier Ltd. All rights reserved.
Diekemper, Rebecca L.; Irwin, Richard S.; Adams, Todd M.; Altman, Kenneth W.; Barker, Alan F.; Birring, Surinder S.; Blackhall, Fiona; Bolser, Donald C.; Boulet, Louis-Philippe; Braman, Sidney S.; Brightling, Christopher; Callahan-Lyon, Priscilla; Canning, Brendan J.; Chang, Anne B.; Coeytaux, Remy; Cowley, Terrie; Davenport, Paul; Diekemper, Rebecca L.; Ebihara, Satoru; El Solh, Ali A.; Escalante, Patricio; Feinstein, Anthony; Field, Stephen K.; Fisher, Dina; French, Cynthia T.; Gibson, Peter; Gold, Philip; Gould, Michael K.; Grant, Cameron; Harding, Susan M.; Harnden, Anthony; Hill, Adam T.; Irwin, Richard S.; Kahrilas, Peter J.; Keogh, Karina A.; Lane, Andrew P.; Lim, Kaiser; Malesker, Mark A.; Mazzone, Peter; Mazzone, Stuart; McCrory, Douglas C.; McGarvey, Lorcan; Molasiotis, Alex; Murad, M. Hassan; Newcombe, Peter; Nguyen, Huong Q.; Oppenheimer, John; Prezant, David; Pringsheim, Tamara; Restrepo, Marcos I.; Rosen, Mark; Rubin, Bruce; Ryu, Jay H.; Smith, Jaclyn; Tarlo, Susan M.; Vertigan, Anne E.; Wang, Gang; Weinberger, Miles; Weir, Kelly
2015-01-01
BACKGROUND: Successful management of chronic cough has varied in the primary research studies in the reported literature. One of the potential reasons relates to a lack of intervention fidelity to the core elements of the diagnostic and/or therapeutic interventions that were meant to be used by the investigators. METHODS: We conducted a systematic review to summarize the evidence supporting intervention fidelity as an important methodologic consideration in assessing the effectiveness of clinical practice guidelines used for the diagnosis and management of chronic cough. We developed and used a tool to assess for five areas of intervention fidelity. Medline (PubMed), Scopus, and the Cochrane Database of Systematic Reviews were searched from January 1998 to May 2014. Guideline recommendations and suggestions for those conducting research using guidelines or protocols to diagnose and manage chronic cough in the adult were developed and voted upon using CHEST Organization methodology. RESULTS: A total of 23 studies (17 uncontrolled prospective observational, two randomized controlled, and four retrospective observational) met our inclusion criteria. These articles included 3,636 patients. Data could not be pooled for meta-analysis because of heterogeneity. Findings related to the five areas of intervention fidelity included three areas primarily related to the provider and two primarily related to the patients. In the area of study design, 11 of 23 studies appeared to be underpinned by a single guideline/protocol; for training of providers, two of 23 studies reported training, and zero of 23 reported the use of an intervention manual; and for the area of delivery of treatment, when assessing the treatment of gastroesophageal reflux disease, three of 23 studies appeared consistent with the most recent guideline/protocol referenced by the authors. 
For receipt of treatment, zero of 23 studies mentioned measuring concordance of patient-interventionist understanding of the treatment recommended, and zero of 23 mentioned measuring enactment of treatment, with three of 23 measuring side effects and two of 23 measuring adherence. The overall average intervention fidelity score for all 23 studies was poor (20.74 out of 48). CONCLUSIONS: Only low-quality evidence supports that intervention fidelity strategies were used when conducting primary research in diagnosing and managing chronic cough in adults. This supports the contention that some of the variability in the reporting of patients with unexplained or unresolved chronic cough may be due to lack of intervention fidelity. By following the recommendations and suggestions in this article, researchers will likely be better able to incorporate strategies to address intervention fidelity, thereby strengthening the validity and generalizability of their results that provide the basis for the development of trustworthy guidelines. PMID:25764280
Five-wave-packet quantum error correction based on continuous-variable cluster entanglement
Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi
2015-01-01
Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error; the original theoretical model was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous-variable cluster entangled state of light are used as five encoding channels. In particular, in our encoding scheme the information of the input state is distributed over only three of the five channels, so any error appearing in the remaining two channels never affects the output state; i.e., the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395
Human dynamic orientation model applied to motion simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Borah, J. D.
1976-01-01
The Ormsby model of dynamic orientation, in the form of a discrete time computer program was used to predict non-visually induced sensations during an idealized coordinated aircraft turn. To predict simulation fidelity, the Ormsby model was used to assign penalties for incorrect attitude and angular rate perceptions. It was determined that a three rotational degree of freedom simulation should remain faithful to attitude perception even at the expense of incorrect angular rate sensations. Implementing this strategy, a simulation profile for the idealized turn was designed for a Link GAT-1 trainer. A simple optokinetic display was added to improve the fidelity of roll rate sensations.
A high fidelity real-time simulation of a small turboshaft engine
NASA Technical Reports Server (NTRS)
Ballin, Mark G.
1988-01-01
A high-fidelity component-type model and real-time digital simulation of the General Electric T700-GE-700 turboshaft engine were developed for use with current generation real-time blade-element rotor helicopter simulations. A control system model based on the specification fuel control system used in the UH-60A Black Hawk helicopter is also presented. The modeling assumptions and real-time digital implementation methods particular to the simulation of small turboshaft engines are described. The validity of the simulation is demonstrated by comparison with analysis-oriented simulations developed by the manufacturer, available test data, and flight-test time histories.
NASA Astrophysics Data System (ADS)
Rodriguez Marco, Albert
Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function description of battery-cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption, which is a fundamental necessity in order to form transfer functions, and (2) an assumption, made out of expedience, that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state of charge, temperature, and C-rate) in order to extend the cell operating range.
This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) along the cell state-of-charge, temperature and C-rate range.
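The blending idea described above can be illustrated with a minimal sketch: outputs of ROMs pre-computed at a few state-of-charge setpoints are combined with interpolation weights that sum to one. The function name and the linear weighting scheme are illustrative assumptions, not the dissertation's actual blending algorithm:

```python
import numpy as np

def blend_rom_outputs(setpoints, rom_outputs, operating_point):
    """Blend the outputs of ROMs pre-computed at different SOC setpoints.

    `setpoints` is a sorted 1-D sequence of state-of-charge values at which
    ROMs were generated; `rom_outputs` holds each ROM's output at the
    current instant.  Between two setpoints the outputs are combined with
    linear weights that sum to one; outside the covered range the nearest
    model is used.  (Illustrative scheme only.)
    """
    return float(np.interp(operating_point,
                           np.asarray(setpoints, dtype=float),
                           np.asarray(rom_outputs, dtype=float)))
```

For example, with ROMs generated at SOC 0.2, 0.5 and 0.8, an operating point of 0.65 weights the middle and upper models equally; richer schemes would blend across temperature and C-rate as well.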
Klewicki, J. C.; Chini, G. P.; Gibson, J. F.
2017-01-01
Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585
ERIC Educational Resources Information Center
Escartí, Amparo; Llopis-Goig, Ramon; Wright, Paul M.
2018-01-01
Purpose: The Teaching Personal and Social Responsibility (TPSR) model was developed to foster responsibility and teach life skills that transfer to various settings. The purpose of this study was to assess the implementation fidelity of a school-based TPSR program in physical education and other subject areas. Method: Systematic observation was…
ERIC Educational Resources Information Center
Olson, Jonathan R.; Welsh, Janet A.; Perkins, Daniel F.
2015-01-01
In this article, we describe how the recent movement towards evidence-based programming has impacted Extension. We review how the emphasis on implementing such programs with strict fidelity to an underlying program model may be at odds with Extension's strong history of adapting programming to meet the unique needs of children, youth, families,…
Validation of Survivability Validation Protocols
1993-05-01
simulation fidelity. Physical testing of P.i SOS, in either aboveground tests (AGTs) or underground tests (UGTs), will usually be impossible, due...with some simulation fidelity compromises) are possible in UGTs and/or AGTs. Hence proof tests, if done in statistically significant numbers, can...level. Simulation fidelity and AGT/UGT/threat correlation will be validation issues here. Extrapolation to threat environments will be done via modeling
Conceptual design and analysis of a dynamic scale model of the Space Station Freedom
NASA Technical Reports Server (NTRS)
Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.
1994-01-01
This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluating the full-scale design and analysis databases, conducting scale factor trade studies, and performing design sensitivity studies. The scale factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive the design, performance and cost of a SSF dynamic scale model. Four scale model options were evaluated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, a 1/4-scale model is recommended because of the increased model fidelity associated with the larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower-fidelity dynamic similarity scaling can be used for non-critical components.
Numerical Investigation of Vertical Plunging Jet Using a Hybrid Multifluid–VOF Multiphase CFD Solver
Shonibare, Olabanji Y.; Wardle, Kent E.
2015-06-28
A novel hybrid multiphase flow solver has been used to conduct simulations of a vertical plunging liquid jet. This solver combines a multifluid methodology with selective interface sharpening to enable simulation of both the initial jet impingement and the long-time entrained bubble plume phenomena. Models are implemented for variable bubble size capturing and dynamic switching of interface sharpened regions to capture transitions between the initially fully segregated flow types into the dispersed bubbly flow regime. It was found that the solver was able to capture the salient features of the flow phenomena under study, and areas for quantitative improvement have been explored and identified. In particular, a population balance approach is employed, and detailed calibration of the underlying models with experimental data is required to enable quantitative prediction of bubble size and distribution to capture the transition between segregated and dispersed flow types with greater fidelity.
Geometric saliency to characterize radar exploitation performance
NASA Astrophysics Data System (ADS)
Nolan, Adam; Keserich, Brad; Lingg, Andrew; Goley, Steve
2014-06-01
Based on the fundamental scattering mechanisms of facetized computer-aided design (CAD) models, we are able to define expected contributions (EC) to the radar signature. The net result of this analysis is the prediction of the salient aspects and contributing vehicle morphology based on the aspect. Although this approach does not provide the fidelity of an asymptotic electromagnetic (EM) simulation, it does provide very fast estimates of the unique scattering that can be consumed by a signature exploitation algorithm. The speed of this approach is particularly relevant when considering the high dimensionality of target configuration variability due to articulating parts which are computationally burdensome to predict. The key scattering phenomena considered in this work are the specular response from a single bounce interaction with surfaces and dihedral response formed between the ground plane and vehicle. Results of this analysis are demonstrated for a set of civilian target models.
NASA Astrophysics Data System (ADS)
Vijayan, Rohan; Conley, Rebekah H.; Thompson, Reid C.; Clements, Logan W.; Miga, Michael I.
2016-03-01
Brain shift describes the deformation that the brain undergoes from mechanical and physiological effects typically during a neurosurgical or neurointerventional procedure. With respect to image guidance techniques, brain shift has been shown to compromise the fidelity of these approaches. In recent work, a computational pipeline has been developed to predict "brain shift" based on preoperatively determined surgical variables (such as head orientation), and subsequently correct preoperative images to more closely match the intraoperative state of the brain. However, a clinical workflow difficulty in the execution of this pipeline has been acquiring the surgical variables by the neurosurgeon prior to surgery. In order to simplify and expedite this process, an Android, Java-based application designed for tablets was developed to provide the neurosurgeon with the ability to orient 3D computer graphic models of the patient's head, determine expected location and size of the craniotomy, and provide the trajectory into the tumor. These variables are exported for use as inputs for the biomechanical models of the preoperative computing phase for the brain shift correction pipeline. The accuracy of the application's exported data was determined by comparing it to data acquired from the physical execution of the surgeon's plan on a phantom head. Results indicated good overlap of craniotomy predictions, craniotomy centroid locations, and estimates of patient's head orientation with respect to gravity. However, improvements in the app interface and mock surgical setup are needed to minimize error.
A systematic review of evidence for education and training interventions in microsurgery.
Ghanem, Ali M; Hachach-Haram, Nadine; Leung, Clement Chi Ming; Myers, Simon Richard
2013-07-01
Over the past decade, driven by advances in educational theory and pressures for efficiency in the clinical environment, there has been a shift in surgical education and training towards enhanced simulation training. Microsurgery is a technical skill with a steep competency learning curve on which the clinical outcome greatly depends. This paper investigates the evidence for educational and training interventions of traditional microsurgical skills courses in order to establish the best evidence practice in education and training and curriculum design. A systematic review of the MEDLINE, EMBASE, and PubMed databases was performed to identify randomized controlled trials looking at educational and training interventions that objectively improved microsurgical skill acquisition, and these were critically appraised using the BestBETs group methodology. The database searches yielded 1,148, 1,460, and 2,277 citations, respectively. These were then further limited to randomized controlled trials, from which abstract reviews reduced the number to five relevant randomized controlled clinical trials. The best evidence supported a laboratory-based, low-fidelity model microsurgical skills curriculum. There was strong evidence that technical skills acquired on low-fidelity models transfer to improved performance on higher-fidelity human cadaver models, and that self-directed practice leads to improved technical performance. Although there is a significant paucity in the literature to support current microsurgical education and training practices, simulated training on low-fidelity models in microsurgery is an effective intervention that leads to acquisition of transferable skills and improved technical performance. Further research to identify educational interventions associated with accelerated skill acquisition is required.
NASA Astrophysics Data System (ADS)
Jayakumar, A.; Turner, A. G.; Johnson, S. J.; Rajagopal, E. N.; Mohandas, Saji; Mitra, A. K.
2017-09-01
Boreal summer sub-seasonal variability in the Asian monsoon, otherwise known as the monsoon intra-seasonal oscillation (MISO), is one of the dominant modes of intraseasonal variability in the tropics, with large impacts on total monsoon rainfall and India's agricultural production. However, our understanding of the mechanisms involved in MISO is incomplete and its simulation in various numerical models is often flawed. In this study, we focus on the objective evaluation of the fidelity of MISO simulation in the Met Office Global Seasonal forecast system version 5 (GloSea5), an initialized coupled model. We analyze a series of nine-member hindcasts from GloSea5 over 1996-2009 during the peak monsoon period (July-August) over the South-Asian monsoon domain focusing on aspects of the time-mean background state and air-sea interaction processes pertinent to MISO. Dominant modes during this period are evident in power spectrum analysis, but propagation and evolution characteristics of the MISO are not realistic. We find that simulated air-sea interactions in the central Indian Ocean are not supportive of MISO initiation in that region, likely a result of the low surface wind variance there. As a consequence, the expected near-quadrature phase relationship between SST and convection is not represented properly over the central equatorial Indian Ocean, and northward propagation from the equator is poorly simulated. This may reinforce the equatorial rainfall mean state bias in GloSea5.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiao; Gao, Wenzhong; Scholbrock, Andrew
To mitigate the degraded power system inertia and undesirable primary frequency response caused by large-scale wind power integration, the frequency support capabilities of variable-speed wind turbines are studied in this work. This is made possible by controlled inertial response, which is demonstrated on a research turbine - the controls advanced research turbine, 3-bladed (CART3). Two distinct inertial control (IC) methods are analysed in terms of their impacts on the grid and on the response of the turbine itself. The released kinetic energy in the IC methods is determined by the frequency measurement or by a shaped active power reference in the turbine speed-power plane. The wind turbine model is based on the high-fidelity turbine simulator FAST (fatigue, aerodynamics, structures and turbulence), which constitutes the aggregated wind power plant model together with a simplified power converter model. The IC methods are implemented over the baseline CART3 controller and evaluated in the modified 9-bus and 14-bus test power grids, considering different wind speeds and different wind power penetration levels. The simulation results provide various insights into designing such ICs. The authors calculate the short-term dynamic equivalent loads and discuss the turbine structural loadings related to the inertial response.
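As a rough illustration of how an IC method shapes the active-power command from a frequency measurement, the following sketch combines an inertia-emulating term (proportional to the rate of change of frequency) with a droop term (proportional to the steady frequency deviation). The gains, signal names and per-unit scaling are illustrative placeholders, not the CART3 controller's actual parameters:

```python
def inertial_power_command(p_ref, freq, dfdt,
                           freq_nom=60.0, k_inertia=10.0, k_droop=20.0):
    """Supplementary active-power command during a grid-frequency event.

    p_ref : baseline power command (per unit)
    freq  : measured grid frequency (Hz); dfdt : its rate of change (Hz/s)
    Adds an inertia-emulating term opposing df/dt (releasing kinetic
    energy while frequency falls) and a droop term opposing the steady
    deviation.  Gains are illustrative, not tuned values.
    """
    dp_inertia = -k_inertia * dfdt             # inertial response term
    dp_droop = -k_droop * (freq - freq_nom)    # proportional frequency support
    return p_ref + dp_inertia + dp_droop
```

During an under-frequency event (freq below nominal, dfdt negative) both terms are positive, boosting the power command and slowing the turbine, which is the source of the structural-loading concerns the abstract raises.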
Interannual rainfall variability over China in the MetUM GA6 and GC2 configurations
NASA Astrophysics Data System (ADS)
Stephan, Claudia Christine; Klingaman, Nicholas P.; Vidale, Pier Luigi; Turner, Andrew G.; Demory, Marie-Estelle; Guo, Liang
2018-05-01
Six climate simulations of the Met Office Unified Model Global Atmosphere 6.0 and Global Coupled 2.0 configurations are evaluated against observations and reanalysis data for their ability to simulate the mean state and year-to-year variability of precipitation over China. To analyse the sensitivity to air-sea coupling and horizontal resolution, atmosphere-only and coupled integrations at atmospheric horizontal resolutions of N96, N216 and N512 (corresponding to ˜ 200, 90 and 40 km in the zonal direction at the equator, respectively) are analysed. The mean and interannual variance of seasonal precipitation are too high in all simulations over China but improve with finer resolution and coupling. Empirical orthogonal teleconnection (EOT) analysis is applied to simulated and observed precipitation to identify spatial patterns of temporally coherent interannual variability in seasonal precipitation. To connect these patterns to large-scale atmospheric and coupled air-sea processes, atmospheric and oceanic fields are regressed onto the corresponding seasonal mean time series. All simulations reproduce the observed leading pattern of interannual rainfall variability in winter, spring and autumn; the leading pattern in summer is present in all but one simulation. However, only in two simulations are the four leading patterns associated with the observed physical mechanisms. Coupled simulations capture more observed patterns of variability and associate more of them with the correct physical mechanism, compared to atmosphere-only simulations at the same resolution. However, finer resolution does not improve the fidelity of these patterns or their associated mechanisms. This shows that evaluating climate models by only geographical distribution of mean precipitation and its interannual variance is insufficient. The EOT analysis adds knowledge about coherent variability and associated mechanisms.
Effects of modeled tropical sea surface temperature variability on coral reef bleaching predictions
NASA Astrophysics Data System (ADS)
Van Hooidonk, R. J.
2011-12-01
Future widespread coral bleaching and subsequent mortality has been projected with sea surface temperature (SST) data from global, coupled ocean-atmosphere general circulation models (GCMs). While these models possess fidelity in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. These model weaknesses likely reduce the skill of coral bleaching predictions, but little attention has been paid to the important issue of understanding potential errors and biases, the interaction of these biases with trends and their propagation in predictions. To analyze the relative importance of various types of model errors and biases on coral reef bleaching predictive skill, various intra- and inter-annual frequency bands of observed SSTs were replaced with those frequencies from GCMs 20th century simulations to be included in the Intergovernmental Panel on Climate Change (IPCC) 5th assessment report. Subsequent thermal stress was calculated and predictions of bleaching were made. These predictions were compared with observations of coral bleaching in the period 1982-2007 to calculate skill using an objective measure of forecast quality, the Peirce Skill Score (PSS). This methodology will identify frequency bands that are important to predicting coral bleaching and it will highlight deficiencies in these bands in models. The methodology we describe can be used to improve future climate model derived predictions of coral reef bleaching and it can be used to better characterize the errors and uncertainty in predictions.
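The Peirce Skill Score used above is simple to compute from a 2x2 contingency table of bleaching forecasts versus observations: it is the probability of detection minus the probability of false detection, so a perfect forecast scores 1 and a constant or random forecast scores 0. A minimal sketch (argument names are illustrative):

```python
def peirce_skill_score(hits, false_alarms, misses, correct_negatives):
    """Peirce Skill Score (also called the true skill statistic).

    hits              : event forecast and observed
    false_alarms      : event forecast, not observed
    misses            : event observed, not forecast
    correct_negatives : no event forecast, none observed
    """
    pod = hits / (hits + misses)                              # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)  # probability of false detection
    return pod - pofd
```

For example, 40 hits, 20 false alarms, 10 misses and 30 correct negatives give PSS = 0.8 - 0.4 = 0.4.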
Information theoretic approach for assessing image fidelity in photon-counting arrays.
Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram
2010-02-01
The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source-image's entropy, yielding a fidelity metric that is between zero and unity, which respectively corresponds to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases with the reduction in photon-number uncertainty. As an application to the theory, an image-classification problem is considered showing a congruous relationship between the fidelity metric and classifier's performance.
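The fidelity metric described here (mutual information between source and photon-counted images, normalized by the source entropy, so it lies between zero and one) can be sketched for discretized images with a simple histogram estimator. The binning scheme is an illustrative choice, not the paper's Markov-random-field derivation:

```python
import numpy as np

def pci_fidelity(source, photon_counted, bins=16):
    """Normalized mutual information I(X;Y)/H(X) between a source image X
    and its photon-counted image Y, estimated from a joint histogram.
    Returns 0 for complete loss of information, 1 for full preservation.
    """
    hist2d, _, _ = np.histogram2d(np.ravel(source),
                                  np.ravel(photon_counted), bins=bins)
    pxy = hist2d / hist2d.sum()          # joint distribution estimate
    px = pxy.sum(axis=1)                 # source marginal
    py = pxy.sum(axis=0)                 # photon-counted marginal
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))   # source entropy
    return mi / hx if hx > 0 else 0.0
```

An identical copy of the source yields fidelity 1, while a constant (uninformative) image yields 0, matching the metric's stated range.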
The mechanism of untargeted mutagenesis in UV-irradiated yeast.
Lawrence, C W; Christensen, R B
1982-01-01
The SOS error-prone repair hypothesis proposes that untargeted and targeted mutations in E. coli both result from the inhibition of polymerase functions that normally maintain fidelity, and that this inhibition is a necessary precondition for translesion synthesis. Using mating experiments with excision-deficient strains of baker's yeast, Saccharomyces cerevisiae, we find that up to 40% of cyc1-91 revertants induced by UV are untargeted, showing that a reduction in fidelity is also found in irradiated cells of this organism. We are, however, unable to detect the induction or activation of any diffusible factor capable of inhibiting fidelity, and therefore suggest that untargeted and targeted mutations are the consequence of largely different processes. We propose that these observations are best explained in terms of a limited-fidelity model. Untargeted mutations are thought to result from the limited capacity of the processes that normally maintain fidelity, which are active during replication on both irradiated and unirradiated templates. Even moderate UV fluences saturate this capacity, leading to competition for the limited resource. Targeted mutations are believed to result from the limited, though far from negligible, capacity of lesions like pyrimidine dimers to form Watson-Crick base pairs.
NASA Astrophysics Data System (ADS)
Arnold, J.; Gutmann, E. D.; Clark, M. P.; Nijssen, B.; Vano, J. A.; Addor, N.; Wood, A.; Newman, A. J.; Mizukami, N.; Brekke, L. D.; Rasmussen, R.; Mendoza, P. A.
2016-12-01
Climate change narratives for water-resource applications must represent the change signals contextualized by hydroclimatic process variability and uncertainty at multiple scales. Building narratives of plausible change includes assessing uncertainties across GCM structure, internal climate variability, climate downscaling methods, and hydrologic models. Work with this linked modeling chain has dealt mostly with GCM sampling directed separately to either model fidelity (does the model correctly reproduce the physical processes in the world?) or sensitivity (of different model responses to CO2 forcings) or diversity (of model type, structure, and complexity). This leaves unaddressed any interactions among those measures and with other components in the modeling chain used to identify water-resource vulnerabilities to specific climate threats. However, time-sensitive, real-world vulnerability studies typically cannot accommodate a full uncertainty ensemble across the whole modeling chain, so a gap has opened between current scientific knowledge and most routine applications for climate-changed hydrology. To close that gap, the US Army Corps of Engineers, the Bureau of Reclamation, and the National Center for Atmospheric Research are working on techniques to subsample uncertainties objectively across modeling chain components and to integrate results into quantitative hydrologic storylines of climate-changed futures. Importantly, these quantitative storylines are not drawn from a small sample of models or components. Rather, they stem from the more comprehensive characterization of the full uncertainty space for each component. Equally important from the perspective of water-resource practitioners, these quantitative hydrologic storylines are anchored in actual design and operations decisions potentially affected by climate change. 
This talk will describe part of our work characterizing variability and uncertainty across modeling chain components and their interactions using newly developed observational data, models and model outputs, and post-processing tools for making the resulting quantitative storylines most useful in practical hydrology applications.
Shuaib, Aban; Hartwell, Adam; Kiss-Toth, Endre; Holcombe, Mike
2016-01-01
Signal transduction through the Mitogen Activated Protein Kinase (MAPK) pathways is evolutionarily highly conserved. Many cells use these pathways to interpret changes to their environment and respond accordingly. The pathways are central to triggering diverse cellular responses such as survival, apoptosis, differentiation and proliferation. Though the interactions between the different MAPK pathways are complex, nevertheless, they maintain a high level of fidelity and specificity to the original signal. There are numerous theories explaining how fidelity and specificity arise within this complex context; spatio-temporal regulation of the pathways and feedback loops are thought to be very important. This paper presents an agent based computational model addressing multi-compartmentalisation and how this influences the dynamics of MAPK cascade activation. The model suggests that multi-compartmentalisation coupled with periodic MAPK kinase (MAPKK) activation may be critical factors for the emergence of oscillation and ultrasensitivity in the system. Finally, the model also establishes a link between the spatial arrangements of the cascade components and temporal activation mechanisms, and how both contribute to fidelity and specificity of MAPK mediated signalling. PMID:27243235
High Fidelity BWR Fuel Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Su Jong
This report describes the Consortium for Advanced Simulation of Light Water Reactors (CASL) work conducted for completion of the Thermal Hydraulics Methods (THM) Level 3 milestone THM.CFD.P13.03: High Fidelity BWR Fuel Simulation. High-fidelity computational fluid dynamics (CFD) simulation of a Boiling Water Reactor (BWR) was conducted to investigate the applicability and robustness of BWR closures. As a preliminary study, a CFD model with simplified Ferrule spacer grid geometry from the NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) benchmark was implemented, and the performance of a multiphase segregated solver with baseline boiling closures was evaluated. Although the mean values of void fraction and exit quality of the CFD result for BFBT case 4101-61 agreed with experimental data, the local void distribution was not predicted accurately. Mesh quality was one of the critical factors in obtaining a converged result; the stability and robustness of the simulation were mainly affected by the mesh quality and the combination of BWR closure models. In addition, CFD modeling of the fully detailed spacer grid geometry with mixing vanes is necessary to improve the accuracy of the CFD simulation.
Perdikaris, Paris; Karniadakis, George Em
2016-05-01
We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space and the efficient identification of global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration-versus-exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. © 2016 The Author(s).
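The adaptive-sampling idea (use a Gaussian-process posterior to balance exploration against exploitation) can be sketched in a minimal single-fidelity form. The paper's multi-fidelity auto-regressive scheme is more elaborate; the RBF kernel, length scale and expected-improvement acquisition below are illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.2, var=1.0):
    """Squared-exponential kernel between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean and variance at x_test given noisy observations."""
    k = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    ks = rbf(x_train, x_test)
    sol = np.linalg.solve(k, ks)                       # K^{-1} K_*
    mean = sol.T @ y_train
    var = np.diag(rbf(x_test, x_test)) - np.sum(ks * sol, axis=0)
    return mean, np.maximum(var, 1e-12)

def expected_improvement(mean, var, best):
    """EI acquisition for minimization: expected gain over current best."""
    sd = np.sqrt(var)
    z = (best - mean) / sd
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (best - mean) * cdf + sd * pdf

# One step of adaptive sampling on a toy objective f(x) = (x - 0.5)^2:
x_obs = np.array([0.0, 0.3, 0.7, 1.0])
y_obs = (x_obs - 0.5) ** 2
grid = np.linspace(0.0, 1.0, 101)
mu, v = gp_posterior(x_obs, y_obs, grid)
x_next = grid[np.argmax(expected_improvement(mu, v, y_obs.min()))]
```

The EI term `(best - mean) * cdf` rewards exploitation near promising regions, while `sd * pdf` rewards exploration where the posterior variance is large, which is the trade-off the abstract describes.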
High-Fidelity Flash Lidar Model Development
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Pierrottet, Diego F.; Amzajerdian, Farzin
2014-01-01
NASA's Autonomous Landing and Hazard Avoidance Technologies (ALHAT) project is currently developing the critical technologies to safely and precisely navigate and land crew, cargo and robotic spacecraft vehicles on and around planetary bodies. One key element of this project is a high-fidelity Flash Lidar sensor that can generate three-dimensional (3-D) images of the planetary surface. These images are processed with hazard detection and avoidance and hazard relative navigation algorithms, and are subsequently used by the Guidance, Navigation and Control subsystem to generate an optimal navigation solution. A complex, high-fidelity model of the Flash Lidar was developed in order to evaluate the performance of the sensor and its interaction with the interfacing ALHAT components on vehicles with different configurations and under different flight trajectories. The model contains a parameterized, general approach to Flash Lidar detection and reflects physical attributes such as range and electronic noise sources, and laser pulse temporal and spatial profiles. It also provides the realistic interaction of the laser pulse with terrain features that include varying albedo, boulders, craters, slopes and shadows. This paper gives a description of the Flash Lidar model and presents results from the Lidar operating under different scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J; Gao, H
2016-06-15
Purpose: Unlike conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide unprecedented material-composition information. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge sharing. In this work, we propose material reconstruction methods for spectral CT with DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. The image reconstruction problem is then regularized with an isotropic TV term and solved by the alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provided more accurate material compositions than the standard method without DRF. Moreover, the proposed method with linear data fidelity improved reconstruction quality over the proposed method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with DRF, which provided more accurate material compositions than the standard methods without DRF. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
Brunette, Mary F; Asher, Dianne; Whitley, Rob; Lutz, Wilma J; Wieder, Barbara L; Jones, Amanda M; McHugo, Gregory J
2008-09-01
Approximately half of the people who have serious mental illnesses experience a co-occurring substance use disorder at some point in their lifetime. Integrated dual disorders treatment, a program to treat persons with co-occurring disorders, improves outcomes but is not widely available in public mental health settings. This report describes the extent to which this intervention was implemented by 11 community mental health centers participating in a large study of practice implementation. Facilitators and barriers to implementation are described. Trained implementation monitors conducted regular site visits over two years. During visits, monitors interviewed key informants, conducted ethnographic observations of implementation efforts, and assessed fidelity to the practice model. These data were coded and used as a basis for detailed site reports summarizing implementation processes. The authors reviewed the reports and distilled the three top facilitators and barriers for each site. The most prominent cross-site facilitators and barriers were identified. Two sites reached high fidelity, six sites reached moderate fidelity, and three sites remained at low fidelity over the two years. Prominent facilitators and barriers to implementation with moderate to high fidelity were administrative leadership, consultation and training, supervisor mastery and supervision, chronic staff turnover, and finances. Common facilitators and barriers to implementation of integrated dual disorders treatment emerged across sites. The results confirmed the importance of the use of the consultant-trainer in the model of implementation, as well as the need for intensive activities at multiple levels to facilitate implementation. Further research on service implementation is needed, including but not limited to clarifying strategies to overcome barriers.
MacLeod, Molly; Genung, Mark A; Ascher, John S; Winfree, Rachael
2016-11-01
Recent studies of mutualistic networks show that interactions between partners change across years. Both biological mechanisms and chance could drive these patterns, but the relative importance of these factors has not been separated. We established a field experiment consisting of 102 monospecific plots of 17 native plant species, from which we collected 6713 specimens of 52 bee species over four years. We used these data and a null model to determine whether bee species' foraging choices varied more or less over time beyond the variation expected by chance. Thus we provide the first quantitative definition of rewiring and fidelity as these terms are used in the literature on interaction networks. All 52 bee species varied in plant partner choice across years, but for 27 species this variation was indistinguishable from random partner choice. Another 11 species showed rewiring, varying more across years than expected by chance, while 14 species showed fidelity, indicating that they both prefer certain plant species and are consistent in those preferences across years. Our study shows that rewiring and fidelity both exist in mutualist networks, but that once sampling effects have been accounted for, they are less common than has been reported in the ecological literature. © 2016 by the Ecological Society of America.
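The null-model logic in the abstract above, asking whether a bee species' year-to-year variation in plant partners exceeds what sampling noise alone would produce, can be sketched as follows. The pooled-multinomial null, the total-variation distance statistic, and the 4-year visit matrix are all invented for illustration and need not match the paper's actual null model:

```python
import numpy as np

rng = np.random.default_rng(1)

def partner_variation(counts):
    # counts: years x plants visit matrix for one bee species.
    # Statistic: mean pairwise total-variation distance between the
    # yearly visit-proportion vectors (0 = identical choices).
    p = counts / counts.sum(axis=1, keepdims=True)
    n = len(p)
    d = [np.abs(p[i] - p[j]).sum() / 2
         for i in range(n) for j in range(i + 1, n)]
    return np.mean(d)

def null_variation(counts, n_boot=2000):
    # Null hypothesis: every year the species draws its visits from
    # the same pooled preference distribution, so any year-to-year
    # variation is multinomial sampling noise alone.
    pooled = counts.sum(axis=0)
    probs = pooled / pooled.sum()
    totals = counts.sum(axis=1)
    return np.array([
        partner_variation(np.array([rng.multinomial(t, probs) for t in totals]))
        for _ in range(n_boot)])

# Hypothetical 4-year x 3-plant visit matrix for one bee species
obs = np.array([[30, 5, 5], [4, 28, 8], [6, 6, 30], [25, 10, 5]])
sims = null_variation(obs)
p_rewire = (sims >= partner_variation(obs)).mean()
print(partner_variation(obs), p_rewire)
```

A small `p_rewire` flags rewiring (more variation than chance); a statistic far below the null distribution would instead indicate fidelity.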
Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow
NASA Astrophysics Data System (ADS)
Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca
2017-11-01
The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite the great improvement in recent years, even the most advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
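The leverage-the-correlation idea above can be sketched as a two-fidelity control-variate Monte Carlo estimator: a few paired high/low-fidelity samples estimate the optimal control weight, and many cheap low-fidelity-only samples reduce the variance. The toy models, sample sizes, and Gaussian input are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def hf(x):  # "high-fidelity" model (expensive in a real setting)
    return np.sin(x) + 0.05 * x**2

def lf(x):  # "low-fidelity" model: cheap and strongly correlated with hf
    return np.sin(x)

# Estimate E[hf(X)], X ~ N(0,1) (true value 0.05, since E[sin X] = 0
# and E[X^2] = 1), spending only n_hf expensive evaluations.
n_hf, n_lf = 400, 20_000
x_paired = rng.standard_normal(n_hf)   # HF and LF both evaluated here
x_extra = rng.standard_normal(n_lf)    # LF-only samples

y_hf, y_lf = hf(x_paired), lf(x_paired)
alpha = np.cov(y_hf, y_lf)[0, 1] / np.var(y_lf, ddof=1)  # optimal CV weight

mu_lf_all = lf(np.concatenate([x_paired, x_extra])).mean()
est = y_hf.mean() - alpha * (y_lf.mean() - mu_lf_all)    # corrected estimator
print(est)
```

Because the low-fidelity mean is known much more precisely from the large sample, the correction term cancels most of the high-fidelity sampling error without any additional HF realizations.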
Modeling human pilot cue utilization with applications to simulator fidelity assessment.
Zeyada, Y; Hess, R A
2000-01-01
An analytical investigation to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator was undertaken. Data from a NASA Ames Research Center vertical motion simulator study of a simple, single-degree-of-freedom rotorcraft bob-up/down maneuver were employed in the investigation. The study was part of a larger research effort whose ultimate goal is a methodology for determining flight simulator fidelity requirements. The study utilized a closed-loop feedback structure of the pilot/simulator system that included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle, and the motion system. With the exception of time delays that accrued in visual scene production in the simulator, visual scene effects were not included in this study. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity that occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots who participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to identify changes in simulator fidelity for the task examined.
Towards an Aero-Propulso-Servo-Elasticity Analysis of a Commercial Supersonic Transport
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Kopasakis, George; Chwalowski, Pawel; Sanetrik, Mark D.; Carlson, Jan-Renee; Silva, Walt A.; McNamara, Jack
2016-01-01
This paper covers the development of an aero-propulso-servo-elastic (APSE) model using computational fluid dynamics (CFD) and linear structural deformations. The APSE model provides the integration of the following two previously developed nonlinear dynamic simulations: a variable cycle turbofan engine and an elastic supersonic commercial transport vehicle. The primary focus of this study is to provide a means to include relevant dynamics of a turbomachinery propulsion system into the aeroelastic studies conducted during a vehicle design, which have historically neglected propulsion effects. A high fidelity CFD tool is used here for the integration platform. The elastic vehicle neglecting the propulsion system serves as a comparison of traditional approaches to the APSE results. An overview of the methodology is presented for integrating the propulsion system and elastic vehicle. Static aeroelastic analysis comparisons between the traditional and developed APSE models for wing tip deflection indicate that the propulsion system impact on the vehicle elastic response could increase the deflection by approximately ten percent.
Optimisation of driver actions in RWD race car including tyre thermodynamics
NASA Astrophysics Data System (ADS)
Maniowski, Michal
2016-04-01
The paper presents an innovative method for lap time minimisation by using genetic algorithms for a multi-objective optimisation of a race driver-vehicle model. The decision variables consist of 16 parameters responsible for actions of a professional driver (e.g. time traces for brake, accelerator and steering wheel) on a race track section with a right-hand corner. A purpose-built, high-fidelity, multibody vehicle model (called 'miMa') is described by 30 generalised coordinates and 440 parameters, crucial in motorsport. Focus is put on modelling of the tyre tread thermodynamics and its influence on race vehicle dynamics. The numerical example considers a Rear Wheel Drive BMW E36 prepared for track day events. In order to improve the section lap time (by 5%) and corner exit velocity (by 4%), a few different driving strategies are found depending on thermal conditions of semi-slick tyres. The process of the race driver adaptation to initially cold or hot tyres is explained.
Abbasi, Samira; Maran, Selva K.; Cao, Ying; Abbasi, Ataollah; Heck, Detlef H.
2017-01-01
Neural coding through inhibitory projection pathways remains poorly understood. We analyze the transmission properties of the Purkinje cell (PC) to cerebellar nucleus (CN) pathway in a modeling study using a data set recorded in awake mice containing respiratory rate modulation. We find that inhibitory transmission from tonically active PCs can transmit a behavioral rate code with high fidelity. We parameterized the required population code in PC activity and determined that 20% of PC inputs to a full compartmental CN neuron model need to be rate-comodulated for transmission of a rate code. Rate covariance in PC inputs also accounts for the high coefficient of variation in CN spike trains, while the balance between excitation and inhibition determines spike rate and local spike train variability. Overall, our modeling study can fully account for observed spike train properties of cerebellar output in awake mice, and strongly supports rate coding in the cerebellum. PMID:28617798
Dynamic Response Testing in an Electrically Heated Reactor Test Facility
NASA Astrophysics Data System (ADS)
Bragg-Sitton, Shannon M.; Morton, T. J.
2006-01-01
Non-nuclear testing can be a valuable tool in the development of a space nuclear power or propulsion system. In a non-nuclear test bed, electric heaters are used to simulate the heat from nuclear fuel. Standard testing allows one to fully assess thermal, heat transfer, and stress related attributes of a given system, but fails to demonstrate the dynamic response that would be present in an integrated, fueled reactor system. The integration of thermal hydraulic hardware tests with simulated neutronic response provides a bridge between electrically heated testing and fueled nuclear testing. By implementing a neutronic response model to simulate the dynamic response that would be expected in a fueled reactor system, one can better understand system integration issues, characterize integrated system response times and response characteristics, and assess potential design improvements at a relatively small fiscal investment. Initial system dynamic response testing was demonstrated on the integrated SAFE-100a heat pipe (HP) cooled, electrically heated reactor and heat exchanger hardware, utilizing a one-group solution to the point kinetics equations to simulate the expected neutronic response of the system. Reactivity feedback calculations were then based on a bulk reactivity feedback coefficient and measured average core temperature. This paper presents preliminary results from similar dynamic testing of a direct drive gas cooled reactor system (DDG), demonstrating the applicability of the testing methodology to any reactor type and demonstrating the variation in system response characteristics in different reactor concepts. Although the HP and DDG designs both utilize a fast spectrum reactor, the method of cooling the reactor differs significantly, leading to a variable system response that can be demonstrated and assessed in a non-nuclear test facility. Planned system upgrades to allow implementation of higher fidelity dynamic testing are also discussed. 
Proposed DDG testing will utilize a higher fidelity point kinetics model to control core power transients, and reactivity feedback will be based on localized feedback coefficients and several independent temperature measurements taken within the core block. This paper presents preliminary test results and discusses the methodology that will be implemented in follow-on DDG testing and the additional instrumentation required to implement high fidelity dynamic testing.
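The one-group point-kinetics-with-feedback scheme the two paragraphs above describe can be sketched as a forward-Euler loop: a power equation, a single delayed-neutron precursor group, and a bulk reactivity feedback driven by a lumped core temperature. Every numerical value below (kinetics parameters, feedback coefficient, thermal model) is an illustrative assumption, not SAFE-100a or DDG hardware data:

```python
# One-group point kinetics with one delayed-neutron precursor group
# and bulk temperature reactivity feedback (illustrative parameters).
beta, lam, Lam = 0.0065, 0.08, 1e-4  # delayed fraction, decay const (1/s), generation time (s)
alpha_T = -1e-5                      # bulk feedback coefficient (dk/k per K)
C_th, h_loss = 500.0, 5.0            # lumped heat capacity and heat-loss coefficient
T_ref = 300.0                        # reference core temperature (K)

def step(P, C, T, rho_ext, dt=1e-3):
    rho = rho_ext + alpha_T * (T - T_ref)        # net reactivity with feedback
    dP = ((rho - beta) / Lam) * P + lam * C      # power (neutron) equation
    dC = (beta / Lam) * P - lam * C              # precursor equation
    dT = (P - h_loss * (T - T_ref)) / C_th       # bulk thermal response
    return P + dt * dP, C + dt * dC, T + dt * dT

# Start at a 100 W equilibrium, then hold a small external reactivity step
P, C, T = 100.0, (beta / (lam * Lam)) * 100.0, T_ref
for _ in range(50_000):                          # 50 s of simulated time
    P, C, T = step(P, C, T, rho_ext=0.001)
print(P, T)
```

For a sub-dollar insertion the power rises on the delayed-neutron timescale while the negative temperature coefficient gradually cancels the inserted reactivity, which is exactly the dynamic response behavior the electrically heated tests aim to reproduce.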
Vestibular models for design and evaluation of flight simulator motion
NASA Technical Reports Server (NTRS)
Bussolari, S. R.; Sullivan, R. B.; Young, L. R.
1986-01-01
The use of spatial orientation models in the design and evaluation of control systems for motion-base flight simulators is investigated experimentally. The development of a high-fidelity motion drive controller using an optimal control approach based on human vestibular models is described. The formulation and implementation of the optimal washout system are discussed. The effectiveness of the motion washout system was evaluated by studying the response of six motion washout systems implemented on the NASA Ames Vertical Motion Simulator during a single dash-quick-stop maneuver. The effects of the motion washout system on pilot performance and simulator acceptability are examined. The data reveal that human spatial orientation models are useful for the design and evaluation of flight simulator motion fidelity.
Incorporating Non-Linear Sorption into High Fidelity Subsurface Reactive Transport Models
NASA Astrophysics Data System (ADS)
Matott, L. S.; Rabideau, A. J.; Allen-King, R. M.
2014-12-01
A variety of studies, including multiple NRC (National Research Council) reports, have stressed the need for simulation models that can provide realistic predictions of contaminant behavior during the groundwater remediation process, most recently highlighting the specific technical challenges of "back diffusion and desorption in plume models". For a typically-sized remediation site, a minimum of about 70 million grid cells are required to achieve desired cm-level thickness among low-permeability lenses responsible for driving the back-diffusion phenomena. Such discretization is nearly three orders of magnitude more than is typically seen in modeling practice using public domain codes like RT3D (Reactive Transport in Three Dimensions). Consequently, various extensions have been made to the RT3D code to support efficient modeling of recently proposed dual-mode non-linear sorption processes (e.g. Polanyi with linear partitioning) at high-fidelity scales of grid resolution. These extensions have facilitated development of exploratory models in which contaminants are introduced into an aquifer via an extended multi-decade "release period" and allowed to migrate under natural conditions for centuries. These realistic simulations of contaminant loading and migration provide high fidelity representation of the underlying diffusion and sorption processes that control remediation. Coupling such models with decision support processes is expected to facilitate improved long-term management of complex remediation sites that have proven intractable to conventional remediation strategies.
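The dual-mode sorption idea mentioned above (linear partitioning plus a Polanyi-type adsorption term) can be sketched as a simple isotherm function. The functional form and every parameter value below are hypothetical illustrations, not the actual RT3D extension or site data:

```python
import numpy as np

R, T = 8.314e-3, 293.15  # gas constant (kJ/mol/K), temperature (K)

def dual_mode_sorption(C, Cs=1100.0, Kd=0.2, Q0=50.0, a=5e-3, b=2.0):
    # Sorbed concentration q(C) [mg/kg] for aqueous concentration C [mg/L]:
    # a linear partitioning term (Kd * C) plus a Polanyi-Dubinin adsorption
    # term driven by the effective sorption potential eps (hypothetical
    # parameter values; Cs is the aqueous solubility).
    eps = R * T * np.log(Cs / C)          # kJ/mol, -> 0 as C -> Cs
    return Kd * C + Q0 * 10.0 ** (-a * eps**b)

C = np.logspace(-3, 3, 7)                 # 1 ug/L .. 1 g/L
q = dual_mode_sorption(C)
Kd_eff = q / C                            # effective (concentration-dependent) Kd
print(np.column_stack([C, q, Kd_eff]))
```

The concentration-dependent effective partition coefficient is what makes the isotherm nonlinear: desorption during remediation proceeds at a different effective retardation than loading did, which is the back-diffusion/desorption behavior the abstract highlights.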
The dynamics of fidelity over the time course of long-term memory.
Persaud, Kimele; Hemmer, Pernille
2016-08-01
Bayesian models of cognition assume that prior knowledge about the world influences judgments. Recent approaches have suggested that the loss of fidelity from working to long-term (LT) memory is simply due to an increased rate of guessing (e.g. Brady, Konkle, Gill, Oliva, & Alvarez, 2013); that is, recall is the result of either remembering (with some noise) or guessing. This stands in contrast to Bayesian models, which hold that recall is a combination of expectations learned from the environment and noisy memory representations. Here, we evaluate the time course of fidelity in LT episodic memory, and the relative contributions of prior category knowledge and guessing, using a continuous recall paradigm. At the aggregate level, performance reflects a high rate of guessing. However, when the aggregate data are partitioned by lag (i.e., the number of presentations from study to test), or are un-aggregated, performance appears to be more complex than just remembering with some noise and guessing. We implemented three models: the standard remember-guess model, a three-component remember-guess model, and a Bayesian mixture model, and evaluated these models against the data. The results emphasize the importance of taking into account the influence of prior category knowledge on memory. Copyright © 2016 Elsevier Inc. All rights reserved.
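The standard remember-guess model referenced above treats each continuous-recall error as either a noisy memory (von Mises noise around the target) or a uniform guess on the circle. A minimal EM fit of that two-component mixture on simulated data might look like the following; the generating parameters and the kappa update approximation are illustrative, not the paper's fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate recall errors (radians): with prob 1-g remember with von
# Mises noise, with prob g guess uniformly (assumed true values below).
g_true, kappa_true, n = 0.3, 8.0, 2000
remembered = rng.random(n) > g_true
err = np.where(remembered,
               rng.vonmises(0.0, kappa_true, n),
               rng.uniform(-np.pi, np.pi, n))

def em_remember_guess(err, n_iter=200):
    # EM for the von Mises + uniform mixture with the memory component
    # centered on the target; kappa is updated with the standard
    # A-inverse approximation from the weighted resultant length.
    g, kappa = 0.5, 1.0
    for _ in range(n_iter):
        vm = np.exp(kappa * np.cos(err)) / (2 * np.pi * np.i0(kappa))
        w = (1 - g) * vm / ((1 - g) * vm + g / (2 * np.pi))  # P(remembered)
        g = 1.0 - w.mean()
        r = (w * np.cos(err)).sum() / w.sum()                # resultant length
        kappa = r * (2 - r**2) / (1 - r**2)                  # approx A^{-1}(r)
    return g, kappa

g_hat, kappa_hat = em_remember_guess(err)
print(g_hat, kappa_hat)
```

Comparing such a fit against richer mixtures (e.g. adding a prior-knowledge component) is the kind of model comparison the abstract describes.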
MODELING THE TIME VARIABILITY OF SDSS STRIPE 82 QUASARS AS A DAMPED RANDOM WALK
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacLeod, C. L.; Ivezic, Z.; Bullock, E.
2010-10-01
We model the time variability of ≈9000 spectroscopically confirmed quasars in SDSS Stripe 82 as a damped random walk (DRW). Using 2.7 million photometric measurements collected over 10 yr, we confirm the results of Kelly et al. and Kozlowski et al. that this model can explain quasar light curves at an impressive fidelity level (0.01-0.02 mag). The DRW model provides a simple, fast (O(N) for N data points), and powerful statistical description of quasar light curves by a characteristic timescale (τ) and an asymptotic rms variability on long timescales (SF∞). We searched for correlations between these two variability parameters and physical parameters such as luminosity, black hole mass, and rest-frame wavelength. Our analysis shows SF∞ to increase with decreasing luminosity and rest-frame wavelength as observed previously, and without a correlation with redshift. We find a correlation between SF∞ and black hole mass with a power-law index of 0.18 ± 0.03, independent of the anti-correlation with luminosity. We find that τ increases with increasing wavelength with a power-law index of 0.17, remains nearly constant with redshift and luminosity, and increases with increasing black hole mass with a power-law index of 0.21 ± 0.07. The amplitude of variability is anti-correlated with the Eddington ratio, which suggests a scenario where optical fluctuations are tied to variations in the accretion rate. However, we find an additional dependence on luminosity and/or black hole mass that cannot be explained by the trend with Eddington ratio. The radio-loudest quasars have systematically larger variability amplitudes by about 30%, when corrected for the other observed trends, while the distribution of their characteristic timescale is indistinguishable from that of the full sample.
We do not detect any statistically robust differences in the characteristic timescale and variability amplitude between the full sample and the small subsample of quasars detected by ROSAT. Our results provide a simple quantitative framework for generating mock quasar light curves, such as those currently used in LSST image simulations.
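A DRW light curve of the kind this framework generates can be simulated exactly at irregular epochs with AR(1) updates, since the DRW is an Ornstein-Uhlenbeck process with stationary variance SF∞²/2. The τ, SF∞, mean magnitude, and epoch sampling below are arbitrary illustration values:

```python
import numpy as np

def drw_light_curve(t, tau, sf_inf, mean_mag, rng):
    # Exact AR(1) updates of a damped random walk sampled at arbitrary
    # (even irregular) epochs t; stationary variance is sf_inf**2 / 2,
    # so the asymptotic structure function is sf_inf.
    sigma2 = sf_inf**2 / 2.0
    x = np.empty(len(t))
    x[0] = rng.normal(0.0, np.sqrt(sigma2))
    for i in range(1, len(t)):
        a = np.exp(-(t[i] - t[i - 1]) / tau)
        x[i] = a * x[i - 1] + rng.normal(0.0, np.sqrt(sigma2 * (1 - a**2)))
    return mean_mag + x

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 3650, 500))  # ~10 yr of irregular epochs (days)
mag = drw_light_curve(t, tau=200.0, sf_inf=0.2, mean_mag=19.0, rng=rng)
print(mag.std())  # theoretical stationary scatter is sf_inf / sqrt(2)
```

Because the update is exact for any time step, the same routine handles seasonal gaps and uneven cadences without sub-stepping, which is why the DRW is convenient for mock light curves.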
Finite-size effects in Anderson localization of one-dimensional Bose-Einstein condensates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cestari, J. C. C.; Foerster, A.; Gusmao, M. A.
We investigate the disorder-induced localization transition in Bose-Einstein condensates for the Anderson and Aubry-Andre models in the noninteracting limit using exact diagonalization. We show that, in addition to the standard superfluid fraction, other tools such as the entanglement and fidelity can provide clear signatures of the transition. Interestingly, the fidelity exhibits good sensitivity even for small lattices. Effects of the system size on these quantities are analyzed in detail, including the determination of a finite-size-scaling law for the critical disorder strength in the case of the Anderson model.
The Kepler End-to-End Model: Creating High-Fidelity Simulations to Test Kepler Ground Processing
NASA Technical Reports Server (NTRS)
Bryson, Stephen T.; Jenkins, Jon M.; Peters, Dan J.; Tenenbaum, Peter P.; Klaus, Todd C.; Gunter, Jay P.; Cote, Miles T.; Caldwell, Douglas A.
2010-01-01
The Kepler mission is designed to detect the transit of Earth-like planets around Sun-like stars by observing 100,000 stellar targets. Developing and testing the Kepler ground-segment processing system, in particular the data analysis pipeline, requires high-fidelity simulated data. This simulated data is provided by the Kepler End-to-End Model (ETEM). ETEM simulates the astrophysics of planetary transits and other phenomena, properties of the Kepler spacecraft and the format of the downlinked data. Major challenges addressed by ETEM include the rapid production of large amounts of simulated data, extensibility and maintainability.
Flexible aircraft dynamic modeling for dynamic analysis and control synthesis
NASA Technical Reports Server (NTRS)
Schmidt, David K.
1989-01-01
The linearization and simplification of a nonlinear, literal model for flexible aircraft is highlighted. Areas of model fidelity that are critical if the model is to be used for control system synthesis are developed and several simplification techniques that can deliver the necessary model fidelity are discussed. These techniques include both numerical and analytical approaches. An analytical approach, based on first-order sensitivity theory is shown to lead not only to excellent numerical results, but also to closed-form analytical expressions for key system dynamic properties such as the pole/zero factors of the vehicle transfer-function matrix. The analytical results are expressed in terms of vehicle mass properties, vibrational characteristics, and rigid-body and aeroelastic stability derivatives, thus leading to the underlying causes for critical dynamic characteristics.
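The closed-form pole sensitivities mentioned above rest on standard first-order eigenvalue perturbation theory: with right-eigenvector matrix X, each eigenvalue's derivative with respect to a parameter p is the corresponding diagonal entry of X⁻¹ (dA/dp) X. The 1-DOF oscillator and its stiffness parameter below are invented for illustration, not the paper's aircraft model:

```python
import numpy as np

def pole_sensitivities(A, dA):
    # First-order eigenvalue (pole) sensitivities: X^{-1} A X = diag(lam),
    # so d(lam_k)/dp = [X^{-1} (dA/dp) X]_{kk}.
    lam, X = np.linalg.eig(A)
    return lam, np.diag(np.linalg.inv(X) @ dA @ X)

# Toy 1-DOF oscillator in state-space form, state x = [pos, vel]:
# dx/dt = A x, with parameter p perturbing the stiffness entry.
A = np.array([[0.0, 1.0], [-4.0, -0.4]])
dA = np.array([[0.0, 0.0], [-1.0, 0.0]])  # dA/dp
lam, sens = pole_sensitivities(A, dA)
# Analytic check: lam**2 + 0.4*lam + (4 + p) = 0 at p = 0 gives
# d(lam)/dp = -1 / (2*lam + 0.4).
print(lam, sens)
```

This is the kind of analytical expression that links a pole movement back to a physical parameter (here stiffness), mirroring how the paper ties dynamic characteristics to mass, vibrational, and stability-derivative data.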
NASA Technical Reports Server (NTRS)
Justh, H. L.; Justus, C. G.
2007-01-01
The new Mars-GRAM auxiliary profile capability, using data from TES observations, mesoscale model output, or other sources, allows a potentially higher fidelity representation of the atmosphere and a more accurate way of estimating inherent uncertainty in atmospheric density and winds. Figure 3 indicates that, with the nominal value rpscale=1, Mars-GRAM perturbations would tend to overestimate observed or mesoscale-modeled variability. To better represent TES and mesoscale model density perturbations, rpscale values as low as about 0.4 could be used. Some trajectory model implementations of Mars-GRAM allow the user to dynamically change rpscale and rwscale values with altitude. Figure 4 shows that an rwscale value of about 1.2 would better replicate wind standard deviations from MRAMS or MMM5 simulations at the Gale, Terby, or Melas sites. By adjusting the rpscale and rwscale values in Mars-GRAM based on figures such as Figures 3 and 4, we can provide more accurate end-to-end simulations for EDL at the candidate MSL landing sites.
Population growth and collapse in a multiagent model of the Kayenta Anasazi in Long House Valley.
Axtell, Robert L; Epstein, Joshua M; Dean, Jeffrey S; Gumerman, George J; Swedlund, Alan C; Harburger, Jason; Chakravarty, Shubha; Hammond, Ross; Parker, Jon; Parker, Miles
2002-05-14
Long House Valley in the Black Mesa area of northeastern Arizona (U.S.) was inhabited by the Kayenta Anasazi from about 1800 before Christ to about anno Domini 1300. These people were prehistoric ancestors of the modern Pueblo cultures of the Colorado Plateau. Paleoenvironmental research based on alluvial geomorphology, palynology, and dendroclimatology permits accurate quantitative reconstruction of annual fluctuations in potential agricultural production (kg of maize per hectare). The archaeological record of Anasazi farming groups from anno Domini 200-1300 provides information on a millennium of sociocultural stasis, variability, change, and adaptation. We report on a multiagent computational model of this society that closely reproduces the main features of its actual history, including population ebb and flow, changing spatial settlement patterns, and eventual rapid decline. The agents in the model are monoagriculturalists, who decide both where to situate their fields as well as the location of their settlements. Nutritional needs constrain fertility. Agent heterogeneity, difficult to model mathematically, is demonstrated to be crucial to the high fidelity of the model.
Modeling interfacial fracture in Sierra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang
2013-09-01
This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step, and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.
Mihalic, Sharon F; Fagan, Abigail A; Argamaso, Susanne
2008-01-18
Widespread replication of effective prevention programs is unlikely to affect the incidence of adolescent delinquency, violent crime, and substance use until the quality of implementation of these programs by community-based organizations can be assured. This paper presents the results of a process evaluation employing qualitative and quantitative methods to assess the extent to which 432 schools in 105 sites implemented the LifeSkills Training (LST) drug prevention program with fidelity. Regression analysis was used to examine factors influencing four dimensions of fidelity: adherence, dosage, quality of delivery, and student responsiveness. Although most sites faced common barriers, such as finding room in the school schedule for the program, gaining full support from key participants (i.e., site coordinators, principals, and LST teachers), ensuring teacher participation in training workshops, and classroom management difficulties, most schools involved in the project implemented LST with very high levels of fidelity. Across sites, 86% of program objectives and activities required in the three-year curriculum were delivered to students. Moreover, teachers were observed using all four recommended teaching practices, and 71% of instructors taught all the required LST lessons. Multivariate analyses found that highly rated LST program characteristics and better student behavior were significantly related to a greater proportion of material taught by teachers (adherence). Instructors who rated the LST program characteristics as ideal were more likely to teach all lessons (dosage). Student behavior and use of interactive teaching techniques (quality of delivery) were positively related. No variables were related to student participation (student responsiveness). Although difficult, high implementation fidelity by community-based organizations can be achieved. 
This study suggests some important factors that organizations should consider to ensure fidelity, such as selecting programs with features that minimize complexity while maximizing flexibility. Time constraints in the classroom should be considered when choosing a program. Student behavior also influences program delivery, so schools should train teachers in the use of classroom management skills. This project involved comprehensive program monitoring and technical assistance that likely facilitated the identification and resolution of problems and contributed to the overall high quality of implementation. Schools should recognize the importance of training and technical assistance to ensure quality program delivery.
Coordinating DNA polymerase traffic during high and low fidelity synthesis.
Sutton, Mark D
2010-05-01
With the discovery that organisms possess multiple DNA polymerases (Pols) displaying different fidelities, processivities, and activities came the realization that mechanisms must exist to manage the actions of these diverse enzymes to prevent gratuitous mutations. Although many of the Pols encoded by most organisms are largely accurate, and participate in DNA replication and DNA repair, a sizeable fraction display a reduced fidelity, and act to catalyze potentially error-prone translesion DNA synthesis (TLS) past lesions that persist in the DNA. Striking the proper balance between use of these different enzymes during DNA replication, DNA repair, and TLS is essential for ensuring accurate duplication of the cell's genome. This review highlights mechanisms that organisms utilize to manage the actions of their different Pols. A particular emphasis is placed on discussion of current models for how different Pols switch places with each other at the replication fork during high fidelity replication and potentially error-prone TLS. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Reinke, Wendy M; Herman, Keith C; Stormont, Melissa; Newcomer, Lori; David, Kimberly
2013-11-01
Many school-based interventions to promote student mental health rely on teachers as implementers. Thus, understanding the interplay between the multiple domains of fidelity to the intervention and intervention support systems such as coaching and teacher implementation of new skills is an important aspect of implementation science. This study describes a systematic process for assessing multiple domains of fidelity. Data from a larger efficacy trial of the Incredible Years Teacher Classroom Management (IY TCM) program are utilized. Data on fidelity to the IY TCM workshop training sessions and onsite weekly coaching indicate that workshop leaders and the IY TCM coach implemented the training and coaching model with adequate adherence. Further, workshop leaders' ratings of engagement were associated with teacher implementation of specific praise, following training on this content. Lastly, the IY TCM coach differentiation of teacher exposure to coaching was evaluated and found to be associated with teacher implementation of classroom management practices and student disruptive behavior.
Automating Initial Guess Generation for High Fidelity Trajectory Optimization Tools
NASA Technical Reports Server (NTRS)
Villa, Benjamin; Lantoine, Gregory; Sims, Jon; Whiffen, Gregory
2013-01-01
Many academic studies in spaceflight dynamics rely on simplified dynamical models, such as restricted three-body models or averaged forms of the equations of motion of an orbiter. In practice, the end result of these preliminary orbit studies needs to be transformed into more realistic models, in particular to generate good initial guesses for high-fidelity trajectory optimization tools like Mystic. This paper reviews and extends some of the approaches used in the literature to perform such a task, and explores the inherent trade-offs of such a transformation with a view toward automating it for the case of ballistic arcs. Sample test cases in the libration point regimes and small body orbiter transfers are presented.
Determining the effect of key climate drivers on global hydropower production
NASA Astrophysics Data System (ADS)
Galelli, S.; Ng, J. Y.; Lee, D.; Block, P. J.
2017-12-01
Accounting for about 17% of total global electrical power production, hydropower is arguably the world's main renewable energy source and a key asset for meeting Paris climate agreement targets. A key component of hydropower production is water availability, which depends on both precipitation and multiple drivers of climate variability acting at different spatial and temporal scales. To understand how these drivers impact global hydropower production, we study the relation between four patterns of ocean-atmosphere climate variability (i.e., El Niño Southern Oscillation, Pacific Decadal Oscillation, North Atlantic Oscillation, and Atlantic Multidecadal Oscillation) and monthly time series of electrical power production for over 1,500 hydropower reservoirs—obtained via simulation with a high-fidelity dam model forced with 20th century climate conditions. Notably, significant relationships between electrical power production and climate variability are found in many climate-sensitive regions globally, including North and South America, East Asia, West Africa, and Europe. Coupled interactions from multiple, simultaneous climate drivers are also evaluated. Finally, we highlight the importance of using these climate drivers as an additional source of information within reservoir operating rules where skillful predictability of inflow exists.
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2017-04-01
This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the linear NMF-based approaches from the literature.
Predictors of outcomes of assertive outreach teams: a 3-year follow-up study in North East England.
Carpenter, John; Luce, Anna; Wooff, David
2011-06-01
Assertive outreach (AO) is a required component of services for people with severe mental illness in England. However, the claims to its effectiveness have been contested and the relationships between team organisation, including model fidelity, the use of mental health interventions and outcomes for service users remain unclear. A three-year follow-up of 33 AO teams was conducted using standardised measures of model fidelity and mental health interventions, and of current location and a range of outcomes for service users (n = 628). Predictors of the number of hospital admissions, mental health and social functioning at T2, and discharge from the team as 'improved' were modelled using multivariate regression analyses. Teams had moderate mean ratings of fidelity to the AO model. All rated highly on the core intervention modalities of engagement, assessment and care co-ordination, but ratings for psychosocial interventions were comparatively low. Two-thirds (462) of service users were still in AO and data were returned on 400 (87%). There was evidence of small improvements in mental health and social functioning and a reduction in the mean number of hospital admissions in the previous 2 years (from 2.09 to 1.39). Poor outcomes were predicted variously by service users' characteristics, previous psychiatric history, poor collaboration with services, homelessness and dual diagnosis. Fidelity to the AO model did not emerge as a predictor of outcome, but the team working for extended hours was associated with more frequent in-patient admissions and less likelihood of discharge from AO. Supportive interventions in daily living, together with the team's use of family and psychological interventions, were also associated with poorer outcomes. Possible explanations for these unexpected findings are considered.
AO appears to have been quite successful in keeping users engaged over a substantial period and to have an impact in supporting many people to live in the community and to avoid the necessity of psychiatric hospital admission. However, teams should focus on those with a history of hospital admissions, who do not engage well with services and for whom outcomes are less good. Psychosocial interventions should be applied. The relationship between model fidelity, team organisation, mental health interventions and outcomes is not straightforward and deserves further study.
A Transfer of Training Study of Control Loader Dynamics
NASA Technical Reports Server (NTRS)
Cardullo, Frank M.; Stanco, Anthony A.; Kelly, Lon C.; Houck, Jacob A.; Grube, Richard C.
2011-01-01
The control inceptor used in a simulated vehicle is an important part in maintaining the fidelity of a simulation. The force feedback provided by the control inceptor gives the operator important cues to maintain adequate performance. The dynamics of a control inceptor are typically based on a second-order spring-mass-damper system with damping, force gradient, breakout force, and natural frequency parameters. Changing these parameters can have a great effect on pilot or driver control of the vehicle. The neuromuscular system has a very important role in manipulating the control inceptor within a vehicle. Many studies by McRuer, Aponso, and Hess have dealt with modeling the neuromuscular system and quantifying the effects of a high fidelity control loader as compared to a low fidelity control loader. Humans are adaptive in nature and their control behavior changes based on different control loader dynamics. Humans will change their control behavior to maintain tracking bandwidth and minimize tracking error. This paper reports on a quasi-transfer of training experiment which was performed at the NASA Langley Research Center. The quasi-transfer of training study used a high fidelity control loader and a low fidelity control loader. Subjects trained in both simulations and then were transferred to the high fidelity control loader simulation. The parameters for the high fidelity control loader were determined from the literature. The low fidelity control loader parameters were found through testing of a simple computer joystick. A disturbance compensatory task is employed. The compensatory task involves implementing a simple horizon out-the-window display. A disturbance consisting of a sum of sines is used. The task consists of the subject compensating for the disturbance on the roll angle of the aircraft. The vehicle dynamics are represented as 1/s and 1/s^2. The subjects try to maintain level flight throughout the experiment.
The subjects consist of non-pilots to remove any effects of pilot experience. First, this paper discusses the implementation of the disturbance compensation task. Second, the high and low fidelity parameters used within the experiment are presented. Finally, an explanation of results from the experiments is presented.
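The disturbance driving the roll-tracking task is a sum of sines. A minimal sketch of such a signal generator follows; the frequencies, amplitudes, and phases below are illustrative assumptions, not the values used in the NASA Langley experiment. In practice, the component frequencies are usually chosen as non-harmonic multiples of the run length so the disturbance appears unpredictable to the subject.

```python
import numpy as np

def sum_of_sines(t, freqs_hz, amps, phases):
    """Sum-of-sines disturbance signal for a compensatory tracking task:
    d(t) = sum_k A_k * sin(2*pi*f_k*t + phi_k)."""
    t = np.asarray(t, dtype=float)
    return sum(a * np.sin(2 * np.pi * f * t + p)
               for f, a, p in zip(freqs_hz, amps, phases))

# Illustrative parameters only (not the study's values).
freqs = [0.1, 0.23, 0.51, 1.07, 2.3]   # Hz, roughly logarithmically spaced
amps = [1.0, 0.8, 0.5, 0.3, 0.1]       # deg of roll disturbance
phases = [0.0, 1.1, 2.3, 0.7, 1.9]     # rad, fixed so runs are repeatable
t = np.linspace(0.0, 120.0, 12000, endpoint=False)  # 120 s run at 100 Hz
d = sum_of_sines(t, freqs, amps, phases)
```

The fixed phase set makes every run of the disturbance identical across subjects, which is the usual practice so that tracking error is comparable between training conditions.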
Replication fidelity improvement of PMMA microlens array based on weight evaluation and optimization
NASA Astrophysics Data System (ADS)
Jiang, Bing-yan; Shen, Long-jiang; Peng, Hua-jiang; Yin, Xiang-lin
2007-12-01
High replication fidelity is a prerequisite of high quality plastic microlens arrays in injection molding. However, no economical and practical method has so far been available to evaluate and improve replication fidelity. Based on part weight evaluation and optimization, this paper presents a new method of replication fidelity improvement. Firstly, a simplified analysis model of PMMA micro column arrays (5×16) with 200μm diameter was set up. The Flow 3D module of Moldflow MPI 6.0, based on the Navier-Stokes equations, was then used to calculate the weight of the micro column arrays in injection molding. The effects of processing parameters (melt temperature, mold temperature, injection time, packing pressure and packing time) on the part weight were investigated in the simulations. The simulation results showed that the mold temperature and the injection time have important effects on the filling of micro columns; the optimal mold temperature and injection time for better replication fidelity could be determined from the curves of mold temperature vs part weight and injection time vs part weight. Finally, the effects of processing parameters on the part weight of the micro column arrays were studied experimentally. The experimental results showed that increasing the melt temperature and mold temperature makes the packing pressure transfer to the micro cavity more effectively through the runner system, and increases the part weight. From observations with the image measuring apparatus, it was discovered that the higher the part weight, the better the filling of the microstructures. In conclusion, part weight offers a first-order measure of the replication fidelity of micro-featured parts, providing an economical and practical route to improving the replication fidelity of microlens arrays through weight evaluation and optimization.
Validation and Verification of LADEE Models and Software
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen
2013-01-01
The Lunar Atmosphere Dust Environment Explorer (LADEE) mission will orbit the moon in order to measure the density, composition and time variability of the lunar dust environment. The ground-side and onboard flight software for the mission is being developed using a Model-Based Software methodology. In this technique, models of the spacecraft and flight software are developed in a graphical dynamics modeling package. Flight software requirements are prototyped and refined using the simulated models. After the model is shown to work as desired in this simulation framework, C-code software is automatically generated from the models. The generated software is then tested in real-time Processor-in-the-Loop and Hardware-in-the-Loop test beds. Travelling Road Show test beds were used for early integration tests with payloads and other subsystems. Traditional techniques for verifying computational sciences models are used to characterize the spacecraft simulation. A lightweight set of formal methods analysis, static analysis, formal inspection and code coverage analyses are utilized to further reduce defects in the onboard flight software artifacts. These techniques are applied early and often in the development process, iteratively increasing the capabilities of the software and the fidelity of the vehicle models and test beds.
Revisiting low-fidelity two-fluid models for gas-solids transport
NASA Astrophysics Data System (ADS)
Adeleke, Najeem; Adewumi, Michael; Ityokumbul, Thaddeus
2016-08-01
Two-phase gas-solids transport models are widely utilized for process design and automation in a broad range of industrial applications. Some of these applications include proppant transport in gaseous fracking fluids, air/gas drilling hydraulics, coal-gasification reactors and food processing units. Systems automation and real-time process optimization stand to benefit a great deal from the availability of efficient and accurate theoretical models for operations data processing. However, modeling two-phase pneumatic transport systems accurately requires a comprehensive understanding of gas-solids flow behavior. In this study we discuss the prevailing flow conditions and present low-fidelity two-fluid model equations for particulate transport. The model equations are formulated in a manner that ensures the physical flux term remains conservative despite the inclusion of solids normal stress through the empirical formula for the modulus of elasticity. A new set of Roe-Pike averages is presented for the resulting strictly hyperbolic flux term in the system of equations, which was used to develop a Roe-type approximate Riemann solver. The resulting scheme is stable regardless of the choice of flux-limiter. The model is evaluated by predicting experimental results from both pneumatic riser and air-drilling hydraulics systems. We demonstrate the impact of the numerical formulation and choice of numerical scheme on model predictions. We illustrate the capability of a low-fidelity one-dimensional two-fluid model in accurately predicting relevant flow parameters in two-phase particulate systems, even under flow regimes involving counter-current flow.
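The abstract describes a Roe-type approximate Riemann solver built on averaged wave speeds. The authors' two-fluid system is not reproduced here, but the core idea can be illustrated on a scalar analogue, the inviscid Burgers equation, where the Roe average of the flux Jacobian is just the arithmetic mean of the left and right states. The grid, initial condition, and CFL number below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def roe_flux(uL, uR):
    """Roe-type approximate Riemann flux for Burgers' equation, f(u) = u^2/2.
    The Roe average a = (uL + uR)/2 exactly linearizes the flux jump:
    f(uR) - f(uL) = a * (uR - uL)."""
    a = 0.5 * (uL + uR)
    return 0.25 * (uL**2 + uR**2) - 0.5 * np.abs(a) * (uR - uL)

def advance(u, dx, t_end, cfl=0.4):
    """First-order finite-volume update with Roe fluxes and zero-gradient
    (outflow) boundaries.  Classic caveat: without an entropy fix a Roe
    scheme can admit expansion shocks when uL < 0 < uR; the test problem
    below avoids that regime."""
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / max(float(np.max(np.abs(u))), 1e-12), t_end - t)
        ue = np.concatenate(([u[0]], u, [u[-1]]))  # ghost cells
        F = roe_flux(ue[:-1], ue[1:])              # one flux per interface
        u = u - dt / dx * (F[1:] - F[:-1])
        t += dt
    return u

# Right-moving shock test: u = 1 on the left, 0 on the right; the exact
# shock speed is (uL + uR)/2 = 0.5, so by t = 0.5 the front sits near x = 0.75.
x = np.linspace(0.005, 0.995, 100)   # cell centers, dx = 0.01
u0 = np.where(x < 0.5, 1.0, 0.0)
u = advance(u0, 0.01, t_end=0.5)
```

Because the update is in conservation form, the cell-average integral changes only through the boundary fluxes, which is the "conservative flux term" property the abstract emphasizes for the full two-fluid system.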
Rate dependent fractionation of sulfur isotopes in through-flowing systems
NASA Astrophysics Data System (ADS)
Giannetta, M.; Sanford, R. A.; Druhan, J. L.
2017-12-01
The fidelity of reactive transport models in quantifying microbial activity in the subsurface is often improved through the use of stable isotopes. However, the accuracy of current predictions for microbially mediated isotope fractionation within open through-flowing systems typically depends on nutrient availability. This disparity arises from the common application of a single 'effective' fractionation factor assigned to a given system, despite extensive evidence for variability in the fractionation factor between eutrophic environments and many naturally occurring, nutrient-limited environments. Here, we demonstrate a reactive transport model with the capacity to simulate a variable fractionation factor over a range of microbially mediated reduction rates and constrain the model with experimental data for nutrient-limited conditions. Two coupled isotope-specific Monod rate laws for 32S and 34S, constructed to quantify microbial sulfate reduction and predict associated S isotope partitioning, were parameterized using a series of batch reactor experiments designed to minimize microbial growth. In the current study, we implement these parameterized isotope-specific rate laws within an open, through-flowing system to predict variable fractionation with distance as a function of sulfate reduction rate. These predictions are tested through a supporting laboratory experiment consisting of a flow-through column packed with homogeneous porous media inoculated with the same species of sulfate-reducing bacteria used in the previous batch reactors, Desulfovibrio vulgaris. The collective results of the batch reactor and flow-through column experiments support a significant improvement in S isotope predictions in isotope-sensitive multi-component reactive transport models through treatment of rate-dependent fractionation.
Such an update to the model will better equip reactive transport software for isotope-informed characterization of microbial activity within energy- and nutrient-limited environments.
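A minimal sketch of paired isotope-specific Monod rate laws of the kind described above: both isotopologues share a saturation term in total sulfate, so their maximum specific rates differ by the (here constant) fractionation factor. All parameter values are illustrative assumptions, not the study's fitted constants, and microbial growth is neglected, as in the batch experiments.

```python
import numpy as np

# Illustrative parameters only; not the study's fitted values.
MU32, MU34 = 1.00, 0.96   # 1/d, maximum specific reduction rates per isotope
K = 0.1                   # mM, shared half-saturation constant
R_VCDT = 0.0441626        # 34S/32S reference ratio (VCDT)

def simulate(c32, c34, biomass=1.0, dt=1e-3, t_end=3.0):
    """Euler-integrate the paired isotope-specific Monod rate laws
    r_i = MU_i * B * C_i / (K + C_32 + C_34) and return delta-34S of the
    residual sulfate (permil) through time.  With a shared saturation
    term, the instantaneous fractionation factor is MU34/MU32."""
    deltas = []
    for _ in range(int(round(t_end / dt))):
        sat = biomass / (K + c32 + c34)
        c32 = max(c32 - dt * MU32 * sat * c32, 1e-12)
        c34 = max(c34 - dt * MU34 * sat * c34, 1e-12)
        deltas.append(((c34 / c32) / R_VCDT - 1.0) * 1e3)
    return deltas

# Start at delta-34S = 0 permil (c34/c32 equal to the reference ratio).
d = simulate(c32=10.0, c34=10.0 * R_VCDT)
# Residual sulfate grows isotopically heavier as 32S is preferentially reduced.
```

The rate-dependent fractionation treated in the paper would replace the constant MU34/MU32 offset with a functional dependence on the reduction rate; this sketch only shows the Rayleigh-style enrichment that the paired rate laws produce in a closed system.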
2014-01-01
Background Behavioral interventions such as psychotherapy are leading, evidence-based practices for a variety of problems (e.g., substance abuse), but the evaluation of provider fidelity to behavioral interventions is limited by the need for human judgment. The current study evaluated the accuracy of statistical text classification in replicating human-based judgments of provider fidelity in one specific psychotherapy—motivational interviewing (MI). Method Participants (n = 148) came from five previously conducted randomized trials and were either primary care patients at a safety-net hospital or university students. To be eligible for the original studies, participants met criteria for either problematic drug or alcohol use. All participants received a type of brief motivational interview, an evidence-based intervention for alcohol and substance use disorders. The Motivational Interviewing Skills Code is a standard measure of MI provider fidelity based on human ratings that was used to evaluate all therapy sessions. A text classification approach called a labeled topic model was used to learn associations between human-based fidelity ratings and MI session transcripts. It was then used to generate codes for new sessions. The primary comparison was the accuracy of model-based codes with human-based codes. Results Receiver operating characteristic (ROC) analyses of model-based codes showed reasonably strong sensitivity and specificity with those from human raters (range of area under ROC curve (AUC) scores: 0.62 – 0.81; average AUC: 0.72). Agreement with human raters was evaluated based on talk turns as well as code tallies for an entire session. Generated codes had higher reliability with human codes for session tallies and also varied strongly by individual code. Conclusion To scale up the evaluation of behavioral interventions, technological solutions will be required. 
The current study demonstrated preliminary, encouraging findings regarding the utility of statistical text classification in bridging this methodological gap. PMID:24758152
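The model-versus-human comparison reported above rests on the area under the ROC curve, which can be computed directly from its rank interpretation without any machine-learning library. The labels and scores below are hypothetical; the study's codes and classifier outputs are not reproduced here.

```python
def auc(labels, scores):
    """Area under the ROC curve via its rank interpretation: the probability
    that a randomly chosen positive example receives a higher score than a
    randomly chosen negative one (ties count as 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: 1 = human rater applied the fidelity code to this
# talk turn, score = model probability for the same code.
y = [1, 1, 1, 0, 0, 0, 0, 1]
s = [0.9, 0.7, 0.4, 0.5, 0.2, 0.1, 0.3, 0.6]
print(auc(y, s))  # -> 0.9375
```

An AUC of 0.5 corresponds to chance-level agreement and 1.0 to perfect ranking, which puts the study's reported range of 0.62 to 0.81 in context.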
Andersen, Simone Nyholm; Broberg, Ole
2015-11-01
Current application of work system simulation in participatory ergonomics (PE) design includes a variety of different simulation media. However, the actual influence of the media attributes on the simulation outcome has received less attention. This study investigates two simulation media: full-scale mock-ups and table-top models. The aim is to compare how the media attributes of fidelity and affordance influence ergonomics identification and evaluation in PE design of hospital work systems. The results illustrate how the full-scale mock-ups' high fidelity of room layout and affordance of tool operation support ergonomics identification and evaluation related to the work system entities space and technologies & tools. The table-top models' high fidelity of function relations and affordance of a helicopter view support ergonomics identification and evaluation related to the entity organization. Furthermore, the study addresses the form of the identified and evaluated conditions, which are either identified challenges or tangible design criteria. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Regularized maximum pure-state input-output fidelity of a quantum channel
NASA Astrophysics Data System (ADS)
Ernst, Moritz F.; Klesse, Rochus
2017-12-01
As a toy model for the capacity problem in quantum information theory, we investigate finite and asymptotic regularizations of the maximum pure-state input-output fidelity F(N) of a general quantum channel N. We show that the asymptotic regularization F̃(N) is lower bounded by the maximum output ∞-norm ν∞(N) of the channel. For N a Pauli channel, we find that the two quantities are equal.
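For intuition, the single-copy quantity F(N) can be estimated numerically for a qubit Pauli channel, where an analytic answer is available for cross-checking: the channel contracts the Bloch vector component-wise, so F(N) = (1 + max_i λ_i)/2. The error probabilities below are assumed for illustration, and this sketch addresses only the unregularized fidelity, not the asymptotic regularization studied in the paper.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_channel(rho, p):
    """N(rho) = sum_i p_i sigma_i rho sigma_i for probabilities p."""
    return sum(pi * s @ rho @ s for pi, s in zip(p, (I2, X, Y, Z)))

def max_pure_state_fidelity(p, n=120):
    """Grid-search the Bloch sphere for F(N) = max_psi <psi|N(|psi><psi|)|psi>."""
    best = 0.0
    for theta in np.linspace(0.0, np.pi, n):
        for phi in np.linspace(0.0, 2 * np.pi, n):
            psi = np.array([np.cos(theta / 2),
                            np.exp(1j * phi) * np.sin(theta / 2)])
            rho = np.outer(psi, psi.conj())
            f = float(np.real(psi.conj() @ pauli_channel(rho, p) @ psi))
            best = max(best, f)
    return best

p = [0.7, 0.2, 0.05, 0.05]  # hypothetical Pauli error probabilities
# Analytic check: the Bloch components contract by lambda_x = p0+p1-p2-p3
# (and cyclically for y, z), so F(N) = (1 + max_i lambda_i)/2.
lams = (p[0] + p[1] - p[2] - p[3],
        p[0] - p[1] + p[2] - p[3],
        p[0] - p[1] - p[2] + p[3])
analytic = (1 + max(lams)) / 2   # = 0.9 for these probabilities
```

The optimizer is maximized by the pure state aligned with the least-damped Bloch axis (here the X eigenstate), which is why a coarse spherical grid already lands very close to the analytic value.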
Modeling Fault Diagnosis Performance on a Marine Powerplant Simulator.
1985-08-01
two definitions are very similar. They emphasize that fidelity is a two-dimensional concept. They also pointed out the measurement problems. Tasks...simulator duplicates the sensory stimulation, e.g., dynamic motion cues, visual cues, etc. Psychological fidelity is simply the degree to which the trainee...functions is only acceptable if the performance is paced by the system, i.e., cues from the system serve to initiate elementary, skilled sub-routines
Imperfect construction of microclusters
NASA Astrophysics Data System (ADS)
Schneider, E.; Zhou, K.; Gilbert, G.; Weinstein, Y. S.
2014-01-01
Microclusters are the basic building blocks used to construct cluster states capable of supporting fault-tolerant quantum computation. In this paper, we explore the consequences of errors on microcluster construction using two error models. To quantify the effect of the errors we calculate the fidelity of the constructed microclusters and the fidelity with which two such microclusters can be fused together. Such simulations are vital for gauging the capability of an experimental system to achieve fault tolerance.
Goldberg, Robert; Colt, Henri G; Davoudi, Mohsen; Cherrison, Larry
2009-09-01
Transbronchial needle aspiration (TBNA) is used to sample mediastinal abnormalities and lymph node stations for diagnostic purposes and lung cancer staging. The procedure is underused, operator dependent, and reputed to have a steep learning curve. Other difficulties arise from a bronchoscopist's failure to insert the needle satisfactorily into the target node. The purpose of this study was to evaluate the realism and helpfulness of a low-fidelity, easily constructed hybrid model used for learning and practicing TBNA. The model is constructed by attaching a porcine tracheobronchial tree to a Laerdal Airway Model mounted on polyvinyl chloride (PVC) piping. Twelve individuals with various levels of bronchoscopy training and experience were given a 15-min introductory PowerPoint presentation on TBNA strategy and planning, execution, and response to complications. Participants then practiced TBNA alone and with guidance, aided by an assistant, as many times as individually necessary to feel comfortable with the procedure. An eight-item, five-point Likert-scale questionnaire was then completed. Participants were unanimously positive about their experience (mean scores 4.25-4.91). The model was realistic, provided increased comfort with TBNA techniques, and allowed practice of communication skills. This realistic, affordable, and easily constructed hybrid low-fidelity airway model allows beginner and experienced bronchoscopists opportunities to learn and practice basic TBNA techniques and team communication skills without placing patients at risk.
Resolution dependence of precipitation statistical fidelity in hindcast simulations
O'Brien, Travis A.; Collins, William D.; Kashinath, Karthik; ...
2016-06-19
This article is a U.S. Government work and is in the public domain in the USA. Numerous studies have shown that atmospheric models with high horizontal resolution better represent the physics and statistics of precipitation in climate models. While it is abundantly clear from these studies that high resolution increases the rate of extreme precipitation, it is not clear whether these added extreme events are “realistic”; whether they occur in simulations in response to the same forcings that drive similar events in reality. In order to understand whether increasing horizontal resolution results in improved model fidelity, a hindcast-based, multiresolution experimental design has been conceived and implemented: the InitiaLIzed-ensemble, Analyze, and Develop (ILIAD) framework. The ILIAD framework allows direct comparison between observed and simulated weather events across multiple resolutions and assessment of the degree to which increased resolution improves the fidelity of extremes. Analysis of 5 years of daily 5 day hindcasts with the Community Earth System Model at horizontal resolutions of 220, 110, and 28 km shows that: (1) these hindcasts reproduce the resolution-dependent increase of extreme precipitation that has been identified in longer-duration simulations, (2) the correspondence between simulated and observed extreme precipitation improves as resolution increases; and (3) this increase in extremes and precipitation fidelity comes entirely from resolved-scale precipitation. Evidence is presented that this resolution-dependent increase in precipitation intensity can be explained by the theory of Rauscher et al., which states that precipitation intensifies at high resolution due to an interaction between the emergent scaling (spectral) properties of the wind field and the constraint of fluid continuity.
High-Fidelity Simulation-Driven Model Development for Coarse-Grained Computational Fluid Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.
Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase convective flow problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide high-fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulation of long transient scenarios in nuclear accidents, despite extraordinary advances in high-performance scientific computing over the past decades. The major issue is the inability to parallelize the transient computation, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high-fidelity simulation-driven approach to model sub-grid scale (SGS) effects in coarse-grained computational fluid dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation proves to achieve a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.
Parallel methodology to capture cyclic variability in motored engines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei
2016-07-28
Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales, and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well. Adding perturbations to the initial pressure field and the boundary pressure further improves the predictions. It is shown that this new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of the time required by the conventional approach of simulating consecutive engine cycles.
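The cycle-dissociation strategy above hinges on generating statistically consistent initial perturbations for the parallel runs. A minimal sketch of that idea in Python follows; the function name, field shapes, and the Gaussian form of the perturbation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def perturbed_initial_fields(u_mean, turb_intensity, n_cycles, seed=0):
    """One perturbed initial velocity field per parallel single-cycle run.

    Each field adds zero-mean Gaussian noise whose RMS is scaled by the
    turbulence intensity, so the parallel cycles sample plausible
    cycle-to-cycle variation of the in-cylinder flow.
    """
    rng = np.random.default_rng(seed)
    rms = turb_intensity * np.abs(u_mean)   # fluctuation level per point
    return [u_mean + rng.normal(size=u_mean.shape) * rms
            for _ in range(n_cycles)]

# Example: a uniform 16^3 mean field, 10% intensity, 8 parallel "cycles"
u_mean = np.full((16, 16, 16), 5.0)
cycles = perturbed_initial_fields(u_mean, 0.10, 8)
```

Each member of `cycles` would then seed an independent single-cycle LES, so the ensemble statistics stand in for statistics over consecutive cycles.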
NASA Astrophysics Data System (ADS)
Doss, Derek J.; Heiselman, Jon S.; Collins, Jarrod A.; Weis, Jared A.; Clements, Logan W.; Geevarghese, Sunil K.; Miga, Michael I.
2017-03-01
Sparse surface digitization with an optically tracked stylus for use in an organ surface-based image-to-physical registration is an established approach for image-guided open liver surgery procedures. However, variability in sparse data collections during open hepatic procedures can produce disparity in registration alignments. In part, this variability arises from inconsistencies with the patterns and fidelity of collected intraoperative data. The liver lacks distinct landmarks and experiences considerable soft tissue deformation. Furthermore, data coverage of the organ is often incomplete or unevenly distributed. While more robust feature-based registration methodologies have been developed for image-guided liver surgery, it is still unclear how variation in sparse intraoperative data affects registration. In this work, we have developed an application to allow surgeons to study the performance of surface digitization patterns on registration. Given the intrinsic nature of soft-tissue, we incorporate realistic organ deformation when assessing fidelity of a rigid registration methodology. We report the construction of our application and preliminary registration results using four participants. Our preliminary results indicate that registration quality improves as users acquire more experience selecting patterns of sparse intraoperative surface data.
The role of visual attention in predicting driving impairment in older adults.
Hoffman, Lesa; McDowd, Joan M; Atchley, Paul; Dubinsky, Richard
2005-12-01
This study evaluated the role of visual attention (as measured by the DriverScan change detection task and the Useful Field of View Test [UFOV]) in the prediction of driving impairment in 155 adults between the ages of 63 and 87. In contrast to previous research, participants were not oversampled for visual impairment or history of automobile accidents. Although a history of automobile accidents within the past 3 years could not be predicted using any variable, driving performance in a low-fidelity simulator could be significantly predicted by performance in the change detection task and by the divided and selective attention subtests of the UFOV in structural equation models. The sensitivity and specificity of each measure in identifying at-risk drivers were also evaluated with receiver operating characteristic curves.
Finite-time quantum entanglement in propagating squeezed microwaves.
Fedorov, K G; Pogorzalek, S; Las Heras, U; Sanz, M; Yard, P; Eder, P; Fischer, M; Goetz, J; Xie, E; Inomata, K; Nakamura, Y; Di Candia, R; Solano, E; Marx, A; Deppe, F; Gross, R
2018-04-23
Two-mode squeezing is a fascinating example of quantum entanglement manifested in cross-correlations of non-commuting observables between two subsystems. At the same time, these subsystems themselves may contain no quantum signatures in their self-correlations. These properties make two-mode squeezed (TMS) states an ideal resource for applications in quantum communication. Here, we generate propagating microwave TMS states by a beam splitter distributing single-mode squeezing emitted from distinct Josephson parametric amplifiers along two output paths. We experimentally study the fundamental dephasing process of quantum cross-correlations in continuous-variable propagating TMS microwave states and accurately describe it with a theory model. In this way, we gain insight into finite-time entanglement limits and predict high fidelities for benchmark quantum communication protocols such as remote state preparation and quantum teleportation.
Multi-criteria optimization of chassis parameters of Nissan 200 SX for drifting competitions
NASA Astrophysics Data System (ADS)
Maniowski, M.
2016-09-01
The work objective is to increase performance of Nissan 200sx S13 prepared for a quasi-static state of drifting on a circular path with given constant radius (R=15 m) and tyre-road friction coefficient (μ = 0.9). First, a high fidelity “miMA” multibody model of the vehicle is formulated. Then, a multicriteria optimization problem is solved with one of the goals to maximize a stable drift angle (β) of the vehicle. The decision variables contain 11 parameters of the vehicle chassis (describing the wheel suspension stiffness and geometry) and 2 parameters responsible for a driver steering and accelerator actions, that control this extreme closed-loop manoeuvre. The optimized chassis setup results in the drift angle increase by 14% from 35 to 40 deg.
High-Fidelity Simulation in Biomedical and Aerospace Engineering
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2005-01-01
Contents include the following: Introduction / Background. Modeling and Simulation Challenges in Aerospace Engineering. Modeling and Simulation Challenges in Biomedical Engineering. Digital Astronaut. Project Columbia. Summary and Discussion.
Towards Bridging the Gaps in Holistic Transition Prediction via Numerical Simulations
NASA Technical Reports Server (NTRS)
Choudhari, Meelan M.; Li, Fei; Duan, Lian; Chang, Chau-Lyan; Carpenter, Mark H.; Streett, Craig L.; Malik, Mujeeb R.
2013-01-01
The economic and environmental benefits of laminar flow technology via reduced fuel burn of subsonic and supersonic aircraft cannot be realized without minimizing the uncertainty in drag prediction in general and transition prediction in particular. Transition research under NASA's Aeronautical Sciences Project seeks to develop a validated set of variable fidelity prediction tools with known strengths and limitations, so as to enable "sufficiently" accurate transition prediction and practical transition control for future vehicle concepts. This paper provides a summary of selected research activities targeting the current gaps in high-fidelity transition prediction, specifically those related to the receptivity and laminar breakdown phases of crossflow induced transition in a subsonic swept-wing boundary layer. The results of direct numerical simulations are used to obtain an enhanced understanding of the laminar breakdown region as well as to validate reduced order prediction methods.
The effect of model fidelity on prediction of char burnout for single-particle coal combustion
McConnell, Josh; Sutherland, James C.
2016-07-09
Practical simulation of industrial-scale coal combustion relies on the ability to accurately capture the dynamics of coal subprocesses while also ensuring that the computational cost remains reasonable. The majority of the residence time occurs post-devolatilization, so it is of great importance that a balance between the computational efficiency and accuracy of char combustion models is carefully considered. In this work, we consider the importance of model fidelity during char combustion by comparing combinations of simple and complex gas- and particle-phase chemistry models. Detailed kinetics based on the GRI 3.0 mechanism and infinitely fast chemistry are considered in the gas phase. The Char Conversion Kinetics model and the nth-order Langmuir–Hinshelwood model are considered for char consumption. For devolatilization, the Chemical Percolation and Devolatilization and Kobayashi-Sarofim models are employed. The relative importance of gasification versus oxidation reactions in air and oxyfuel environments is also examined for various coal types. Results are compared to previously published experimental data collected under laminar, single-particle conditions. Calculated particle temperature histories are strongly dependent on the choice of gas-phase and char chemistry models, but only weakly dependent on the chosen devolatilization model. Particle mass calculations were found to be very sensitive to the choice of devolatilization model, but only somewhat sensitive to the choice of gas chemistry and char chemistry models. High-fidelity models for devolatilization generally resulted in particle temperature and mass calculations that were closer to experimentally observed values.
Rank-sparsity constrained atlas construction and phenotyping
NASA Astrophysics Data System (ADS)
Clark, D. P.; Badea, C. T.
2015-03-01
Atlas construction is of great interest in the medical imaging community as a tool to visually and quantitatively characterize anatomic variability within a population. Because such atlases generally exhibit superior data fidelity relative to the individual data sets from which they are constructed, they have also proven invaluable in numerous informatics applications such as automated segmentation and classification, regularization of individual-specific reconstructions from undersampled data, and for characterizing physiologically relevant functional metrics. Perhaps the most valuable role of an anatomic atlas is not to define what is "normal," but, in fact, to recognize what is "abnormal." Here, we propose and demonstrate a novel anatomic atlas construction strategy that simultaneously recovers the average anatomy and the deviation from average in a visually meaningful way. The proposed approach treats the problem of atlas construction within the context of robust principal component analysis (RPCA) in which the redundant portion of the data (i.e. the low rank atlas) is separated from the spatially and gradient sparse portion of the data unique to each individual (i.e. the sparse variation). In this paper, we demonstrate the application of RPCA to the Shepp-Logan phantom, including several forms of variability encountered with in vivo data: population variability, class variability, contrast variability, and individual variability. We then present preliminary results produced by applying the proposed approach to in vivo, murine cardiac micro-CT data acquired in a model of right ventricle hypertrophy induced by pulmonary arteriole hypertension.
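The low-rank-plus-sparse split that underlies this atlas construction can be sketched with a standard inexact augmented Lagrangian solver for principal component pursuit; the parameter schedule below follows common RPCA practice and is an illustrative assumption, not the authors' exact algorithm (which additionally enforces gradient sparsity):

```python
import numpy as np

def rpca(M, lam=None, n_iter=100):
    """Decompose M = L + S (low-rank "atlas" + sparse "variation") via an
    inexact augmented Lagrangian method for principal component pursuit."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(M, 2)
    mu, rho = 1.25 / norm2, 1.5          # common IALM penalty schedule
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = M / norm2                        # dual variable initialization
    for _ in range(n_iter):
        # Singular-value thresholding gives the low-rank (atlas) part
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Soft thresholding gives the sparse (individual variation) part
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)
        mu = min(mu * rho, 1e7 * 1.25 / norm2)
    return L, S
```

In the atlas setting, each column of `M` would hold one (registered, vectorized) individual data set, so `L` captures the shared anatomy and `S` the individual deviations.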
Fredman, Tamar; Whiten, Andrew
2008-04-01
Studies of wild capuchins suggest an important role for social learning, but experiments with captive subjects have generally not supported this. Here we report social learning in two quite different populations of capuchin monkeys (Cebus apella). In experiment 1, human-raised monkeys observed a familiar human model open a foraging box using a tool in one of two alternative ways: levering versus poking. In experiment 2, mother-raised monkeys viewed similar techniques demonstrated by monkey models. A control group in each population saw no model. In both experiments, independent coders detected which technique experimental subjects had seen, thus confirming social learning. Further analyses examined fidelity of copying at three levels of resolution. The human-raised monkeys exhibited fidelity at the highest level, the specific tool use technique witnessed. The lever technique was seen only in monkeys exposed to a levering model, by contrast with controls and those witnessing poke. Mother-reared monkeys instead typically ignored the tool and exhibited fidelity at a lower level, tending only to re-create whichever result the model had achieved by either levering or poking. Nevertheless this level of social learning was associated with significantly greater levels of success in monkeys witnessing a model than in controls, an effect absent in the human-reared population. Results in both populations are consistent with a process of canalization of the repertoire in the direction of the approach witnessed, producing a narrower, socially shaped behavioural profile than among controls who saw no model.
NASA Astrophysics Data System (ADS)
Modgil, Girish A.
Gas turbine engines for aerospace applications have evolved dramatically over the last 50 years through the constant pursuit for better specific fuel consumption, higher thrust-to-weight ratio, lower noise and emissions all while maintaining reliability and affordability. An important step in enabling these improvements is a forced response aeromechanics analysis involving structural dynamics and aerodynamics of the turbine. It is well documented that forced response vibration is a very critical problem in aircraft engine design, causing High Cycle Fatigue (HCF). Pushing the envelope on engine design has led to increased forced response problems and subsequently an increased risk of HCF failure. Forced response analysis is used to assess design feasibility of turbine blades for HCF using a material limit boundary set by the Goodman Diagram envelope that combines the effects of steady and vibratory stresses. Forced response analysis is computationally expensive, time consuming and requires multi-domain experts to finalize a result. As a consequence, high-fidelity aeromechanics analysis is performed deterministically and is usually done at the end of the blade design process when it is very costly to make significant changes to geometry or aerodynamic design. To address uncertainties in the system (engine operating point, temperature distribution, mistuning, etc.) and variability in material properties, designers apply conservative safety factors in the traditional deterministic approach, which leads to bulky designs. Moreover, using a deterministic approach does not provide a calculated risk of HCF failure. This thesis describes a process that begins with the optimal aerodynamic design of a turbomachinery blade developed using surrogate models of high-fidelity analyses. The resulting optimal blade undergoes probabilistic evaluation to generate aeromechanics results that provide a calculated likelihood of failure from HCF. 
An existing Rolls-Royce High Work Single Stage (HWSS) turbine blisk provides a baseline to demonstrate the process. The generalized polynomial chaos (gPC) toolbox which was developed includes sampling methods and constructs polynomial approximations. The toolbox provides not only the means for uncertainty quantification of the final blade design, but also facilitates construction of the surrogate models used for the blade optimization. This paper shows that gPC, with a small number of samples, achieves very fast rates of convergence and high accuracy in describing probability distributions without loss of detail in the tails. First, an optimization problem maximizes stage efficiency using turbine aerodynamic design rules as constraints; the function evaluations for this optimization are surrogate models from detailed 3D steady Computational Fluid Dynamics (CFD) analyses. The resulting optimal shape provides a starting point for the 3D high-fidelity aeromechanics (unsteady CFD and 3D Finite Element Analysis (FEA)) UQ study assuming three uncertain input parameters. This investigation seeks to find the steady and vibratory stresses associated with the first torsion mode for the HWSS turbine blisk near maximum operating speed of the engine. Using gPC to provide uncertainty estimates of the steady and vibratory stresses enables the creation of a Probabilistic Goodman Diagram, which, to the authors' best knowledge, is the first of its kind using high fidelity aeromechanics for turbomachinery blades. The Probabilistic Goodman Diagram enables turbine blade designers to make more informed design decisions and it allows the aeromechanics expert to assess quantitatively the risk associated with HCF for any mode crossing based on high fidelity simulations.
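The fast convergence of gPC coefficients, and the recovery of output statistics directly from them, can be illustrated in one dimension with probabilists' Hermite polynomials. This regression-based sketch is a generic gPC construction under a standard Gaussian input, not the Rolls-Royce toolbox itself:

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def gpc_surrogate(f, order=6, n_samples=200, seed=1):
    """Least-squares gPC surrogate of f(xi) for xi ~ N(0, 1), expanded in
    probabilists' Hermite polynomials He_k (orthogonal under the standard
    Gaussian weight, with E[He_k^2] = k!)."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=n_samples)
    V = He.hermevander(xi, order)        # columns: He_0(xi) ... He_order(xi)
    coef, *_ = np.linalg.lstsq(V, f(xi), rcond=None)
    # Orthogonality yields the statistics directly from the coefficients:
    mean = coef[0]
    var = sum(coef[k] ** 2 * factorial(k) for k in range(1, order + 1))
    return coef, mean, var

# f(xi) = 2 + 3*xi + xi^2 has exact mean 3 and variance 11 for xi ~ N(0,1)
coef, mean, var = gpc_surrogate(lambda x: 2 + 3 * x + x ** 2)
```

The same coefficient-to-statistics mapping is what lets a gPC surrogate of the vibratory stress feed a probabilistic Goodman-type assessment without repeated high-fidelity runs.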
The chicken foot digital replant training model.
Athanassopoulos, Thanassi; Loh, Charles Yuen Yung
2015-01-01
A simple, readily available digital replantation model in the chicken foot is described. This high fidelity model will hopefully allow trainees in hand surgery to gain further experience in replant surgery prior to clinical application.
Wilcox, Sara; Parra-Medina, Deborah; Felton, Gwen M.; Poston, Mary Elizabeth; McClain, Amanda
2011-01-01
Background Primary care providers are expected to provide lifestyle counseling, yet many barriers exist. Few studies report on adoption and implementation in routine practice. This study reports training, adoption, and implementation of an intervention to promote physical activity (PA) and dietary counseling in community health centers. Methods Providers (n = 30) and nurses (n = 28) from 9 clinics were invited to participate. Adopters completed CD-ROM training in stage-matched, patient-centered counseling and goal setting. Encounters were audio recorded. A subsample was coded for fidelity. Results Fifty-seven percent of providers and nurses adopted the program. Provider counseling was seen in 66% and nurse goal setting in 58% of participant (N = 266) encounters, although audio recordings were lower. Duration of provider counseling and nurse goal setting was 4.9 ± 4.5 and 7.3 ± 3.8 minutes, respectively. Most PA (80%) and diet (94%) goals were stage-appropriate. Although most providers discussed at least 1 behavioral topic, some topics (eg, self-efficacy, social support) were rarely covered. Conclusions A sizeable percentage of providers and nurses completed training, rated it favorably, and delivered lifestyle counseling, although with variable fidelity. With low implementation cost and limited office time required, this model has the potential to be disseminated to improve counseling rates in primary care. PMID:20864755
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training, experiment 2. In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable, performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
Steigerwald, Sarah N.; Park, Jason; Hardy, Krista M.; Gillman, Lawrence; Vergis, Ashley S.
2015-01-01
Background Considerable resources have been invested in both low- and high-fidelity simulators in surgical training. The purpose of this study was to investigate if the Fundamentals of Laparoscopic Surgery (FLS, low-fidelity box trainer) and LapVR (high-fidelity virtual reality) training systems correlate with operative performance on the Global Operative Assessment of Laparoscopic Skills (GOALS) global rating scale using a porcine cholecystectomy model in a novice surgical group with minimal laparoscopic experience. Methods Fourteen postgraduate year 1 surgical residents with minimal laparoscopic experience performed tasks from the FLS program and the LapVR simulator as well as a live porcine laparoscopic cholecystectomy. Performance was evaluated using standardized FLS metrics, automatic computer evaluations, and a validated global rating scale. Results Overall, FLS score did not show an association with GOALS global rating scale score on the porcine cholecystectomy. None of the five LapVR task scores were significantly associated with GOALS score on the porcine cholecystectomy. Conclusions Neither the low-fidelity box trainer nor the high-fidelity virtual simulator demonstrated significant correlation with GOALS operative scores. These findings offer caution against the use of these modalities for brief assessments of novice surgical trainees, especially for predictive or selection purposes. PMID:26641071
Low Fidelity Simulation of a Zero-G Robot
NASA Technical Reports Server (NTRS)
Sweet, Adam
2001-01-01
The item to be cleared is a low-fidelity software simulation model of a hypothetical freeflying robot designed for use in zero gravity environments. This simulation model works with the HCC simulation system that was developed by Xerox PARC and NASA Ames Research Center. HCC has been previously cleared for distribution. When used with the HCC software, the model computes the location and orientation of the simulated robot over time. Failures (such as a broken motor) can be injected into the simulation to produce simulated behavior corresponding to the failure. Release of this simulation will allow researchers to test their software diagnosis systems by attempting to diagnose the simulated failure from the simulated behavior. This model does not contain any encryption software nor can it perform any control tasks that might be export controlled.
Östlund, Ulrika; Bäckström, Britt; Lindh, Viveca; Sundin, Karin; Saveman, Britt-Inger
2015-09-01
A family systems nursing intervention, Family Health Conversation, has been developed in Sweden by adapting the Calgary Family Assessment and Intervention Models and the Illness Beliefs Model. The intervention rests on several theoretical assumptions, and one way to translate the theory into practice is to identify core components; this may produce higher levels of fidelity to the intervention. Besides information about how to implement an intervention in accordance with how it was developed, it is important to evaluate whether it was actually implemented as intended. Accordingly, we describe the nurses' fidelity to the identified core components of Family Health Conversation. Six nurses, working in alternating pairs, conducted Family Health Conversations with seven families in which a family member younger than 65 had suffered a stroke. The intervention comprised a series of three 1-hour conversations held at 2-3 week intervals. The nurses followed a conversation structure based on 12 core components identified from theoretical assumptions. The transcripts of the 21 conversations were analysed using manifest qualitative content analysis with a deductive approach. The core components seemed to be useful even if the nurses' fidelity varied among them. Some components were followed relatively well, but others were not. This indicates that the process for achieving fidelity to the intervention can be improved, and that it is necessary for nurses to continually learn theory and to practise family systems nursing. We suggest this can be accomplished through reflections, role play and training on the core components. Furthermore, as in this study, joint reflections on how the core components have been implemented can lead to deeper understanding and knowledge of how Family Health Conversation can be delivered as intended. © 2014 Nordic College of Caring Science.
NASA Astrophysics Data System (ADS)
Qi, Di
Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, which are built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, are designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. 
Most importantly, empirical information theory and statistical linear response theory are applied in the training phase for calibrating model errors to achieve optimal imperfect model parameters; and total statistical energy dynamics are introduced to improve the model sensitivity in the prediction phase especially when strong external perturbations are exerted. The validity of reduced-order models for predicting statistical responses and intermittency is demonstrated on a series of instructive models with increasing complexity, including the stochastic triad model, the Lorenz '96 model, and models for barotropic and baroclinic turbulence. The skillful low-order modeling methods developed here should also be useful for other applications such as efficient algorithms for data assimilation.
Mars, Tom; Ellard, David; Carnes, Dawn; Homer, Kate; Underwood, Martin; Taylor, Stephanie J C
2013-01-01
Objectives The aim of this study was to (1) demonstrate the development and testing of tools and procedures designed to monitor and assess the integrity of a complex intervention for chronic pain (COping with persistent Pain, Effectiveness Research into Self-management (COPERS) course); and (2) make recommendations based on our experiences. Design Fidelity assessment of a two-arm randomised controlled trial intervention, assessing the adherence and competence of the facilitators delivering the intervention. Setting The intervention was delivered in the community in two centres in the UK: one inner city and one a mix of rural and urban locations. Participants 403 people with chronic musculoskeletal pain were enrolled in the intervention arm and 300 attended the self-management course. Thirty lay and healthcare professionals were trained and 24 delivered the courses (2 per course). We ran 31 courses for up to 16 people per course and all were audio recorded. Interventions The course was run over three and a half days; facilitators delivered a semistructured manualised course. Outcomes We designed three measures to evaluate fidelity assessing adherence to the manual, competence and overall impression. Results We evaluated a random sample of four components from each course (n=122). The evaluation forms were reliable and had good face validity. There were high levels of adherence in the delivery: overall adherence was two (maximum 2, IQR 1.67–2.00), facilitator competence exhibited more variability, and overall competence was 1.5 (maximum 2, IQR 1.25–2.00). Overall impression was three (maximum 4, IQR 2.00–3.00). Conclusions Monitoring and assessing adherence and competence at the point of intervention delivery can be realised most efficiently by embedding the principles of fidelity measurement within the design stage of complex interventions and the training and assessment of those delivering the intervention. 
More work is necessary to ensure that more robust systems of fidelity evaluation accompany the growth of complex interventions. Trial Registration ISRCTN No ISRCTN24426731. PMID:24240140
NASA Technical Reports Server (NTRS)
Taylor, Patrick C.; Baker, Noel C.
2015-01-01
Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distribution of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are, therefore, required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaptation planning studies. These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equal-weighted average: one model, one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity, influencing the projection of individual climate variables differently. Presented here is a new approach, termed the "Intelligent Ensemble," that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., the precipitation probability distribution. This approach provides added value over the equal-weighted average method. Physical process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.
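The contrast between one-model-one-vote averaging and skill-based weighting is simple to state in code. The sketch below assumes scalar projections and nonnegative skill scores; the function name and the numbers are illustrative, not values from the study:

```python
import numpy as np

def intelligent_ensemble(projections, skill_scores):
    """Skill-weighted multi-model mean: each model's projection is weighted
    by its (nonnegative) process-fidelity score instead of the conventional
    one-model-one-vote average."""
    w = np.asarray(skill_scores, dtype=float)
    w = w / w.sum()                      # normalize weights to sum to 1
    return float(np.dot(w, np.asarray(projections, dtype=float)))

# Hypothetical numbers: three models' projected warming (deg C) and their
# skill at reproducing, e.g., the observed precipitation distribution.
proj = [2.1, 3.0, 2.4]
equal_vote = float(np.mean(proj))                       # 2.5
weighted = intelligent_ensemble(proj, [0.9, 0.3, 0.8])  # 2.355
```

Down-weighting the model with poor process fidelity shifts the projection toward the more credible members, which is the added value the method claims over the equal-weighted mean.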
Videometric Applications in Wind Tunnels
NASA Technical Reports Server (NTRS)
Burner, A. W.; Radeztsky, R. H.; Liu, Tian-Shu
1997-01-01
Videometric measurements in wind tunnels can be very challenging due to the limited optical access, model dynamics, optical path variability during testing, large range of temperature and pressure, hostile environment, and the requirements for high productivity and large amounts of data on a daily basis. Other complications for wind tunnel testing include the model support mechanism and stringent surface finish requirements for the models in order to maintain aerodynamic fidelity. For these reasons nontraditional photogrammetric techniques and procedures sometimes must be employed. In this paper several such applications are discussed for wind tunnels which include test conditions with Mach number from low speed to hypersonic, pressures from less than an atmosphere to nearly seven atmospheres, and temperatures from cryogenic to above room temperature. Several of the wind tunnel facilities are continuous flow while one is a short duration blowdown facility. Videometric techniques and calibration procedures developed to measure angle of attack, the change in wing twist and bending induced by aerodynamic load, and the effects of varying model injection rates are described. Some advantages and disadvantages of these techniques are given and comparisons are made with non-optical and more traditional video photogrammetric techniques.
Multidisciplinary design and optimization (MDO) methodology for the aircraft conceptual design
NASA Astrophysics Data System (ADS)
Iqbal, Liaquat Ullah
An integrated design and optimization methodology has been developed for the conceptual design of an aircraft. The methodology brings higher fidelity Computer Aided Design, Engineering and Manufacturing (CAD, CAE and CAM) Tools such as CATIA, FLUENT, ANSYS and SURFCAM into the conceptual design by utilizing Excel as the integrator and controller. The approach is demonstrated to integrate with many of the existing low to medium fidelity codes such as the aerodynamic panel code called CMARC and sizing and constraint analysis codes, thus providing multi-fidelity capabilities to the aircraft designer. The higher fidelity design information from the CAD and CAE tools for the geometry, aerodynamics, structural and environmental performance is provided for the application of structured design methods such as the Quality Function Deployment (QFD) and Pugh's Method. The higher fidelity tools bring the quantitative aspects of a design such as precise measurements of weight, volume, surface areas, center of gravity (CG) location, lift-to-drag ratio, and structural weight, as well as the qualitative aspects such as external geometry definition, internal layout, and coloring scheme early in the design process. The performance and safety risks involved with new technologies can be reduced by modeling and assessing their impact more accurately on the performance of the aircraft. The methodology also enables the design and evaluation of novel concepts such as the blended wing body (BWB) and the hybrid wing body (HWB). Higher fidelity computational fluid dynamics (CFD) and finite element analysis (FEA) allow verification of the claims for the performance gains in aerodynamics and ascertain risks of structural failure due to the different pressure distribution in the fuselage as compared with the tube-and-wing design. The higher fidelity aerodynamics and structural models can lead to better cost estimates that help reduce the financial risks as well.
This helps in achieving better designs with reduced risk in less time and at lower cost. The approach is shown to eliminate the traditional boundary between the conceptual and the preliminary design stages, combining the two into one consolidated preliminary design phase. Several examples for the validation and utilization of the Multidisciplinary Design and Optimization (MDO) Tool are presented using missions for Medium and High Altitude Long Range/Endurance Unmanned Aerial Vehicles (UAVs).
Multi-Disciplinary, Multi-Fidelity Discrete Data Transfer Using Degenerate Geometry Forms
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
In a typical multi-fidelity design process, different levels of geometric abstraction are used for different analysis methods, and transitioning from one phase of design to the next often requires a complete re-creation of the geometry. To maintain consistency between lower-order and higher-order analysis results, Vehicle Sketch Pad (OpenVSP) recently introduced the ability to generate and export several degenerate forms of the geometry, representing the type of abstraction required to perform low- to medium-order analysis for a range of aeronautical disciplines. In this research, the functionality of these degenerate models was extended, so that in addition to serving as repositories for the geometric information that is required as input to an analysis, the degenerate models can also store the results of that analysis mapped back onto the geometric nodes. At the same time, the results are also mapped indirectly onto the nodes of lower-order degenerate models using a process called aggregation, and onto higher-order models using a process called disaggregation. The mapped analysis results are available for use by any subsequent analysis in an integrated design and analysis process. A simple multi-fidelity analysis process for a single-aisle subsonic transport aircraft is used as an example case to demonstrate the value of the approach.
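The "aggregation" step described above (mapping analysis results from a higher-order model onto the nodes of a lower-order degenerate model) can be sketched generically as an area-weighted average. This is not OpenVSP's actual API; the function name, the node-map representation, and the data are illustrative assumptions about one plausible way such a mapping works.

```python
import numpy as np

# Illustrative sketch (not OpenVSP's actual interface) of "aggregation":
# nodal results on a higher-order surface model are mapped onto a
# lower-order degenerate model by area-weighted averaging.
def aggregate(fine_values, fine_areas, node_map, n_coarse):
    """node_map[i] gives the coarse node that fine node i collapses onto."""
    num = np.zeros(n_coarse)
    den = np.zeros(n_coarse)
    for v, a, j in zip(fine_values, fine_areas, node_map):
        num[j] += a * v   # accumulate area-weighted result
        den[j] += a       # accumulate total area per coarse node
    return num / den      # area-weighted average at each coarse node

# Four fine-surface pressure values collapse onto two coarse nodes;
# unequal panel areas weight the averages accordingly.
cp = aggregate(fine_values=[1.0, 3.0, 2.0, 4.0],
               fine_areas=[1.0, 1.0, 2.0, 2.0],
               node_map=[0, 0, 1, 1], n_coarse=2)
print(cp)  # node 0: (1+3)/2 = 2.0; node 1: (2*2 + 2*4)/4 = 3.0
```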
Numerics and subgrid-scale modeling in large eddy simulations of stratocumulus clouds.
Pressel, Kyle G; Mishra, Siddhartha; Schneider, Tapio; Kaul, Colleen M; Tan, Zhihong
2017-06-01
Stratocumulus clouds are the most common type of boundary layer cloud; their radiative effects strongly modulate climate. Large eddy simulations (LES) of stratocumulus clouds often struggle to maintain fidelity to observations because of the sharp gradients occurring at the entrainment interfacial layer at the cloud top. The challenge posed to LES by stratocumulus clouds is evident in the wide range of solutions found in the LES intercomparison based on the DYCOMS-II field campaign, where simulated liquid water paths for identical initial and boundary conditions varied by a factor of nearly 12. Here we revisit the DYCOMS-II RF01 case and show that the wide range of previous LES results can be realized in a single LES code by varying only the numerical treatment of the equations of motion and the nature of subgrid-scale (SGS) closures. The simulations that maintain the greatest fidelity to DYCOMS-II observations are identified. The results show that using weighted essentially non-oscillatory (WENO) numerics for all resolved advective terms and no explicit SGS closure consistently produces the highest-fidelity simulations. This suggests that the numerical dissipation inherent in WENO schemes functions as a high-quality, implicit SGS closure for this stratocumulus case. Conversely, using oscillatory centered difference numerical schemes for momentum advection, WENO numerics for scalars, and explicitly modeled SGS fluxes consistently produces the lowest-fidelity simulations. We attribute this to the production of anomalously large SGS fluxes near the cloud tops through the interaction of numerical error in the momentum field with the scalar SGS model.
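The WENO numerics whose built-in dissipation the abstract identifies as an implicit SGS closure can be illustrated with the classic fifth-order Jiang-Shu reconstruction. This is a minimal textbook sketch, not the LES code's implementation: the nonlinear weights damp contributions from stencils crossing sharp gradients (such as the cloud-top inversion), which is the source of the scheme's selective numerical dissipation.

```python
# Minimal sketch of fifth-order WENO (Jiang & Shu) reconstruction of the
# left-biased interface value v_{i+1/2} from five cell averages. The
# solution-adaptive nonlinear weights supply the selective dissipation
# that the study finds can act as an implicit SGS closure.
def weno5(v, eps=1e-6):
    vm2, vm1, v0, vp1, vp2 = v
    # Three third-order candidate stencils
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Smoothness indicators: large where a stencil crosses a sharp gradient
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights steer the blend away from non-smooth stencils
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*p0 + a1*p1 + a2*p2) / s

print(weno5([1.0, 2.0, 3.0, 4.0, 5.0]))  # smooth linear data -> 3.5
```

On smooth data all three candidate stencils agree and the scheme is effectively non-dissipative; near a discontinuity the weights collapse onto the smoothest stencil, introducing the upwind-biased dissipation discussed above.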
Fidelity to the Cognitive Processing Therapy Protocol: Evaluation of Critical Elements.
Farmer, Courtney C; Mitchell, Karen S; Parker-Guilbert, Kelly; Galovski, Tara E
2017-03-01
The contributions of individual therapy elements to the overall efficacy of evidence-based practices for the treatment of posttraumatic stress disorder (PTSD) are not well-understood. This study first examined the extent to which theoretically important treatment components of Cognitive Processing Therapy (CPT; i.e., skill in Socratic questioning; prioritizing assimilation; attention to practice assignments; emphasis on expression of natural affect) were successfully administered across the course of therapy for 68 PTSD-positive survivors of interpersonal trauma. Therapist fidelity in the administration of these four elements was evaluated in 533 taped CPT sessions of study participants included in one of two randomized controlled CPT treatment trials. Second, we examined therapist fidelity to these components as a predictor of session-to-session PTSD and depression symptom change. Third, follow-up analyses examined the influence of high therapist competence for these four components across an entire course of therapy on symptom change from pre- to posttreatment. Results showed consistently high adherence and more variable competence for these four treatment components. There were no significant effects of therapist fidelity on session-to-session symptom change. However, results showed that overall high therapist competence for "skill in Socratic questioning" and "prioritizing assimilation before overaccommodation" were related to greater client improvement in PTSD severity, but "attention to practice assignments" and "emphasis on expression of natural affect" were not. Overall competence ratings for the four components were not significantly associated with improvement in depressive symptoms. Findings contribute to increased understanding of the relationship between the key treatment components of CPT and symptom change. Copyright © 2016. Published by Elsevier Ltd.
How best to measure implementation of school health curricula: a comparison of three measures.
Resnicow, K; Davis, M; Smith, M; Lazarus-Yaroch, A; Baranowski, T; Baranowski, J; Doyle, C; Wang, D T
1998-06-01
The impact of school health education programs is often attenuated by inadequate teacher implementation. Using data from a school-based nutrition education program delivered in a sample of fifth graders, this study examines the discriminant and predictive validity of three measures of curriculum implementation: classroom observation of fidelity, and two measures of completeness, a teacher self-report questionnaire and a post-implementation interview. A fourth measure, obtained during teacher observations, that assessed student and teacher interaction and student receptivity to the curriculum (labeled Rapport) was also obtained. Predictive validity was determined by examining the association of implementation measures with three study outcomes: health knowledge, asking behaviors related to fruit and vegetables, and fruit and vegetable intake, assessed by 7-day diary. Of the 37 teachers observed, 21 were observed for two sessions and 16 were observed once. Implementation measures were moderately correlated, an indication of discriminant validity. Predictive validity analyses indicated that the observed fidelity, Rapport and interview measures were significantly correlated with post-test student knowledge. The association between health knowledge and the observed fidelity (based on dual observation only), Rapport and interview measures remained significant after adjustment for pre-test knowledge values. None of the implementation variables were significantly associated with student fruit and vegetable intake or asking behaviors controlling for pre-test values. These results indicate that the teacher self-report questionnaire was not a valid measure of implementation completeness in this study. Post-implementation completeness interviews and dual observations of fidelity and Rapport appear to be more valid, and largely independent, methods of implementation assessment.
2016-05-24
experimental data. However, the time and length scales, and energy deposition rates in the canonical laboratory flames that have been studied over the ... is to obtain high-fidelity experimental data critically needed to validate research codes at relevant conditions, and to develop systematic and ... validated with experimental data.
High-Fidelity Computational Aerodynamics of the Elytron 4S UAV
NASA Technical Reports Server (NTRS)
Ventura Diaz, Patricia; Yoon, Seokkwan; Theodore, Colin R.
2018-01-01
High-fidelity Computational Fluid Dynamics (CFD) simulations have been carried out for the Elytron 4S Unmanned Aerial Vehicle (UAV), also known as the converticopter "proto12". It is the scaled wind tunnel model of the Elytron 4S, an Urban Air Mobility (UAM) concept: a tilt-wing, box-wing rotorcraft capable of Vertical Take-Off and Landing (VTOL). The three-dimensional unsteady Navier-Stokes equations are solved on overset grids employing high-order accurate schemes, dual-time stepping, and a hybrid turbulence model using NASA's CFD code OVERFLOW. The Elytron 4S UAV has been simulated in airplane mode and in helicopter mode.
Application of a High-Fidelity Icing Analysis Method to a Model-Scale Rotor in Forward Flight
NASA Technical Reports Server (NTRS)
Narducci, Robert; Orr, Stanley; Kreeger, Richard E.
2012-01-01
An icing analysis process involving the loose coupling of OVERFLOW-RCAS for rotor performance prediction with LEWICE3D for thermal analysis and ice accretion is applied to a model-scale rotor for validation. The process offers high-fidelity rotor analysis for non-iced and iced rotor performance evaluation that accounts for the interaction of nonlinear aerodynamics with blade elastic deformations. Ice accumulation prediction also involves loosely coupled data exchanges between OVERFLOW and LEWICE3D to produce accurate ice shapes. Validation of the process uses data collected in the 1993 icing test involving Sikorsky's Powered Force Model. Non-iced and iced rotor performance predictions are compared to experimental measurements, as are predicted ice shapes.
Hand ultrasound: a high-fidelity simulation of lung sliding.
Shokoohi, Hamid; Boniface, Keith
2012-09-01
Simulation training has been effectively used to integrate didactic knowledge and technical skills in emergency and critical care medicine. In this article, we introduce a novel model for simulating lung ultrasound and the features of lung sliding and pneumothorax by performing a hand ultrasound. The simulation model involves scanning the palmar aspect of the hand to create normal lung sliding in varying modes of scanning and to mimic ultrasound features of pneumothorax, including the "stratosphere/barcode sign" and "lung point." The simple, reproducible, and readily available simulation model we describe demonstrates a high-fidelity simulation surrogate that can be used to rapidly illustrate the signs of normal and abnormal lung sliding at the bedside. © 2012 by the Society for Academic Emergency Medicine.
NASA Astrophysics Data System (ADS)
Warlick, Kent M.
While the addition of short fiber to 3D printed articles has increased structural performance, ultimate gains will only be realized through the introduction of continuous reinforcement placed along pre-planned load paths. Most additive manufacturing research focusing on the addition of continuous reinforcement has revolved around utilization of a prefabricated composite filament, or a fiber and matrix mixed within a hot end prior to deposition on a printing surface, such that conventional extrusion-based FDM can be applied. Although stronger 3D printed parts can be made in this manner, high-quality homogeneous composites are not possible due to fiber-dominated regions, matrix-dominated regions, and voids present between adjacent filaments. Conventional composite manufacturing processes are much better at creating homogeneous composites; however, the layer-by-layer approach in which they are made inhibits the alignment of reinforcement with loads. Automated fiber placement techniques utilize in-plane bending deformation of the tow to facilitate tow steering. Because fibers buckle on the inner radius of curves, manufacturers recommend a minimum curvature for path placement with this technique. A method called continuous tow shearing has shown promise to enable the placement of tows in complex patterns without the tow buckling, spreading, and separation inherent in conventional forms of automated reinforcement positioning. The current work employs fused deposition modeling hardware and the continuous tow shearing technique to manufacture high-quality fiber reinforced composites with high positional fidelity, varying continuous reinforcement orientations within a layer, and incorporated plastic elements, enabling the ultimate gains in structural performance possible. A mechanical system combining concepts of additive manufacturing with fiber placement via filament winding was developed.
Paths with and without the tension inherent in filament winding were analyzed through microscopy in order to examine best- and worst-case scenarios. High-quality fiber reinforced composite materials, in terms of low void content, high fiber volume fractions, and homogeneity in microstructure, were manufactured in both of these scenarios. In order to improve fidelity and quality in fiber path transition regions, a forced air cooling manifold was designed, printed, and implemented into the current system. To better understand the composite performance that results from varying pertinent manufacturing parameters, the effect of feed rate, hot end temperature, forced air cooling, and deposition surface (polypropylene and previously deposited glass polypropylene commingled tow) on interply performance, microstructure, and positional fidelity was analyzed. Interply performance, in terms of average maximum load and average peel strength, was quantified through a t-peel test of the bonding quality between two surfaces. With use of forced air cooling, minor decreases in average peel strength were present due to a reduction in tow deposition temperature, which was found to be the variable most indicative of performance. Average maximum load was comparable between the forced air cooled and non-air cooled samples. Microstructure was evaluated through characterization of composite area, void content, and flash percentage. Low void contents, mostly between five and seven percent, were attained. Further reduction of this void content to two percent is possible through higher processing temperatures; however, reduced composite area, low average peel strength performance, and the presence of smoke during manufacturing implied that thermal degradation of the polypropylene matrix occurred in samples with higher processing temperatures. Positional fidelity was measured through calculations of shear angle, shift width, and error of a predefined path.
While positional fidelity variation was low with a polypropylene deposition surface, forced air cooling is necessary to achieve fidelity on top of an already deposited tow surface, as evidenced by the fifty-six percent reduction in error tolerance profile achieved. Lastly, proof-of-concept articles with unique fiber paths and neat plastic elements incorporated were produced to demonstrate fiber placement along pre-planned load paths and the ability to achieve greater structural efficiency through the use of less material. The results show that high positional fidelity and high-quality composites can be produced through the use of the tow shearing technique implemented in the developed mechanical system. The implementation of forced air cooling was critical in achieving fidelity and quality in transition regions. Alignment of continuous reinforcement with pre-planned load paths was demonstrated in the proof-of-concept article with varying fiber orientations within a layer. Combining fused deposition modeling of plastic with the placement of continuous reinforcement enabled a honeycomb composite to be produced with higher specific properties than traditional composites. Thus, the current system demonstrated a greater capability of achieving ultimate gains in structural performance than previously possible.
NASA Astrophysics Data System (ADS)
Sperber, K. R.; Palmer, T. N.
1996-11-01
The interannual variability of rainfall over the Indian subcontinent, the African Sahel, and the Nordeste region of Brazil has been evaluated in 32 models for the period 1979-88 as part of the Atmospheric Model Intercomparison Project (AMIP). The interannual variations of Nordeste rainfall are the most readily captured, owing to the intimate link with Pacific and Atlantic sea surface temperatures. The precipitation variations over India and the Sahel are less well simulated. Additionally, an Indian monsoon wind shear index was calculated for each model. Evaluation of the interannual variability of a wind shear index over the summer monsoon region indicates that the models exhibit greater fidelity in capturing the large-scale dynamic fluctuations than the regional-scale rainfall variations. A rainfall/SST teleconnection quality control was used to objectively stratify model performance. Skill scores improved for those models that qualitatively simulated the observed rainfall/El Niño-Southern Oscillation SST correlation pattern. This subset of models also had a rainfall climatology that was in better agreement with observations, indicating a link between systematic model error and the ability to simulate interannual variations. A suite of six European Centre for Medium-Range Weather Forecasts (ECMWF) AMIP runs (differing only in their initial conditions) has also been examined. As observed, all-India rainfall was enhanced in 1988 relative to 1987 in each of these realizations. All-India rainfall variability during other years showed little or no predictability, possibly due to internal chaotic dynamics associated with intraseasonal monsoon fluctuations and/or unpredictable land surface process interactions. The interannual variations of Nordeste rainfall were best represented. The State University of New York at Albany/National Center for Atmospheric Research Genesis model was run in five initial condition realizations.
In this model, the Nordeste rainfall variability was also best reproduced. However, for all regions the skill was less than that of the ECMWF model. The relationships of the all-India and Sahel rainfall/SST teleconnections with horizontal resolution, convection scheme closure, and numerics have been evaluated. Models with horizontal resolution at or above T42 performed more poorly than lower-resolution models. The higher-resolution models were predominantly spectral. At low resolution, spectral versus gridpoint numerics performed with nearly equal verisimilitude. At low resolution, moisture convergence closure was slightly more preferable than other convective closure techniques. At high resolution, the models that used moisture convergence closure performed very poorly, suggesting that moisture convergence may be problematic for models with horizontal resolution at or above T42.
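The rainfall/SST teleconnection quality control described above can be sketched as a simple sign-and-strength screen on the correlation between a model's rainfall series and an ENSO SST index. The function name, the threshold, and all data below are illustrative placeholders, not the study's actual criterion or AMIP output.

```python
import numpy as np

# Illustrative sketch of a rainfall/SST teleconnection quality control:
# a model passes if its simulated rainfall correlates with an ENSO SST
# index in the observed (negative, for all-India rainfall) sense and
# with sufficient strength. Threshold and data are assumptions.
def passes_teleconnection_qc(rainfall, nino_sst, observed_sign=-1.0, threshold=0.3):
    r = np.corrcoef(rainfall, nino_sst)[0, 1]   # Pearson correlation
    return bool(np.sign(r) == observed_sign and abs(r) >= threshold)

sst = np.array([1.0, -0.5, 0.3, -1.2, 0.8, -0.4])  # synthetic SST index anomalies
rain_good = -0.9 * sst                              # anti-correlated, as observed
rain_bad = 0.9 * sst                                # wrong-signed teleconnection

print(passes_teleconnection_qc(rain_good, sst))  # passes the screen
print(passes_teleconnection_qc(rain_bad, sst))   # fails the screen
```

Skill scores would then be compared between the subset of models passing this screen and the rest, as in the stratification above.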
PHYSICS OF ECLIPSING BINARIES. II. TOWARD THE INCREASED MODEL FIDELITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prša, A.; Conroy, K. E.; Horvat, M.
The precision of photometric and spectroscopic observations has been systematically improved in the last decade, mostly thanks to space-borne photometric missions and ground-based spectrographs dedicated to finding exoplanets. The field of eclipsing binary stars strongly benefited from this development. Eclipsing binaries serve as critical tools for determining fundamental stellar properties (masses, radii, temperatures, and luminosities), yet the models are not capable of reproducing observed data well, either because of the missing physics or because of insufficient precision. This led to a predicament where radiative and dynamical effects, until now buried in noise, started showing up routinely in the data, but were not accounted for in the models. PHOEBE (PHysics Of Eclipsing BinariEs; http://phoebe-project.org) is an open source modeling code for computing theoretical light and radial velocity curves that addresses both problems by incorporating missing physics and by increasing the computational fidelity. In particular, we discuss triangulation as a superior surface discretization algorithm, meshing of rotating single stars, light travel time effects, advanced phase computation, volume conservation in eccentric orbits, and improved computation of local intensity across the stellar surfaces that includes the photon-weighted mode, the enhanced limb darkening treatment, the better reflection treatment, and Doppler boosting. Here we present the concepts on which PHOEBE is built and proofs of concept that demonstrate the increased model fidelity.
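One of the effects listed above, light travel time, has a simple closed form that can be sketched independently of PHOEBE's implementation: for a circular orbit, light from the far side of the orbit arrives later by z/c, where z is the line-of-sight displacement of the emitting star. The function below is a minimal illustration under that circular-orbit assumption, not PHOEBE's code.

```python
import math

C = 299_792_458.0          # speed of light, m/s
AU = 1.495978707e11        # astronomical unit, m

# Minimal sketch of the light travel time (Romer) delay that eclipsing-binary
# codes fold into the phase computation. Circular orbit assumed; phase is
# measured from conjunction, and inclination i gives the projection onto
# the line of sight: z = a * sin(i) * sin(2*pi*phase).
def ltte_delay(a_au, incl_deg, phase):
    """Delay in seconds at orbital phase in [0, 1)."""
    z = a_au * AU * math.sin(math.radians(incl_deg)) * math.sin(2 * math.pi * phase)
    return z / C

# A 1 au orbit seen edge-on: roughly 499 s of delay at quadrature.
print(round(ltte_delay(1.0, 90.0, 0.25), 1))   # ~499.0 s
print(ltte_delay(1.0, 90.0, 0.0))              # 0.0 at conjunction
```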
Creation of a High-fidelity, Low-cost Pediatric Skull Fracture Ultrasound Phantom.
Soucy, Zachary P; Mills, Lisa; Rose, John S; Kelley, Kenneth; Ramirez, Francisco; Kuppermann, Nathan
2015-08-01
Over the past decade, point-of-care ultrasound has become a common tool used for both procedures and diagnosis. Developing high-fidelity phantoms is critical for training in new and novel point-of-care ultrasound applications. Detecting skull fractures on ultrasound imaging in the younger-than-2-year-old patient is an emerging area of point-of-care ultrasound research. Identifying a skull fracture on ultrasound imaging in this age group requires knowledge of the appearance and location of sutures to distinguish them from fractures. There are currently no commercially available pediatric skull fracture models. We outline a novel approach to building a cost-effective, simple, high-fidelity pediatric skull fracture phantom to meet a unique training requirement. © 2015 by the American Institute of Ultrasound in Medicine.
Noise in gene expression is coupled to growth rate.
Keren, Leeat; van Dijk, David; Weingarten-Gabbay, Shira; Davidi, Dan; Jona, Ghil; Weinberger, Adina; Milo, Ron; Segal, Eran
2015-12-01
Genetically identical cells exposed to the same environment display variability in gene expression (noise), with important consequences for the fidelity of cellular regulation and biological function. Although population average gene expression is tightly coupled to growth rate, the effects of changes in environmental conditions on expression variability are not known. Here, we measure the single-cell expression distributions of approximately 900 Saccharomyces cerevisiae promoters across four environmental conditions using flow cytometry, and find that gene expression noise is tightly coupled to the environment and is generally higher at lower growth rates. Nutrient-poor conditions, which support lower growth rates, display elevated levels of noise for most promoters, regardless of their specific expression values. We present a simple model of noise in expression that results from having an asynchronous population, with cells at different cell-cycle stages, and with different partitioning of the cells between the stages at different growth rates. This model predicts non-monotonic global changes in noise at different growth rates as well as overall higher variability in expression for cell-cycle-regulated genes in all conditions. The consistency between this model and our data, as well as with noise measurements of cells growing in a chemostat at well-defined growth rates, suggests that cell-cycle heterogeneity is a major contributor to gene expression noise. Finally, we identify gene and promoter features that play a role in gene expression noise across conditions. Our results show the existence of growth-related global changes in gene expression noise and suggest their potential phenotypic implications. © 2015 Keren et al.; Published by Cold Spring Harbor Laboratory Press.
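The cell-cycle-heterogeneity argument above can be illustrated with a deliberately simplified two-stage model (my simplification, not the paper's full model): cells before gene replication carry one gene copy, cells after carry two, and the fraction f in the two-copy stage shifts with growth rate. Copy-number mixing alone then gives CV^2 = f(1-f)/(1+f)^2, which vanishes at f = 0 and f = 1 and peaks in between, reproducing the non-monotonic dependence on growth rate that the abstract describes.

```python
import math

# Toy two-stage cell-cycle model: expression is proportional to gene copy
# number (1 before replication, 2 after). With fraction f of cells in the
# two-copy stage, the mixture alone yields
#   mean = 1 + f,  var = f * (1 - f),  so  CV = sqrt(f(1-f)) / (1 + f).
def mixture_cv(f):
    mean = 1.0 + f          # E[copies], taking single-copy expression x = 1
    var = f * (1.0 - f)     # Var[copies] of the two-point mixture
    return math.sqrt(var) / mean

print(mixture_cv(0.0))   # fully synchronous, one copy: no noise -> 0.0
print(mixture_cv(1.0))   # fully replicated: no copy-number noise -> 0.0
print(mixture_cv(0.5))   # mixed population: intermediate maximum
```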
Steen, Valerie A.; Powell, Abby N.
2012-01-01
We examined wetland selection by the Black Tern (Chlidonias niger), a species that breeds primarily in the prairie pothole region, has experienced population declines, and is difficult to manage because of low site fidelity. To characterize its selection of wetlands in this region, we surveyed 589 wetlands throughout North and South Dakota. We documented breeding at 5% and foraging at 17% of wetlands. We created predictive habitat models with a machine-learning algorithm, Random Forests, to explore the relative role of local wetland characteristics and those of the surrounding landscape and to evaluate which characteristics were important to predicting breeding versus foraging. We also examined area-dependent wetland selection while addressing the passive sampling bias by replacing occurrence of terns in the models with an index of density. Local wetland variables were more important than landscape variables in predictions of occurrence of breeding and foraging. Wetland size was more important to prediction of foraging than of breeding locations, while floating matted vegetation was more important to prediction of breeding than of foraging locations. The amount of seasonal wetland in the landscape was the only landscape variable important to prediction of both foraging and breeding. Models based on a density index indicated that wetland selection by foraging terns may be more area dependent than that by breeding terns. Our study provides some of the first evidence for differential breeding and foraging wetland selection by Black Terns and for a more limited role of landscape effects and area sensitivity than has been previously shown.
NASA Technical Reports Server (NTRS)
Pineda, Evan J.; Fassin, Marek; Bednarcyk, Brett A.; Reese, Stefanie; Simon, Jaan-Willem
2017-01-01
Three different multiscale models, based on the generalized and high-fidelity method of cells micromechanics models, were developed and used to predict the elastic properties of C/C-SiC composites. In particular, the following multiscale modeling strategies were employed: concurrent multiscale modeling of all phases using the generalized method of cells; synergistic (two-way coupling in space) multiscale modeling with the generalized method of cells; and hierarchical (one-way coupling in space) multiscale modeling with the high-fidelity generalized method of cells. The three models are validated against data from a hierarchical multiscale finite element model in the literature for a repeating unit cell of C/C-SiC. Furthermore, the multiscale models are used in conjunction with classical lamination theory to predict the stiffness of C/C-SiC plates manufactured via a wet filament winding and liquid silicon infiltration process recently developed by the German Aerospace Center (DLR).
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-Sheng R.; Allen Christopher S.
2010-01-01
Acoustic modeling can be used to identify key noise sources, determine and analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources and later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment was developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons with the model showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand; this contrasts with earlier studies, in which Reference Sound Sources (RSS) with known sound power levels were used. Comparisons of the modeling results with measurements in the mockup again showed excellent agreement. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between the ECLSS wall and the mockup wall. The effects of sealing the gap and adding sound-absorptive treatment to the ECLSS wall were also modeled and validated.
Nucleobase but not Sugar Fidelity is Maintained in the Sabin I RNA-Dependent RNA Polymerase.
Liu, Xinran; Musser, Derek M; Lee, Cheri A; Yang, Xiaorong; Arnold, Jamie J; Cameron, Craig E; Boehr, David D
2015-10-26
The Sabin I poliovirus live, attenuated vaccine strain encodes four amino acid changes (i.e., D53N, Y73H, K250E, and T362I) in the RNA-dependent RNA polymerase (RdRp). We have previously shown that the T362I substitution leads to a lower-fidelity RdRp, and viruses encoding this variant are attenuated in a mouse model of poliovirus. Given these results, it was surprising that the nucleotide incorporation rate and nucleobase fidelity of the Sabin I RdRp are similar to those of the wild-type enzyme, although the Sabin I RdRp is less selective against nucleotides with modified sugar groups. We suggest that the other Sabin amino acid changes (i.e., D53N, Y73H, K250E) help to re-establish nucleotide incorporation rates and nucleotide discrimination near wild-type levels, which may be a requirement for the propagation of the virus and its efficacy as a vaccine strain. These results also suggest that the nucleobase fidelity of the Sabin I RdRp likely does not contribute to viral attenuation.
Comparison of Low-Thrust Control Laws for Application in Planetocentric Space
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Sjauw, Waldy K.; Smith, David A.
2014-01-01
Recent interest at NASA for the application of solar electric propulsion for the transfer of significant payloads in cislunar space has led to the development of high-fidelity simulations of such missions. With such transfers involving transfer times on the order of months, simulation time can be significant. In the past, the examination of such missions typically began with the use of lower-fidelity trajectory optimization tools such as SEPSPOT to develop and tune guidance laws which delivered optimal or near-optimal trajectories, where optimal is generally defined as minimizing propellant expenditure or time of flight. The transfer of these solutions to a high-fidelity simulation is typically an iterative process whereby the initial solution may nearly, but not precisely, meet mission objectives. Further tuning of the guidance algorithm is typically necessary when accounting for high-fidelity perturbations such as those due to more detailed gravity models, secondary-body effects, solar radiation pressure, etc. While trajectory optimization is a useful method for determining optimal performance metrics, algorithms which deliver nearly optimal performance with minimal tuning are an attractive alternative.
High-Fidelity Buckling Analysis of Composite Cylinders Using the STAGS Finite Element Code
NASA Technical Reports Server (NTRS)
Hilburger, Mark W.
2014-01-01
Results from previous shell buckling studies are presented that illustrate some of the unique and powerful capabilities in the STAGS finite element analysis code that have made it an indispensable tool in structures research at NASA over the past few decades. In particular, prototypical results from the development and validation of high-fidelity buckling simulations are presented for several unstiffened thin-walled compression-loaded graphite-epoxy cylindrical shells along with a discussion on the specific methods and user-defined subroutines in STAGS that are used to carry out the high-fidelity simulations. These simulations accurately account for the effects of geometric shell-wall imperfections, shell-wall thickness variations, local shell-wall ply-gaps associated with the fabrication process, shell-end geometric imperfections, nonuniform applied end loads, and elastic boundary conditions. The analysis procedure uses a combination of nonlinear quasi-static and transient dynamic solution algorithms to predict the prebuckling and unstable collapse response characteristics of the cylinders. Finally, the use of high-fidelity models in the development of analysis-based shell-buckling knockdown (design) factors is demonstrated.
Best Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model
Seo, Dong Gi; Weiss, David J.
2015-01-01
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. The Ds-optimality item selection improved θ estimates with respect to a general factor, and either D- or A-optimality improved estimates of the group factors in three bifactor pattern designs under two multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method under most conditions, and MAP showed lower observed standard errors than EAP under most conditions, except for a general factor condition using Ds-optimality item selection. PMID:29795848
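To make the item-selection criterion concrete, the sketch below selects the next item in a multidimensional 2PL CAT by maximizing the determinant of the accumulated Fisher information at the provisional trait estimate (D-optimality); for an item with discrimination vector a and intercept d, the information at θ is P(1 − P)·aaᵀ. The toy item bank and prior information matrix are assumptions for illustration, not the study's configuration.

```python
import numpy as np

def item_information(a, d, theta):
    """Fisher information of a multidimensional 2PL item at theta:
    I = P(1 - P) a a^T, with P the logistic response probability."""
    p = 1.0 / (1.0 + np.exp(-(a @ theta + d)))
    return p * (1.0 - p) * np.outer(a, a)

def d_optimal_next_item(bank_a, bank_d, info_so_far, theta, used):
    """Return the unused item maximizing det(info_so_far + I_item)."""
    best_item, best_det = None, -np.inf
    for j in range(len(bank_d)):
        if j in used:
            continue
        det = np.linalg.det(info_so_far + item_information(bank_a[j], bank_d[j], theta))
        if det > best_det:
            best_item, best_det = j, det
    return best_item

# Toy bifactor-style bank: column 0 loads on the general factor,
# columns 1-2 on group factors (all values illustrative).
rng = np.random.default_rng(1)
bank_a = np.abs(rng.normal(1.0, 0.3, (20, 3)))
bank_d = rng.normal(0.0, 1.0, 20)

info = 0.1 * np.eye(3)  # small prior information keeps the determinant nonzero
theta = np.zeros(3)     # provisional trait estimate
next_item = d_optimal_next_item(bank_a, bank_d, info, theta, used=set())
```

Ds-optimality differs in targeting a subset of the trait vector (e.g., the general factor alone) rather than the determinant over all dimensions, and A-optimality minimizes the trace of the inverse information instead.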
Spatial distribution of Antarctic mass flux due to iceberg transport
NASA Astrophysics Data System (ADS)
Comeau, Darin; Hunke, Elizabeth; Turner, Adrian
Under a changing climate that sees amplified warming in the polar regions, the stability of the West Antarctic ice sheet and its impact on sea level rise is of great importance. Icebergs are at the interface of the land-ice, ocean, and sea ice systems, and represent approximately half of the mass flux from the Antarctic ice sheet to the ocean. Calved icebergs transport freshwater away from the coast and exchange heat with the ocean, thereby affecting stratification and circulation, with subsequent indirect thermodynamic effects on the sea ice system. Icebergs also interact dynamically with the surrounding sea ice pack and serve as nutrient sources for biogeochemical activity. The spatial pattern of these fluxes from the continent to the ocean is generally poorly represented in current global climate models. We are implementing an iceberg model into the new Accelerated Climate Model for Energy (ACME) within the MPAS-Seaice model, which uses a variable-resolution, unstructured-grid framework. This capability will allow for full coupling with the land ice model to inform calving fluxes, and with the ocean model for freshwater and heat exchange, giving a complete representation of the iceberg lifecycle and increasing the fidelity of ACME southern cryosphere simulations.
NASA Technical Reports Server (NTRS)
Connolly, Joseph W.; Kopasakis, George; Carlson, Jan-Renee; Woolwine, Kyle
2015-01-01
This paper covers the development of an integrated nonlinear dynamic model for a variable cycle turbofan engine, supersonic inlet, and convergent-divergent nozzle that can be integrated with an aeroelastic vehicle model to create an overall Aero-Propulso-Servo-Elastic (APSE) modeling tool. The primary focus of this study is to provide a means to capture relevant thrust dynamics of a full supersonic propulsion system by using relatively simple quasi-one-dimensional computational fluid dynamics (CFD) methods that allow for accurate control algorithm development and capture the key aspects of the thrust to feed into an APSE model. Previously developed propulsion system component models are used for this study of the fully integrated propulsion system. An overview of the methodology is presented for the modeling of each propulsion component, with a focus on its associated coupling for the overall model. To conduct APSE studies, the described dynamic propulsion system model is integrated into a high-fidelity CFD model of the full vehicle capable of conducting aeroelastic studies. Dynamic thrust analysis for the quasi-one-dimensional dynamic propulsion system model is presented, along with an initial three-dimensional flow field model of the engine integrated into a supersonic commercial transport.
Further Investigations of Gravity Modeling on Surface-Interacting Vehicle Simulations
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2009-01-01
A vehicle simulation is "surface-interacting" if the state of the vehicle (position, velocity, and acceleration) relative to the surface is important. Surface-interacting simulations perform ascent, entry, descent, landing, surface travel, or atmospheric flight. The dynamics of surface-interacting simulations are influenced by the modeling of gravity. Gravity is the sum of gravitation and the centrifugal acceleration due to the world's rotation. Both components are functions of position relative to the world's center, and that position for a given set of geodetic coordinates (latitude, longitude, and altitude) depends on the world model (world shape and dynamics). Thus, gravity fidelity depends on the fidelities of the gravitation model and the world model and on the interaction of the gravitation and world models. A surface-interacting simulation cannot treat gravitation separately from the world model. This paper examines the actual performance of different pairs of world and gravitation models (or direct gravity models) on the travel of a subsonic civil transport in level flight under various starting conditions.
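The decomposition of gravity described above can be sketched directly: in a world-fixed frame, effective gravity is gravitation plus the centrifugal term −ω × (ω × r). The point-mass gravitation below is a deliberate simplification standing in for whichever gravitation model the simulation pairs with its world model; a high-fidelity pairing would use a spherical-harmonic gravitation model over a matching world shape.

```python
import numpy as np

GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
OMEGA = 7.292115e-5  # Earth's rotation rate, rad/s

def effective_gravity(r_ecef):
    """Gravity = gravitation + centrifugal acceleration in a world-fixed
    frame with the spin axis along +z. Uses point-mass gravitation as a
    simplification; the world model fixes how geodetic coordinates map
    to this Cartesian position."""
    r_ecef = np.asarray(r_ecef, dtype=float)
    r = np.linalg.norm(r_ecef)
    gravitation = -GM / r**3 * r_ecef
    omega = np.array([0.0, 0.0, OMEGA])
    centrifugal = -np.cross(omega, np.cross(omega, r_ecef))
    return gravitation + centrifugal

# On the equator the centrifugal term points outward, so effective
# gravity is slightly weaker than the gravitational pull alone.
g_equator = effective_gravity([6378137.0, 0.0, 0.0])
```

The same position expressed through a different world model (e.g., spherical versus ellipsoidal) yields a different Cartesian r for identical geodetic coordinates, which is why gravitation and world models cannot be chosen independently.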
Comparison of the UAF Ionosphere Model with Incoherent-Scatter Radar Data
NASA Astrophysics Data System (ADS)
McAllister, J.; Maurits, S.; Kulchitsky, A.; Watkins, B.
2004-12-01
The UAF Eulerian Parallel Polar Ionosphere Model (UAF EPPIM) is a first-principles three-dimensional time-dependent representation of the northern polar ionosphere (>50 degrees north latitude). The model routinely generates short-term (~2 hours) ionospheric forecasts in real-time. It may also be run in post-processing/batch mode for specific time periods, including long-term (multi-year) simulations. The model code has been extensively validated (~100k comparisons/model year) against ionosonde foF2 data during quiet and moderate solar activity in 2002-2004 with reasonable fidelity (typical relative RMS 10-20% for summer daytime, 30-50% winter nighttime). However, ionosonde data is frequently not available during geomagnetic disturbances. The objective of the work reported here is to compare model outputs with available incoherent-scatter radar data during the storm period of October-November 2003. Model accuracy is examined for this period and compared to model performance during geomagnetically quiet and moderate circumstances. Possible improvements are suggested which are likely to boost model fidelity during storm conditions.
Gravity Modeling Effects on Surface-Interacting Vehicles in Supersonic Flight
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2010-01-01
A vehicle simulation is "surface-interacting" if the state of the vehicle (position, velocity, and acceleration) relative to the surface is important. Surface-interacting simulations perform ascent, entry, descent, landing, surface travel, or atmospheric flight. The dynamics of surface-interacting simulations are influenced by the modeling of gravity. Gravity is the sum of gravitation and the centrifugal acceleration due to the world's rotation. Both components are functions of position relative to the world's center, and that position for a given set of geodetic coordinates (latitude, longitude, and altitude) depends on the world model (world shape and dynamics). Thus, gravity fidelity depends on the fidelities of the gravitation model and the world model and on the interaction of these two models. A surface-interacting simulation cannot treat gravitation separately from the world model. This paper examines the actual performance of different pairs of world and gravitation models (or direct gravity models) on the travel of a supersonic aircraft in level flight under various starting conditions.
Wind Farm Flow Modeling using an Input-Output Reduced-Order Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter
Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
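The POD-plus-system-identification combination can be sketched as follows: POD modes come from the SVD of a snapshot matrix, and a linear input-output model of the modal coefficients is then fit by least squares. The synthetic snapshot data and the simple one-step least-squares fit below are assumptions for illustration; the paper's LES data and its particular identification technique are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "flow" snapshots: n_grid spatial points, n_t time steps, and
# one scalar input u (standing in for a turbine control signal).
n_grid, n_t, r = 200, 120, 5
true_modes = rng.normal(size=(n_grid, r))
modal_signal = np.cumsum(rng.normal(scale=0.1, size=(r, n_t)), axis=1)
X = true_modes @ modal_signal + 0.01 * rng.normal(size=(n_grid, n_t))
u = rng.normal(size=n_t)

# 1) POD: dominant spatial structures from the SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :r]  # reduced basis (POD modes)
a = Phi.T @ X   # modal coefficient time series (reduced state)

# 2) System identification: fit a_{k+1} = A a_k + B u_k by least squares,
# yielding an input-output reduced-order model of the modal dynamics.
Z = np.vstack([a[:, :-1], u[None, :-1]])
AB = a[:, 1:] @ np.linalg.pinv(Z)
A, B = AB[:, :r], AB[:, r:]

# Relative reconstruction error of the rank-r POD basis.
err = np.linalg.norm(X - Phi @ a) / np.linalg.norm(X)
```

The reduced state evolves in r dimensions rather than n_grid, which is what makes the model cheap enough for control design while the POD basis retains the dominant flow structures.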
NASA Astrophysics Data System (ADS)
Henderson, Laura S.; Subbarao, Kamesh
2017-12-01
This work presents a case wherein the selection of models when producing synthetic light curves affects the estimation of the size of unresolved space objects. Through this case, "inverse crime" (using the same model for the generation of synthetic data and for the data inversion) is illustrated. This is done by using two models to produce the synthetic light curve and later invert it. It is shown here that the choice of model indeed affects the estimation of the shape/size parameters. When a higher-fidelity model (henceforth, the one that results in the smallest error residuals after the crime is committed) is used both to create and to invert the light curve, the estimates of the shape/size parameters are significantly better than those obtained when a comparatively lower-fidelity model is used for the estimation. It is therefore of utmost importance to consider the choice of models when producing synthetic data that will later be inverted, as the results might be misleadingly optimistic.
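The inverse crime can be demonstrated with a toy scalar example: synthetic data are generated with a higher-fidelity light-curve model and then inverted with both the same model and a lower-fidelity one that omits a harmonic. Both model forms and the parameter p below are hypothetical illustrations, not the paper's models.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200)
p_true = 2.0  # hypothetical size parameter scaling the light curve

# Two hypothetical forward models: the higher-fidelity one includes a
# second-harmonic term that the lower-fidelity one omits.
def model_hi(p):
    return p * (np.sin(t) ** 2 + 0.3 * np.sin(2.0 * t) ** 2)

def model_lo(p):
    return p * np.sin(t) ** 2

data = model_hi(p_true)  # synthetic "measured" light curve

def fit(model):
    # The model is linear in p, so least squares reduces to a projection:
    # p_hat = <data, basis> / <basis, basis>.
    basis = model(1.0)
    return float(data @ basis / (basis @ basis))

p_crime = fit(model_hi)   # same model generates and inverts: exact recovery
p_honest = fit(model_lo)  # mismatched inversion: noticeably biased estimate
```

The inverse-crime fit recovers p exactly with zero residual, precisely the misleading optimism the abstract warns about, while the mismatched fit exposes the bias a real inversion would face.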
Validation of a Low-Thrust Mission Design Tool Using Operational Navigation Software
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Knittel, Jeremy M.; Williams, Ken; Stanbridge, Dale; Ellison, Donald H.
2017-01-01
Design of flight trajectories for missions employing solar electric propulsion requires a suitably high-fidelity design tool. In this work, the Evolutionary Mission Trajectory Generator (EMTG) is presented as a medium-high fidelity design tool that is suitable for mission proposals. EMTG is validated against the high-heritage deep-space navigation tool MIRAGE, demonstrating both the accuracy of EMTG's model and an operational mission design and navigation procedure using both tools. The validation is performed using a benchmark mission to the Jupiter Trojans.
Deterministic quantum teleportation of atomic qubits.
Barrett, M D; Chiaverini, J; Schaetz, T; Britton, J; Itano, W M; Jost, J D; Knill, E; Langer, C; Leibfried, D; Ozeri, R; Wineland, D J
2004-06-17
Quantum teleportation provides a means to transport quantum information efficiently from one location to another, without the physical transfer of the associated quantum-information carrier. This is achieved by using the non-local correlations of previously distributed, entangled quantum bits (qubits). Teleportation is expected to play an integral role in quantum communication and quantum computation. Previous experimental demonstrations have been implemented with optical systems that used both discrete and continuous variables, and with liquid-state nuclear magnetic resonance. Here we report unconditional teleportation of massive particle qubits using atomic (9Be+) ions confined in a segmented ion trap, which aids individual qubit addressing. We achieve an average fidelity of 78 per cent, which exceeds the fidelity of any protocol that does not use entanglement. This demonstration is also important because it incorporates most of the techniques necessary for scalable quantum information processing in an ion-trap system.
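The fidelity figure can be unpacked with a small numeric sketch: for a pure target state |ψ⟩ and teleported output ρ, F = ⟨ψ|ρ|ψ⟩, and any strategy that uses no entanglement is limited to an average F of 2/3. The white-noise output model and its mixing weight below are hypothetical, chosen only so the resulting F matches the reported 78 per cent.

```python
import numpy as np

def fidelity(psi, rho):
    """Fidelity of a pure target state with an output density matrix:
    F = <psi| rho |psi>."""
    return float(np.real(np.conjugate(psi) @ rho @ psi))

psi = np.array([1.0, 1.0]) / np.sqrt(2.0)  # target qubit state
v = 0.56  # mixing weight chosen so F = v + (1 - v)/2 = 0.78

# Hypothetical teleported output: target state mixed with white noise.
rho = v * np.outer(psi, np.conjugate(psi)) + (1.0 - v) * np.eye(2) / 2.0

F = fidelity(psi, rho)
classical_limit = 2.0 / 3.0  # best average fidelity without entanglement
```

Exceeding the 2/3 bound is what certifies that the protocol genuinely consumed entanglement rather than measuring and re-preparing the qubit.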
Shinn-Cunningham, Barbara
2017-10-17
This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. The results from neuroscience and psychoacoustics are reviewed. In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. http://cred.pubs.asha.org/article.aspx?articleid=2601617.