Sample records for analytical modeling effort

  1. Analytical and experimental study of control effort associated with model reference adaptive control

    NASA Technical Reports Server (NTRS)

    Messer, R. S.; Haftka, R. T.; Cudney, H. H.

    1992-01-01

    Numerical simulation results for the performance of model reference adaptive control (MRAC) are experimentally verified, with a view to accounting for differences between the plant and the reference model once control has been applied. MRAC is applied both experimentally and analytically to a single-degree-of-freedom system, as well as analytically to a MIMO system having controlled differences between the reference model and the plant. The control effort is found to be sensitive to differences between the plant and the reference model.

  2. Application of Interface Technology in Progressive Failure Analysis of Composite Panels

    NASA Technical Reports Server (NTRS)

    Sleight, D. W.; Lotts, C. G.

    2002-01-01

    A progressive failure analysis capability using interface technology is presented. The capability has been implemented in the COMET-AR finite element analysis code developed at the NASA Langley Research Center and is demonstrated on composite panels. The composite panels are analyzed for damage initiation and propagation from initial loading to final failure using a progressive failure analysis capability that includes both geometric and material nonlinearities. Progressive failure analyses are performed on conventional models and interface technology models of the composite panels. Analytical results and the computational effort of the analyses are compared for the conventional models and interface technology models. The analytical results predicted with the interface technology models are in good correlation with the analytical results using the conventional models, while significantly reducing the computational effort.

  3. Manufacturing data analytics using a virtual factory representation.

    PubMed

    Jain, Sanjay; Shao, Guodong; Shin, Seung-Jun

    2017-01-01

    Large manufacturers have been using simulation to support decision-making for design and production. With the advancement of technologies and the emergence of big data, however, simulation can be utilised to perform and support data analytics for associated performance gains. This requires not only significant model development expertise but also huge data collection and analysis efforts. This paper presents an approach, within the frameworks of Design Science Research Methodology and prototyping, to address the challenge of increasing the use of modelling, simulation and data analytics in manufacturing by reducing the development effort. Manufacturing simulation models are presented both as data analytics applications in themselves and as support for other data analytics applications, serving as data generators and as a tool for validation. The virtual factory concept is presented as the vehicle for manufacturing modelling and simulation. The virtual factory goes beyond traditional simulation models of factories to include multi-resolution modelling capabilities, thus allowing analysis at varying levels of detail. A path is proposed for implementing the virtual factory concept that builds on developments in technologies and standards. A virtual machine prototype is provided as a demonstration of the use of a virtual representation for manufacturing data analytics.

  4. The cost of model reference adaptive control - Analysis, experiments, and optimization

    NASA Technical Reports Server (NTRS)

    Messer, R. S.; Haftka, R. T.; Cudney, H. H.

    1993-01-01

    In this paper the performance of Model Reference Adaptive Control (MRAC) is studied in numerical simulations and verified experimentally with the objective of understanding how differences between the plant and the reference model affect the control effort. MRAC is applied analytically and experimentally to a single-degree-of-freedom system and analytically to a MIMO system with controlled differences between the model and the plant. It is shown that the control effort is sensitive to differences between the plant and the reference model. The effects of increased damping in the reference model are considered, and it is shown that requiring the controller to provide increased damping actually decreases the required control effort when differences between the plant and reference model exist. This result is useful because a natural first attempt to counteract the increased control effort due to differences between the plant and reference model might be to require less damping; however, this would actually increase the control effort. Optimization of weighting matrices is shown to help reduce the increase in required control effort, although it was found that the optimization eventually resulted in a design that required an extremely high sampling rate for successful realization.
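
    The mismatch sensitivity that records 1 and 4 describe can be sketched with a minimal MIT-rule MRAC simulation for a first-order plant. This is an illustrative toy only, not the authors' implementation: the plant, reference model, square-wave command, and adaptation gain below are all assumed for the sketch.

```python
def simulate_mrac(a_plant, b_plant, a_ref=2.0, b_ref=2.0,
                  gamma=1.0, dt=1e-3, t_end=20.0):
    """First-order plant y' = -a*y + b*u under MIT-rule MRAC, tracking
    the reference model ym' = -a_ref*ym + b_ref*r for a square-wave
    command r. Returns (control effort = integral of u^2, mean |error|
    over the first 5 s, mean |error| over the last 5 s)."""
    n = int(t_end / dt)
    y = ym = 0.0
    th1 = th2 = 0.0          # adaptive feedforward and feedback gains
    effort = err_early = err_late = 0.0
    for k in range(n):
        t = k * dt
        r = 1.0 if t % 10.0 < 5.0 else -1.0   # square-wave command
        u = th1 * r - th2 * y                 # control law
        e = y - ym                            # model-following error
        # MIT rule: move the gains down the gradient of e^2, with the
        # sensitivities crudely approximated by r and y
        th1 -= dt * gamma * e * r
        th2 += dt * gamma * e * y
        y += dt * (-a_plant * y + b_plant * u)    # plant (forward Euler)
        ym += dt * (-a_ref * ym + b_ref * r)      # reference model
        effort += u * u * dt
        if t < 5.0:
            err_early += abs(e) * dt / 5.0
        elif t >= t_end - 5.0:
            err_late += abs(e) * dt / 5.0
    return effort, err_early, err_late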

  5. Test of a potential link between analytic and nonanalytic category learning and automatic, effortful processing.

    PubMed

    Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J

    2001-08-01

    The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that contrary to prediction strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy and challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.

  6. MODELING TO EVOLVE UNDERSTANDING OF THE SHALLOW GROUND WATER FLOW SYSTEM BENEATH THE LIZZIE RESEARCH SITE, NC

    EPA Science Inventory

    The purpose of the modeling effort presented here is to evolve a conceptual model of ground-water flow at the Lizzie, NC research site using analytic solutions and field observations. The resulting analytic element parameterization of boundary conditions, aquifer transmissivitie...

  7. Composite material bend-twist coupling for wind turbine blade applications

    NASA Astrophysics Data System (ADS)

    Walsh, Justin M.

    Current efforts in wind turbine blade design seek to employ bend-twist coupling of composite materials for passive power control by twisting blades to feather. Past efforts in this area have proved problematic, especially in the formulation of the bend-twist coupling coefficient alpha. Kevlar/epoxy, carbon/epoxy and glass/epoxy specimens were manufactured to study bend-twist coupling, from which numerical and analytical models could be verified. Finite element analysis was implemented to evaluate the effects of fiber orientation and material properties on coupling magnitude. An analytical/empirical model was then derived to describe the numerical results and serve as a replacement for the commonly used coupling coefficient alpha. Through the results from the numerical and analytical models, a foundation is provided for the aeroelastic design of wind turbine blades utilizing biased composite materials.

  8. Thermodynamic analysis and subscale modeling of space-based orbit transfer vehicle cryogenic propellant resupply

    NASA Technical Reports Server (NTRS)

    Defelice, David M.; Aydelott, John C.

    1987-01-01

    The resupply of cryogenic propellants is an enabling technology for space-based orbit transfer vehicles. As part of the NASA Lewis ongoing efforts in microgravity fluid management, thermodynamic analysis and subscale modeling techniques were developed to support an on-orbit test bed for cryogenic fluid management technologies. Analytical results have shown that subscale experimental modeling of liquid resupply can be used to validate analytical models when the appropriate target temperature is selected to relate the model to its prototype system. Further analyses were used to develop a thermodynamic model of the tank chilldown process, which is required prior to the no-vent fill operation. These efforts were incorporated into two FORTRAN programs which were used to present preliminary analytical results.

  9. The “2T” ion-electron semi-analytic shock solution for code-comparison with xRAGE: A report for FY16

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, Jim Michael

    2016-10-05

    This report documents an effort to generate the semi-analytic "2T" ion-electron shock solution developed in the paper by Masser, Wohlbier, and Lowrie, and the initial attempts to understand how to use this solution as a code-verification tool for one of LANL's ASC codes, xRAGE. Most of the work so far has gone into generating the semi-analytic solution. Considerable effort will go into understanding both how to write the xRAGE input deck so that it matches the boundary conditions imposed by the solution, and what physics models must be implemented within the semi-analytic solution itself to match the model assumptions inherent within xRAGE. Therefore, most of this report focuses on deriving the equations for the semi-analytic 1D-planar time-independent "2T" ion-electron shock solution, and is written in a style intended to provide clear guidance for anyone writing their own solver.

  10. Physiological and Anatomical Visual Analytics (PAVA) Background

    EPA Pesticide Factsheets

    The need to efficiently analyze human chemical disposition data from in vivo studies or in silico PBPK modeling efforts, and to see complex disposition data in a logical manner, has created a unique opportunity for visual analytics applied to PAD.

  11. Sharing the Data along with the Responsibility: Examining an Analytic Scale-Based Model for Assessing School Climate.

    ERIC Educational Resources Information Center

    Shindler, John; Taylor, Clint; Cadenas, Herminia; Jones, Albert

    This study was a pilot effort to examine the efficacy of an analytic trait scale school climate assessment instrument and democratic change system in two urban high schools. Pilot study results indicate that the instrument shows promise, exhibiting high levels of validity and reliability. In addition, the analytic trait format…

  12. Evaluation of air traffic control models and simulations.

    DOT National Transportation Integrated Search

    1971-06-01

    Approximately two hundred reports were identified as describing Air Traffic Control (ATC) modeling and simulation efforts. Of these, about ninety analytical and simulation models dealing with virtually all aspects of ATC were formally evaluated. The ...

  13. Toward a Visualization-Supported Workflow for Cyber Alert Management using Threat Models and Human-Centered Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franklin, Lyndsey; Pirrung, Megan A.; Blaha, Leslie M.

    Cyber network analysts follow complex processes in their investigations of potential threats to their network. Much research is dedicated to providing automated tool support in the effort to make their tasks more efficient, accurate, and timely. This tool support comes in a variety of implementations, from machine learning algorithms that monitor streams of data to visual analytic environments for exploring rich and noisy data sets. Cyber analysts, however, often speak of a need for tools which help them merge the data they already have and help them establish appropriate baselines against which to compare potential anomalies. Furthermore, existing threat models that cyber analysts regularly use to structure their investigations are not often leveraged in support tools. We report on our work with cyber analysts to understand their analytic process and how one such model, the MITRE ATT&CK Matrix [32], is used to structure their analytic thinking. We present our efforts to map specific data needed by analysts into the threat model to inform our eventual visualization designs. We examine the data mapping for gaps where the threat model is under-supported by either data or tools, and discuss these gaps as potential design spaces for future research efforts. We also discuss the design of a prototype tool that combines machine-learning and visualization components to support cyber analysts working with this threat model.

  14. Design and Analysis of a Preconcentrator for the ChemLab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WONG,CHUNGNIN C.; FLEMMING,JEB H.; MANGINELL,RONALD P.

    2000-07-17

    Preconcentration is a critical analytical procedure when designing a microsystem for trace chemical detection, because it can purify a sample mixture and boost the small analyte concentration to a much higher level, allowing a better analysis. This paper describes the development of a micro-fabricated planar preconcentrator for the µChemLab™ at Sandia. To guide the design, an analytical model has been developed to predict the analyte transport, adsorption, and desorption processes in the preconcentrator. Experiments have also been conducted to analyze the adsorption and desorption process and to validate the model. This combined effort of modeling, simulation, and testing has led us to build a reliable, efficient preconcentrator with good performance.

  15. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems, accounting for biochemical reactions coupled to contaminant transport, is a valuable tool both for the field engineer/planner with limited computational resources and for the expert computational researcher less constrained by time and computer power. Several analytical and numerical computer models have been and are being developed to cover the practical needs put forth by users across this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.
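
    The screening-versus-resolution tradeoff described above can be seen in miniature with the simplest biodegradation kernel, first-order decay, where the closed-form solution is exact and a numerical integrator approximates it at a cost that grows with the demanded accuracy. The parameter values are illustrative, not drawn from the chapter.

```python
import math

def analytic_decay(c0, k, t):
    """Closed-form first-order biodegradation, C(t) = C0 * exp(-k*t):
    instant to evaluate -- the 'screening tool' end of the spectrum."""
    return c0 * math.exp(-k * t)

def numerical_decay(c0, k, t, steps=10000):
    """Explicit-Euler integration of dC/dt = -k*C -- a stand-in for the
    numerical end, where accuracy is bought with computational effort."""
    dt = t / steps
    c = c0
    for _ in range(steps):
        c += dt * (-k * c)   # one Euler step of the decay ODE
    return c
```

Real reactive-transport codes couple many such reactions to advection and dispersion, which is where the computational cost diverges; the tradeoff is already visible here, though: halving `steps` roughly doubles the Euler error, while the analytic evaluation stays essentially free.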

  16. Analytical prediction of digital signal crosstalk of FCC

    NASA Technical Reports Server (NTRS)

    Belleisle, A. P.

    1972-01-01

    The results are presented of a study effort whose aim was the development of accurate means of analyzing and predicting signal crosstalk in multi-wire digital data cables. A complete analytical model is developed for n + 1 wire systems of uniform transmission lines with arbitrary linear boundary conditions. In addition, the minimum set of parameter measurements required for application of the model is presented. Comparisons between crosstalk predicted by this model and actual measured crosstalk are shown for a six-conductor ribbon cable.

  17. Highlights of Transient Plume Impingement Model Validation and Applications

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael

    2011-01-01

    This paper describes highlights of an ongoing validation effort conducted to assess the viability of applying a set of analytic point source transient free molecule equations to model behavior ranging from molecular effusion to rocket plumes. The validation effort includes encouraging comparisons to both steady and transient studies involving experimental data and direct simulation Monte Carlo results. Finally, this model is applied to describe features of two exotic transient scenarios involving NASA Goddard Space Flight Center satellite programs.

  18. Taxometric and Factor Analytic Models of Anxiety Sensitivity among Youth: Exploring the Latent Structure of Anxiety Psychopathology Vulnerability

    ERIC Educational Resources Information Center

    Bernstein, Amit; Zvolensky, Michael J.; Stewart, Sherry; Comeau, Nancy

    2007-01-01

    This study represents an effort to better understand the latent structure of anxiety sensitivity (AS), a well-established affect-sensitivity individual difference factor, among youth by employing taxometric and factor analytic approaches in an integrative manner. Taxometric analyses indicated that AS, as indexed by the Child Anxiety Sensitivity…

  19. A model to inform management actions as a response to chytridiomycosis-associated decline

    USGS Publications Warehouse

    Converse, Sarah J.; Bailey, Larissa L.; Mosher, Brittany A.; Funk, W. Chris; Gerber, Brian D.; Muths, Erin L.

    2017-01-01

    Decision-analytic models provide forecasts of how systems of interest will respond to management. These models can be parameterized using empirical data, but sometimes require information elicited from experts. When evaluating the effects of disease in species translocation programs, expert judgment is likely to play a role because complete empirical information will rarely be available. We illustrate development of a decision-analytic model built to inform decision-making regarding translocations and other management actions for the boreal toad (Anaxyrus boreas boreas), a species with declines linked to chytridiomycosis caused by Batrachochytrium dendrobatidis (Bd). Using the model, we explored the management implications of major uncertainties in this system, including whether there is a genetic basis for resistance to pathogenic infection by Bd, how translocation can best be implemented, and the effectiveness of efforts to reduce the spread of Bd. Our modeling exercise suggested that while selection for resistance to pathogenic infection by Bd could increase numbers of sites occupied by toads, and translocations could increase the rate of toad recovery, efforts to reduce the spread of Bd may have little effect. We emphasize the need to continue developing and parameterizing models necessary to assess management actions for combating chytridiomycosis-associated declines.

  20. Theoretical basis to measure the impact of short-lasting control of an infectious disease on the epidemic peak

    PubMed Central

    2011-01-01

    Background While many pandemic preparedness plans have promoted disease control efforts to lower and delay an epidemic peak, analytical methods for determining the required control effort and making statistical inferences have yet to be developed. As a first step to address this issue, we present a theoretical basis on which to assess the impact of an early intervention on the epidemic peak, employing a simple epidemic model. Methods We focus on estimating the impact of an early control effort (e.g. unsuccessful containment), assuming that the transmission rate abruptly increases when control is discontinued. We provide analytical expressions for the magnitude and time of the epidemic peak, employing approximate logistic and logarithmic-form solutions for the latter. Empirical influenza data (H1N1-2009) in Japan are analyzed to estimate the effect of the summer holiday period in lowering and delaying the peak in 2009. Results Our model estimates that the epidemic peak of the 2009 pandemic was delayed by 21 days due to the summer holiday. The decline in the peak appears to be a nonlinear function of the control-associated reduction in the reproduction number, and the peak delay is shown to depend critically on the fraction of initially immune individuals. Conclusions The proposed modeling approaches offer methodological avenues to assess empirical data and to objectively estimate the required control effort to lower and delay an epidemic peak. Analytical findings support a critical need to conduct population-wide serological surveys as a prior requirement for estimating the time of peak. PMID:21269441
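
    The qualitative effect the study quantifies, that an early, short-lived reduction in transmission delays the epidemic peak, can be sketched with a minimal SIR model. All parameter values below (transmission and recovery rates, control strength and duration) are illustrative assumptions, not estimates from the H1N1-2009 data.

```python
def sir_peak(beta_fn, gamma=0.25, i0=1e-4, dt=0.01, t_end=250.0):
    """Forward-Euler SIR model with a time-varying transmission rate
    beta_fn(t), fractions of a closed population.
    Returns (peak prevalence, time of peak)."""
    s, i = 1.0 - i0, i0
    peak, t_peak, t = i, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        new_inf = beta_fn(t) * s * i * dt   # new infections this step
        s -= new_inf
        i += new_inf - gamma * i * dt       # infections minus recoveries
        t += dt
        if i > peak:
            peak, t_peak = i, t
    return peak, t_peak

beta0 = 0.5                                        # uncontrolled transmission
peak_b, tp_b = sir_peak(lambda t: beta0)           # no intervention
# short-lasting control: transmission cut to 0.15 for the first 40 days,
# then abruptly restored when control is discontinued
peak_c, tp_c = sir_peak(lambda t: 0.15 if t < 40.0 else beta0)
```

Under these assumptions the controlled run peaks substantially later than the baseline; how much the peak height falls depends on how many susceptibles are removed during the control period, echoing the paper's point that the peak delay depends critically on the initially immune fraction.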

  1. Survey of Fire Modeling Efforts with Application to Transportation Vehicles

    DOT National Transportation Integrated Search

    1981-07-01

    This report presents the results of a survey of analytical fire models with applications pertinent to fires in the compartments of transportation vehicles; a brief discussion of the background of fire phenomena and an overview of various fire modelin...

  2. Taxometric and Factor Analytic Models of Anxiety Sensitivity: Integrating Approaches to Latent Structural Research

    ERIC Educational Resources Information Center

    Bernstein, Amit; Zvolensky, Michael J.; Norton, Peter J.; Schmidt, Norman B.; Taylor, Steven; Forsyth, John P.; Lewis, Sarah F.; Feldner, Matthew T.; Leen-Feldner, Ellen W.; Stewart, Sherry H.; Cox, Brian

    2007-01-01

    This study represents an effort to better understand the latent structure of anxiety sensitivity (AS), as indexed by the 16-item Anxiety Sensitivity Index (ASI; S. Reiss, R. A. Peterson, M. Gursky, & R. J. McNally, 1986), by using taxometric and factor-analytic approaches in an integrative manner. Taxometric analyses indicated that AS has a…

  3. Two-dimensional numerical simulation of a Stirling engine heat exchanger

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir B.; Tew, Roy C.; Dudenhoefer, James E.

    1989-01-01

    The first phase of an effort to develop multidimensional models of Stirling engine components is described; the ultimate goal is to model an entire engine working space. More specifically, parallel-plate and tubular heat exchanger models are described, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects). The model assumes laminar, incompressible flow with constant thermophysical properties; in addition, a constant axial temperature gradient is imposed. The governing equations describing the model were solved using the Crank-Nicolson finite-difference scheme. Model predictions were compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement with the available analytical solutions was obtained both for flow in circular tubes and for flow between parallel plates, and the computed heat transfer results are in good agreement with the analytical heat transfer results for parallel plates.
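
    The verification pattern the abstract describes, checking a Crank-Nicolson finite-difference solution against a known analytical solution, can be sketched for the simplest diffusion problem. The geometry and parameters here are illustrative and are not the engine heat-exchanger model itself.

```python
import numpy as np

def crank_nicolson_mode_decay(alpha=1.0, nx=51, dt=1e-3, t_end=0.1):
    """Crank-Nicolson solution of u_t = alpha * u_xx on [0, 1] with
    u(0) = u(1) = 0, starting from the first sine mode, which decays
    analytically as exp(-alpha * pi**2 * t). Returns the numerical and
    analytical mid-channel values at t_end."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    u = np.sin(np.pi * x)
    n = nx - 2                            # interior points
    r = alpha * dt / (2.0 * dx * dx)
    L = (-2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))   # discrete Laplacian (interior)
    A = np.eye(n) - r * L                 # implicit half of the step
    B = np.eye(n) + r * L                 # explicit half of the step
    for _ in range(int(round(t_end / dt))):
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])
    analytic = np.exp(-alpha * np.pi ** 2 * t_end)   # value at x = 0.5
    return u[nx // 2], analytic
```

The agreement between the two return values mirrors the check the authors performed: the discrete operator's slowest mode decays at a rate that converges to the analytical value as dx and dt shrink.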

  4. Compelling evidence for Lucky Survivor and gas phase protonation: the unified MALDI analyte protonation mechanism.

    PubMed

    Jaskolla, Thorsten W; Karas, Michael

    2011-06-01

    This work experimentally verifies and proves the two long since postulated matrix-assisted laser desorption/ionization (MALDI) analyte protonation pathways known as the Lucky Survivor and the gas phase protonation model. Experimental differentiation between the predicted mechanisms becomes possible by the use of deuterated matrix esters as MALDI matrices, which are stable under typical sample preparation conditions and generate deuteronated reagent ions, including the deuterated and deuteronated free matrix acid, only upon laser irradiation in the MALDI process. While the generation of deuteronated analyte ions proves the gas phase protonation model, the detection of protonated analytes by application of deuterated matrix compounds without acidic hydrogens proves the survival of analytes precharged from solution in accordance with the predictions from the Lucky Survivor model. The observed ratio of the two analyte ionization processes depends on the applied experimental parameters as well as the nature of analyte and matrix. Increasing laser fluences and lower matrix proton affinities favor gas phase protonation, whereas more quantitative analyte protonation in solution and intramolecular ion stabilization leads to more Lucky Survivors. The presented results allow for a deeper understanding of the fundamental processes causing analyte ionization in MALDI and may alleviate future efforts for increasing the analyte ion yield.

  5. Exporting the Colombian Model: Comparing Law Enforcement Strategies Towards Security and Stability Operations in Colombia and Mexico

    DTIC Science & Technology

    2014-06-01

    Belen Boville provides a ... Nathan Leites and Charles Wolf, Rebellion and Authority: An Analytic Essay ... Narcocorridos are so influential that the Mexican government has banned such music on public radio. Despite the government's efforts, the lyrics ... and Wolf Jr., Charles. Rebellion and Authority: An Analytic Essay on Insurgent Conflicts. Chicago: Markham Publishing Company, 1970. Livingstone

  6. Terry Turbopump Analytical Modeling Efforts in Fiscal Year 2016 – Progress Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Douglas; Ross, Kyle; Cardoni, Jeffrey N

    This document details the Fiscal Year 2016 modeling efforts to define the true operating limitations (margins) of the Terry turbopump systems used in the nuclear industry, in support of the Milestone 3 (full-scale component) and Milestone 4 (Terry turbopump basic science) experiments. The overall multinational-sponsored program creates the technical basis to: (1) reduce and defer additional utility costs, (2) simplify plant operations, and (3) provide a better understanding of the true margin, which could reduce the overall risk of operations.

  7. Analytical study of the heat loss attenuation by clothing on thermal manikins under radiative heat loads.

    PubMed

    Den Hartog, Emiel A; Havenith, George

    2010-01-01

    For wearers of protective clothing in radiation environments there are no quantitative guidelines available for the effect of a radiative heat load on heat exchange. Under the European Union funded project ThermProtect, an analytical effort was defined to address the issue of radiative heat load while wearing protective clothing. As much information has become available within the ThermProtect project from thermal manikin experiments in thermal radiation environments, these sets of experimental data were used to verify the analytical approach. The analytical approach provided a good prediction of the heat loss in the manikin experiments; 95% of the variance was explained by the model. The model has not yet been validated at high radiative heat loads and neglects some physical properties of the radiation emissivity. Still, it offers a pragmatic approximation and may be useful for practical implementation in protective clothing standards for moderate thermal radiation environments.

  8. Predictive analytics and child protection: constraints and opportunities.

    PubMed

    Russell, Jesse

    2015-08-01

    This paper considers how predictive analytics might inform, assist, and improve decision making in child protection. Predictive analytics draws on recent increases in data quantity and diversity, along with advances in computing technology. While the use of data and statistical modeling is not new to child protection decision making, its use in child protection is experiencing growth, and efforts to leverage predictive analytics for better decision making in child protection are increasing. Past experiences, constraints, and opportunities are reviewed. For predictive analytics to make the most impact on child protection practice and outcomes, it must embrace established criteria of validity, equity, reliability, and usefulness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Estimating the Health Effects of Greenhouse Gas Mitigation Strategies: Addressing Parametric, Model, and Valuation Challenges

    PubMed Central

    Hess, Jeremy J.; Ebi, Kristie L.; Markandya, Anil; Balbus, John M.; Wilkinson, Paul; Haines, Andy; Chalabi, Zaid

    2014-01-01

    Background: Policy decisions regarding climate change mitigation are increasingly incorporating the beneficial and adverse health impacts of greenhouse gas emission reduction strategies. Studies of such co-benefits and co-harms involve modeling approaches requiring a range of analytic decisions that affect the model output. Objective: Our objective was to assess analytic decisions regarding model framework, structure, choice of parameters, and handling of uncertainty when modeling health co-benefits, and to make recommendations for improvements that could increase policy uptake. Methods: We describe the assumptions and analytic decisions underlying models of mitigation co-benefits, examining their effects on modeling outputs, and consider tools for quantifying uncertainty. Discussion: There is considerable variation in approaches to valuation metrics, discounting methods, uncertainty characterization and propagation, and assessment of low-probability/high-impact events. There is also variable inclusion of adverse impacts of mitigation policies, and limited extension of modeling domains to include implementation considerations. Going forward, co-benefits modeling efforts should be carried out in collaboration with policy makers; these efforts should include the full range of positive and negative impacts and critical uncertainties, as well as a range of discount rates, and should explicitly characterize uncertainty. We make recommendations to improve the rigor and consistency of modeling of health co-benefits. Conclusion: Modeling health co-benefits requires systematic consideration of the suitability of model assumptions, of what should be included and excluded from the model framework, and how uncertainty should be treated. 
Increased attention to these and other analytic decisions has the potential to increase the policy relevance and application of co-benefits modeling studies, potentially helping policy makers to maximize mitigation potential while simultaneously improving health. Citation: Remais JV, Hess JJ, Ebi KL, Markandya A, Balbus JM, Wilkinson P, Haines A, Chalabi Z. 2014. Estimating the health effects of greenhouse gas mitigation strategies: addressing parametric, model, and valuation challenges. Environ Health Perspect 122:447–455; http://dx.doi.org/10.1289/ehp.1306744 PMID:24583270
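The abstract's point that co-benefits models should report a range of discount rates can be made concrete with a minimal sketch. All cash-flow numbers below are hypothetical, not drawn from the study:

```python
# Minimal sketch: how the choice of discount rate changes the present value
# of a stream of future health co-benefits. All numbers are hypothetical.

def present_value(annual_benefit, years, rate):
    """Discounted sum of a constant annual benefit over `years` years."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

benefit = 1.0e6   # hypothetical $1M/year in avoided health costs
horizon = 50      # years

for rate in (0.0, 0.03, 0.07):
    pv = present_value(benefit, horizon, rate)
    print(f"rate={rate:.0%}  PV=${pv / 1e6:.1f}M")
```

Even this toy calculation shows why the paper calls for explicit sensitivity analysis: moving from a 3% to a 7% rate roughly halves the valuation of a 50-year benefit stream.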

  10. Examining hydrogen transitions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plotkin, S. E.; Energy Systems

    2007-03-01

This report describes the results of an effort to identify key analytic issues associated with modeling a transition to hydrogen as a fuel for light duty vehicles, and using insights gained from this effort to suggest ways to improve ongoing modeling efforts. The study reported on here examined multiple hydrogen scenarios reported in the literature, identified modeling issues associated with those scenario analyses, and examined three DOE-sponsored hydrogen transition models in the context of those modeling issues. The three hydrogen transition models are HyTrans (contractor: Oak Ridge National Laboratory), MARKAL/DOE (Brookhaven National Laboratory), and NEMS-H2 (OnLocation, Inc.). The goals of these models are (1) to help DOE improve its R&D effort by identifying key technology and other roadblocks to a transition and testing its technical program goals to determine whether they are likely to lead to the market success of hydrogen technologies, (2) to evaluate alternative policies to promote a transition, and (3) to estimate the costs and benefits of alternative pathways to hydrogen development.

  11. Foreign body impact event damage formation in composite structures

    NASA Technical Reports Server (NTRS)

    Bucinell, Ronald B.

    1994-01-01

    This report discusses a methodology that can be used to assess the effect of foreign body impacts on composite structural integrity. The described effort focuses on modeling the effect of a central impact on a 5 3/4 inch filament wound test article. The discussion will commence with details of the material modeling that was used to establish the input properties for the analytical model. This discussion is followed by an overview of the impact assessment methodology. The progress on this effort to date is reviewed along with a discussion of tasks that have yet to be completed.

  12. Spatially-explicit models of global tree density.

    PubMed

    Glick, Henry B; Bettigole, Charlie; Maynard, Daniel S; Covey, Kristofer R; Smith, Jeffrey R; Crowther, Thomas W

    2016-08-16

    Remote sensing and geographic analysis of woody vegetation provide means of evaluating the distribution of natural resources, patterns of biodiversity and ecosystem structure, and socio-economic drivers of resource utilization. While these methods bring geographic datasets with global coverage into our day-to-day analytic spheres, many of the studies that rely on these strategies do not capitalize on the extensive collection of existing field data. We present the methods and maps associated with the first spatially-explicit models of global tree density, which relied on over 420,000 forest inventory field plots from around the world. This research is the result of a collaborative effort engaging over 20 scientists and institutions, and capitalizes on an array of analytical strategies. Our spatial data products offer precise estimates of the number of trees at global and biome scales, but should not be used for local-level estimation. At larger scales, these datasets can contribute valuable insight into resource management, ecological modelling efforts, and the quantification of ecosystem services.

  13. Particle Engulfment and Pushing By Solidifying Interfaces

    NASA Technical Reports Server (NTRS)

    2003-01-01

The study of particle behavior at solid/liquid interfaces (SLIs) is at the center of the Particle Engulfment and Pushing (PEP) research program. Interactions of particles with SLIs have been of interest since the 1960s, starting with geological observations, i.e., frost heaving. Since then, this field of research has become significant to such diverse areas as metal matrix composite materials, fabrication of superconductors, and inclusion control in steels. The PEP research effort is geared towards understanding the fundamental physics of the interaction between particles and a planar SLI. Experimental work, including 1-g and mu-g experiments, accompanies the development of analytical and numerical models. The experimental work comprised substantial groundwork with aluminum (Al) and zinc (Zn) matrices containing spherical zirconia particles, mu-g experiments with metallic Al matrices, and the use of transparent organic metal-analogue materials. The modeling efforts have grown from the initial steady-state analytical model to dynamic models, accounting for the initial acceleration of a particle at rest by an advancing SLI. To gain a more comprehensive understanding, numerical models were developed to account for the influence of the thermal and solutal fields. Current efforts are geared towards coupling the diffusive 2-D front tracking model with a fluid flow model to account for differences in the physics of interaction between 1-g and mu-g environments. A significant amount of this theoretical investigation has been and is being performed by co-investigators at NASA MSFC.

  14. Shuttle Communications and Tracking Systems Modeling and TDRSS Link Simulations Studies

    NASA Technical Reports Server (NTRS)

    Chie, C. M.; Dessouky, K.; Lindsey, W. C.; Tsang, C. S.; Su, Y. T.

    1985-01-01

    An analytical simulation package (LinCsim) which allows the analytical verification of data transmission performance through TDRSS satellites was modified. The work involved the modeling of the user transponder, TDRS, TDRS ground terminal, and link dynamics for forward and return links based on the TDRSS performance specifications (4) and the critical design reviews. The scope of this effort has recently been expanded to include the effects of radio frequency interference (RFI) on the bit error rate (BER) performance of the S-band return links. The RFI environment and the modified TDRSS satellite and ground station hardware are being modeled in accordance with their description in the applicable documents.

  15. Analysis of structural dynamic data from Skylab. Volume 1: Technical discussion

    NASA Technical Reports Server (NTRS)

    Demchak, L.; Harcrow, H.

    1976-01-01

    A compendium of Skylab structural dynamics analytical and test programs is presented. These programs are assessed to identify lessons learned from the structural dynamic prediction effort and to provide guidelines for future analysts and program managers of complex spacecraft systems. It is a synopsis of the structural dynamic effort performed under the Skylab Integration contract and specifically covers the development, utilization, and correlation of Skylab Dynamic Orbital Models.

  16. LOX/hydrocarbon rocket engine analytical design methodology development and validation. Volume 1: Executive summary and technical narrative

    NASA Technical Reports Server (NTRS)

    Pieper, Jerry L.; Walker, Richard E.

    1993-01-01

During the past three decades, an enormous amount of resources was expended in the design and development of Liquid Oxygen/Hydrocarbon and Hydrogen (LOX/HC and LOX/H2) rocket engines. A significant portion of these resources was used to develop and demonstrate the performance and combustion stability of each new engine. During these efforts, many analytical and empirical models were developed that characterize design parameters and combustion processes that influence performance and stability. Many of these models are suitable as design tools, but they have not been assembled into an industry-wide usable analytical design methodology. The objective of this program was to assemble existing performance and combustion stability models into a usable methodology capable of producing high-performing and stable LOX/hydrocarbon and LOX/hydrogen propellant booster engines.

  17. Impact of a Flexible Evaluation System on Effort and Timing of Study

    ERIC Educational Resources Information Center

    Pacharn, Parunchana; Bay, Darlene; Felton, Sandra

    2012-01-01

    This paper examines results of a flexible grading system that allows each student to influence the weight allocated to each performance measure. We construct a stylized model to determine students' optimal responses. Our analytical model predicts different optimal strategies for students with varying academic abilities: a frontloading strategy for…

  18. Remediation Evaluation Model for Chlorinated Solvents (REMChlor)

    EPA Science Inventory

    A new analytical solution has been developed for simulating the transient effects of groundwater source and plume remediation. This development was performed as part of a Strategic Environmental Research and Development Program (SERDP) research project, which was a joint effort ...

  19. A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.

    PubMed

    Yao, Yijun; Verginelli, Iason; Suuberg, Eric M

    2017-05-01

In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor air concentration attenuation by simulating the two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogeneous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogeneous groundwater sources. By contrast, with the two-layer approach (capillary fringe and vadose zone) employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be higher than those estimated by the numerical model by up to two orders of magnitude. In short, the model proposed in this work can represent an easy-to-use tool that simulates the subsurface soil gas concentration in layered soils overlying a homogeneous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.
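As a rough illustration of why vertical layering matters, the sketch below computes the series-resistance effective diffusion coefficient of stacked soil layers, in the spirit of the two-layer capillary-fringe/vadose-zone treatment the paper compares against. This is only one ingredient of a full vapor intrusion model, and the layer values are hypothetical:

```python
# Effective (series) diffusion coefficient across stacked soil layers,
# analogous to electrical resistances in series. Layer thicknesses and
# per-layer diffusivities below are hypothetical illustrations.

def effective_diffusivity(layers):
    """layers: list of (thickness_m, D_layer_m2_per_s), source to slab."""
    total_thickness = sum(t for t, _ in layers)
    total_resistance = sum(t / d for t, d in layers)   # series resistances
    return total_thickness / total_resistance

# hypothetical profile: thin, wet capillary fringe under a sandy vadose zone
layers = [(0.3, 1.0e-7),   # capillary fringe: water-filled pores, low D
          (2.7, 5.0e-6)]   # vadose zone: air-filled porosity, higher D
print(f"D_eff = {effective_diffusivity(layers):.2e} m^2/s")
```

The thin low-diffusivity layer dominates the result, which is why collapsing a layered profile into too few layers can misstate the attenuation by a wide margin.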

  20. Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review

    NASA Astrophysics Data System (ADS)

    Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal

    2017-08-01

Machining of steel with hardness above 45 HRC (Rockwell C hardness) is referred to as hard turning. There are numerous models that should be scrutinized and implemented to gain optimum performance in hard turning. Various models of hard turning with a cubic boron nitride tool have been reviewed, in an attempt to identify appropriate empirical and analytical models. Validation of steady-state flank and crater wear models, Usui's wear model, forces from oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear are depicted in this review paper. Effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries, so that an appropriate model can be chosen according to user requirements in hard turning.

  1. Space mapping method for the design of passive shields

    NASA Astrophysics Data System (ADS)

    Sergeant, Peter; Dupré, Luc; Melkebeek, Jan

    2006-04-01

    The aim of the paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational effort as it contains finite element models. The second is less accurate, but it has a low computational effort as it uses an analytical model: the shield is replaced by a number of mutually coupled coils. The currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield fast and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.
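The two-model structure described above can be caricatured in a few lines. The parabolic "fine" and "coarse" models below are made-up stand-ins (the paper's fine model is a finite element computation, its coarse model a coupled-coil circuit), but the loop shows the space-mapping pattern: optimize the cheap model, extract an input mapping that aligns it with the expensive model, and repeat:

```python
# Toy illustration of space mapping on a single design variable.

def fine(x):       # stands in for the expensive finite-element model
    return (x - 2.0) ** 2 + 0.5

def coarse(x):     # cheap analytical surrogate, deliberately misaligned
    return (x - 1.7) ** 2 + 0.5

def argmin_coarse(offset, lo=-5.0, hi=5.0, n=100001):
    """Brute-force minimizer of the shifted surrogate coarse(x + offset)."""
    xs = (lo + i * (hi - lo) / (n - 1) for i in range(n))
    return min(xs, key=lambda x: coarse(x + offset))

def extract_offset(x):
    """Least-squares input mapping: shift coarse to match fine near x."""
    pts = (x - 0.2, x, x + 0.2)
    misfit = lambda o: sum((coarse(p + o) - fine(p)) ** 2 for p in pts)
    return min((o / 1000.0 for o in range(-3000, 3001)), key=misfit)

x = argmin_coarse(0.0)                 # optimum of the raw coarse model
for _ in range(3):                     # space-mapping iterations
    x = argmin_coarse(extract_offset(x))    # only 3 fine evals per iteration
print(f"mapped-coarse optimum x = {x:.3f} (fine-model optimum is 2.0)")
```

The point of the pattern is the evaluation budget: each iteration touches the fine model only a handful of times, while all the heavy searching runs on the cheap surrogate.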

  2. Model of separation performance of bilinear gradients in scanning format counter-flow gradient electrofocusing techniques.

    PubMed

    Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L

    2015-03-01

Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte, which is the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity varies with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (such as bilinear) velocity gradients along the separation channel can improve both peak capacity and separation resolution simultaneously, which cannot be achieved with a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical effort, which can be reduced significantly with the help of analytical models for design optimization and for guiding experimental studies. Therefore, this study focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, this model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) compared to the linear case. This model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
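A back-of-envelope sketch of the trade-off behind the bilinear gradient: at a focal point, the standard gradient-focusing result gives a peak standard deviation sigma = sqrt(D/g), where g is the magnitude of the local velocity gradient, so a bilinear profile can steepen g near the detection point without widening the overall velocity span. This simplified relation and all numbers below are illustrative, not taken from the paper's model:

```python
import math

# Gradient-focused peak width: diffusion broadening balanced against the
# restoring velocity gradient, sigma = sqrt(D / g). Hypothetical numbers.

def peak_sigma(D, g):
    """Standard deviation of a focused peak for diffusivity D, gradient g."""
    return math.sqrt(D / g)

D = 1.0e-10                    # analyte diffusivity, m^2/s (hypothetical)
span, length = 1.0e-4, 0.01    # total velocity drop (m/s) over channel (m)

g_linear = span / length       # single linear gradient
g_steep = 4 * g_linear         # steeper leg of a bilinear profile
for label, g in (("linear", g_linear), ("bilinear steep leg", g_steep)):
    print(f"{label}: sigma = {peak_sigma(D, g) * 1e6:.0f} um")
```

Quadrupling the local gradient halves the peak width, i.e. roughly doubles resolution, consistent in spirit with the up-to-100% improvement the abstract reports.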

  3. Development and application of dynamic simulations of a subsonic wind tunnel

    NASA Technical Reports Server (NTRS)

    Szuch, J. R.; Cole, G. L.; Seidel, R. C.; Arpasi, D. J.

    1986-01-01

    Efforts are currently underway at NASA Lewis to improve and expand ground test facilities and to develop supporting technologies to meet anticipated aeropropulsion research needs. Many of these efforts have been focused on a proposed rehabilitation of the Altitude Wind Tunnel (AWT). In order to insure a technically sound design, an AWT modeling program (both analytical and physical) was initiated to provide input to the AWT final design process. This paper describes the approach taken to develop analytical, dynamic computer simulations of the AWT, and the use of these simulations as test-beds for: (1) predicting the dynamic response characteristics of the AWT, and (2) evaluating proposed AWT control concepts. Plans for developing a portable, real-time simulator for the AWT facility are also described.

  4. Bridging the Gap between Human Judgment and Automated Reasoning in Predictive Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Riensche, Roderick M.; Unwin, Stephen D.

    2010-06-07

Events occur daily that impact the health, security and sustainable growth of our society. If we are to address the challenges that emerge from these events, anticipatory reasoning has to become an everyday activity. Strong advances have been made in using integrated modeling for analysis and decision making. However, a wider impact of predictive analytics is currently hindered by the lack of systematic methods for integrating predictive inferences from computer models with human judgment. In this paper, we present a predictive analytics approach that supports anticipatory analysis and decision-making through a concerted reasoning effort that interleaves human judgment and automated inferences. We describe a systematic methodology for integrating modeling algorithms within a serious gaming environment in which role-playing by human agents provides updates to model nodes and the ensuing model outcomes in turn influence the behavior of the human players. The approach ensures a strong functional partnership between human players and computer models while maintaining a high degree of independence and greatly facilitating the connection between model and game structures.

  5. Quantifying risks with exact analytical solutions of derivative pricing distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Liu, Jing; Wang, Erkang; Wang, Jin

    2017-04-01

Derivative (i.e. option) pricing is essential for modern financial instruments. Despite previous efforts, the exact analytical forms of derivative pricing distributions are still challenging to obtain. In this study, we established a quantitative framework using path integrals to obtain exact analytical solutions of the statistical distribution for bond and bond option pricing under the Vasicek model. We discuss the importance of statistical fluctuations away from the expected option price, characterized by the distribution tail, and their associations with value at risk (VaR). The framework established here is general and can be applied to other financial derivatives for quantifying the underlying statistical distributions.
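For context, the Vasicek short-rate model dr = a(b - r)dt + sigma dW admits a well-known closed-form zero-coupon bond price, the kind of exact analytical result the paper's path-integral framework extends to full pricing distributions. Parameter values below are illustrative only:

```python
import math

# Closed-form Vasicek zero-coupon bond price for dr = a*(b - r)*dt + sigma*dW.
# Parameters are illustrative, not taken from the paper.

def vasicek_bond_price(r, tau, a, b, sigma):
    """Price of a unit zero-coupon bond maturing in tau years, rate r now."""
    B = (1.0 - math.exp(-a * tau)) / a
    A = math.exp((B - tau) * (a * a * b - sigma * sigma / 2.0) / (a * a)
                 - sigma * sigma * B * B / (4.0 * a))
    return A * math.exp(-B * r)

price = vasicek_bond_price(r=0.05, tau=5.0, a=0.15, b=0.04, sigma=0.01)
print(f"5-year bond price: {price:.4f}")
```

A quick sanity check on the formula: with sigma = 0 and r = b, the price collapses to the deterministic discount factor exp(-r * tau).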

  6. A Review of Research on Impulsive Loading of Marine Composites

    NASA Astrophysics Data System (ADS)

    Porfiri, Maurizio; Gupta, Nikhil

Impulsive loading conditions, such as those produced by blast waves, are increasingly recognized as relevant in marine applications. Significant research efforts are directed towards understanding the impulsive loading response of traditional naval materials, such as aluminum and steel, and of advanced composites, such as laminates and sandwich structures. Several analytical studies are directed towards establishing predictive models for the structural response and failure of marine structures under blast loading. In addition, experimental research efforts are focused on characterizing structural response to blast loading. The aim of this review is to provide a general overview of the state of the art in analytical and experimental studies in this field that can serve as a guideline for future research directions. Reported studies cover research sponsored by the Office of Naval Research Solid Mechanics Program along with other worldwide research efforts of relevance to marine applications. These studies have contributed to developing a fundamental knowledge of the mechanics of advanced materials subjected to impulsive loading, which is of interest to all Department of Defense branches.

  7. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1988-01-01

    The initial effort was concentrated on developing the quasi-analytical approach for two-dimensional transonic flow. To keep the problem computationally efficient and straightforward, only the two-dimensional flow was considered and the problem was modeled using the transonic small perturbation equation.

  8. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawton, Craig R.; Miller, Dwight Peter

The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of performance-shaping factors (PSFs) that might affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.

  9. ULTRASONIC STUDIES OF THE FUNDAMENTAL MECHANISMS OF RECRYSTALLIZATION AND SINTERING OF METALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TURNER, JOSEPH A.

    2005-11-30

The purpose of this project was to develop a fundamental understanding of the interaction of an ultrasonic wave with complex media, with specific emphases on recrystallization and sintering of metals. A combined analytical, numerical, and experimental research program was implemented. Theoretical models of elastic wave propagation through these complex materials were developed using stochastic wave field techniques. The numerical simulations focused on finite element wave propagation solutions through complex media. The experimental efforts were focused on corroboration of the models developed and on the development of new experimental techniques. The analytical and numerical research allows the experimental results to be interpreted quantitatively.

  10. Active controls: A look at analytical methods and associated tools

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Adams, W. M., Jr.; Mukhopadhyay, V.; Tiffany, S. H.; Abel, I.

    1984-01-01

    A review of analytical methods and associated tools for active controls analysis and design problems is presented. Approaches employed to develop mathematical models suitable for control system analysis and/or design are discussed. Significant efforts have been expended to develop tools to generate the models from the standpoint of control system designers' needs and develop the tools necessary to analyze and design active control systems. Representative examples of these tools are discussed. Examples where results from the methods and tools have been compared with experimental data are also presented. Finally, a perspective on future trends in analysis and design methods is presented.

  11. Pressurization of cryogens - A review of current technology and its applicability to low-gravity conditions

    NASA Technical Reports Server (NTRS)

    Van Dresar, N. T.

    1992-01-01

    A review of technology, history, and current status for pressurized expulsion of cryogenic tankage is presented. Use of tank pressurization to expel cryogenic fluid will continue to be studied for future spacecraft applications over a range of operating conditions in the low-gravity environment. The review examines experimental test results and analytical model development for quiescent and agitated conditions in normal-gravity followed by a discussion of pressurization and expulsion in low-gravity. Validated, 1-D, finite difference codes exist for the prediction of pressurant mass requirements within the range of quiescent normal-gravity test data. To date, the effects of liquid sloshing have been characterized by tests in normal-gravity, but analytical models capable of predicting pressurant gas requirements remain unavailable. Efforts to develop multidimensional modeling capabilities in both normal and low-gravity have recently occurred. Low-gravity cryogenic fluid transfer experiments are needed to obtain low-gravity pressurized expulsion data. This data is required to guide analytical model development and to verify code performance.

  12. Pressurization of cryogens: A review of current technology and its applicability to low-gravity conditions

    NASA Technical Reports Server (NTRS)

    Vandresar, N. T.

    1992-01-01

    A review of technology, history, and current status for pressurized expulsion of cryogenic tankage is presented. Use of tank pressurization to expel cryogenic fluids will continue to be studied for future spacecraft applications over a range of operating conditions in the low-gravity environment. The review examines experimental test results and analytical model development for quiescent and agitated conditions in normal-gravity, followed by a discussion of pressurization and expulsion in low-gravity. Validated, 1-D, finite difference codes exist for the prediction of pressurant mass requirements within the range of quiescent normal-gravity test data. To date, the effects of liquid sloshing have been characterized by tests in normal-gravity, but analytical models capable of predicting pressurant gas requirements remain unavailable. Efforts to develop multidimensional modeling capabilities in both normal and low-gravity have recently occurred. Low-gravity cryogenic fluid transfer experiments are needed to obtain low-gravity pressurized expulsion data. This data is required to guide analytical model development and to verify code performance.

  13. A geometric ultraviolet-B radiation transfer model applied to vegetation canopies

    Treesearch

    Wei Gao; Richard H. Grant; Gordon M. Heisler; James R. Slusser

    2002-01-01

    The decrease in stratospheric ozone (O3) has prompted continued efforts to assess the potential damage to plant and animal life due to enhanced levels of solar ultraviolet (UV)-B (280-320 nm) radiation. The objective of this study was to develop and evaluate an analytical model to simulate the UV-B irradiance loading on horizontal below- canopy...

  14. A Comparative Analysis of Computer End-User Support in the Air Force and Civilian Organizations

    DTIC Science & Technology

    1991-12-01

    This explanation implies a further stratification of end users based on the specific tasks they perform, a new model of application combinations, and a...its support efforts to meet the needs of its end-uiser clientele iore closely. 79 INTEGRATED .9 VERBAL ANALYTIC Figure 14. Test Model of Applications ...The IC Model : IEM, Canada. ...............19 Proliferation of ICs ... ............... 20 Services ... ..................... 21 IC States

  15. Demographics of reintroduced populations: estimation, modeling, and decision analysis

    USGS Publications Warehouse

    Converse, Sarah J.; Moore, Clinton T.; Armstrong, Doug P.

    2013-01-01

    Reintroduction can be necessary for recovering populations of threatened species. However, the success of reintroduction efforts has been poorer than many biologists and managers would hope. To increase the benefits gained from reintroduction, management decision making should be couched within formal decision-analytic frameworks. Decision analysis is a structured process for informing decision making that recognizes that all decisions have a set of components—objectives, alternative management actions, predictive models, and optimization methods—that can be decomposed, analyzed, and recomposed to facilitate optimal, transparent decisions. Because the outcome of interest in reintroduction efforts is typically population viability or related metrics, models used in decision analysis efforts for reintroductions will need to include population models. In this special section of the Journal of Wildlife Management, we highlight examples of the construction and use of models for informing management decisions in reintroduced populations. In this introductory contribution, we review concepts in decision analysis, population modeling for analysis of decisions in reintroduction settings, and future directions. Increased use of formal decision analysis, including adaptive management, has great potential to inform reintroduction efforts. Adopting these practices will require close collaboration among managers, decision analysts, population modelers, and field biologists.

  16. Two-dimensional numerical simulation of a Stirling engine heat exchanger

    NASA Technical Reports Server (NTRS)

    Ibrahim, Mounir; Tew, Roy C.; Dudenhoefer, James E.

    1989-01-01

The first phase of an effort to develop multidimensional models of Stirling engine components is described. The ultimate goal is to model an entire engine working space. Parallel plate and tubular heat exchanger models are described, with emphasis on the central part of the channel (i.e., ignoring hydrodynamic and thermal end effects). The model assumes laminar, incompressible flow with constant thermophysical properties. In addition, a constant axial temperature gradient is imposed. The governing equations describing the model have been solved using the Crank-Nicolson finite-difference scheme. Model predictions are compared with analytical solutions for oscillating/reversing flow and heat transfer in order to check numerical accuracy. Excellent agreement is obtained both for flow in circular tubes and between parallel plates. The computational heat transfer results are in good agreement with the analytical heat transfer results for parallel plates.
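A minimal sketch of the Crank-Nicolson scheme named above, applied to plain 1-D diffusion between two plates held at zero; the paper's oscillating/reversing flow, imposed axial gradient, and property details are omitted:

```python
import math

# One Crank-Nicolson building block: implicit-explicit average of the
# diffusion operator, advanced by solving a tridiagonal system each step.

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cn_step(u, r):
    """Advance u_t = alpha*u_xx one step; r = alpha*dt/(2*dx^2), walls at 0."""
    n = len(u) - 2                       # number of interior unknowns
    rhs = [(1 - 2 * r) * u[j] + r * (u[j - 1] + u[j + 1])
           for j in range(1, n + 1)]
    interior = thomas([-r] * n, [1 + 2 * r] * n, [-r] * n, rhs)
    return [0.0] + interior + [0.0]

# decay of a sine mode, whose exact amplitude is exp(-alpha * pi^2 * t)
nx, alpha, dt = 51, 1.0, 1e-4
dx = 1.0 / (nx - 1)
u = [math.sin(math.pi * j * dx) for j in range(nx)]
for _ in range(100):                     # advance to t = 0.01
    u = cn_step(u, alpha * dt / (2 * dx * dx))
print(f"midplane {u[nx // 2]:.4f} vs exact {math.exp(-math.pi ** 2 * 0.01):.4f}")
```

Comparing the decayed sine mode against its exact solution is the same kind of accuracy check the paper performs against analytical oscillating-flow solutions.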

  17. Elementary Students' Effortful Control and Academic Achievement: The Mediating Role of Teacher-Student Relationship Quality

    PubMed Central

    Hernández, Maciel M.; Valiente, Carlos; Eisenberg, Nancy; Berger, Rebecca H.; Spinrad, Tracy L.; VanSchyndel, Sarah K.; Silva, Kassondra M.; Southworth, Jody; Thompson, Marilyn S.

    2017-01-01

    This study evaluated the association between effortful control in kindergarten and academic achievement one year later (N = 301), and whether teacher–student closeness and conflict in kindergarten mediated the association. Parents, teachers, and observers reported on children's effortful control, and teachers reported on their perceived levels of closeness and conflict with students. Students completed the passage comprehension and applied problems subtests of the Woodcock–Johnson tests of achievement, as well as a behavioral measure of effortful control. Analytical models predicting academic achievement were estimated using a structural equation model framework. Effortful control positively predicted academic achievement even when controlling for prior achievement and other covariates. Mediation hypotheses were tested in a separate model; effortful control positively predicted teacher–student closeness and strongly, negatively predicted teacher–student conflict. Teacher–student closeness and effortful control, but not teacher–student conflict, had small, positive associations with academic achievement. Effortful control also indirectly predicted higher academic achievement through its positive effect on teacher–student closeness and via its positive relation to early academic achievement. The findings suggest that teacher–student closeness is one mechanism by which effortful control is associated with academic achievement. Effortful control was also a consistent predictor of academic achievement, beyond prior achievement levels and controlling for teacher–student closeness and conflict, with implications for intervention programs on fostering regulation and achievement concurrently. PMID:28684888

  18. Elementary Students' Effortful Control and Academic Achievement: The Mediating Role of Teacher-Student Relationship Quality.

    PubMed

    Hernández, Maciel M; Valiente, Carlos; Eisenberg, Nancy; Berger, Rebecca H; Spinrad, Tracy L; VanSchyndel, Sarah K; Silva, Kassondra M; Southworth, Jody; Thompson, Marilyn S

    This study evaluated the association between effortful control in kindergarten and academic achievement one year later (N = 301), and whether teacher-student closeness and conflict in kindergarten mediated the association. Parents, teachers, and observers reported on children's effortful control, and teachers reported on their perceived levels of closeness and conflict with students. Students completed the passage comprehension and applied problems subtests of the Woodcock-Johnson tests of achievement, as well as a behavioral measure of effortful control. Analytical models predicting academic achievement were estimated using a structural equation model framework. Effortful control positively predicted academic achievement even when controlling for prior achievement and other covariates. Mediation hypotheses were tested in a separate model; effortful control positively predicted teacher-student closeness and strongly, negatively predicted teacher-student conflict. Teacher-student closeness and effortful control, but not teacher-student conflict, had small, positive associations with academic achievement. Effortful control also indirectly predicted higher academic achievement through its positive effect on teacher-student closeness and via its positive relation to early academic achievement. The findings suggest that teacher-student closeness is one mechanism by which effortful control is associated with academic achievement. Effortful control was also a consistent predictor of academic achievement, beyond prior achievement levels and controlling for teacher-student closeness and conflict, with implications for intervention programs on fostering regulation and achievement concurrently.
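
The product-of-coefficients logic behind the mediation finding can be sketched with simulated data. This is a hedged illustration only: the generative coefficients, noise levels, and plain-OLS approach below are invented assumptions, whereas the study itself fit a full structural equation model.

```python
import numpy as np

# Simulate the hypothesized chain: effortful control -> closeness -> achievement.
# All coefficients and noise scales below are invented for illustration.
rng = np.random.default_rng(0)
n = 301  # same sample size as the study; the data themselves are simulated
effortful_control = rng.normal(size=n)
closeness = 0.5 * effortful_control + rng.normal(scale=0.8, size=n)
achievement = (0.3 * effortful_control + 0.4 * closeness
               + rng.normal(scale=0.7, size=n))

def ols(y, predictors):
    """Least-squares coefficients [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(closeness, [effortful_control])[1]               # path a: EC -> closeness
b = ols(achievement, [effortful_control, closeness])[2]  # path b: closeness -> achievement
c_prime = ols(achievement, [effortful_control, closeness])[1]  # direct effect of EC
print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```

The indirect effect is the product of the two mediating paths; a nonzero a*b alongside a nonzero c' corresponds to the partial mediation pattern the abstract describes.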

  19. Curved Thermopiezoelectric Shell Structures Modeled by Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Lee, Ho-Jun

    2000-01-01

    "Smart" structures composed of piezoelectric materials may significantly improve the performance of aeropropulsion systems through a variety of vibration, noise, and shape-control applications. The development of analytical models for piezoelectric smart structures is an ongoing, in-house activity at the NASA Glenn Research Center at Lewis Field focused toward the experimental characterization of these materials. Research efforts have been directed toward developing analytical models that account for the coupled mechanical, electrical, and thermal response of piezoelectric composite materials. Current work revolves around implementing thermal effects into a curvilinear-shell finite element code. This enhances capabilities to analyze curved structures and to account for coupling effects arising from thermal effects and the curved geometry. The current analytical model implements a unique mixed multi-field laminate theory to improve computational efficiency without sacrificing accuracy. The mechanics can model both the sensory and active behavior of piezoelectric composite shell structures. Finite element equations are being implemented for an eight-node curvilinear shell element, and numerical studies are being conducted to demonstrate capabilities to model the response of curved piezoelectric composite structures (see the figure).

  20. A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.

    PubMed

    Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J

    2015-05-01

    In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of analyte into the matrix crystals, the sample solutions were prepared without matrix, and care was taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers, and lipids show that detection of analyte ions, which were completely suppressed using the conventional dried-droplet method, could be effectively recovered by using our method. Our findings suggest that the incorporation of analytes in the matrix crystals has an important contributory effect on ASE. By reducing ASE, our method should be useful for the direct MALDI MS analysis of multicomponent mixtures. Copyright © 2015 John Wiley & Sons, Ltd.

  1. Analytical and physical modeling program for the NASA Lewis Research Center's Altitude Wind Tunnel (AWT)

    NASA Technical Reports Server (NTRS)

    Abbott, J. M.; Deidrich, J. H.; Groeneweg, J. F.; Povinelli, L. A.; Reid, L.; Reinmann, J. J.; Szuch, J. R.

    1985-01-01

    An effort is currently underway at the NASA Lewis Research Center to rehabilitate and extend the capabilities of the Altitude Wind Tunnel (AWT). This extended capability will include a maximum test section Mach number of about 0.9 at an altitude of 55,000 ft and a -20 F stagnation temperature (octagonal test section, 20 ft across the flats). In addition, the AWT will include an icing and acoustic research capability. To ensure a technically sound design, an AWT modeling program (both analytical and physical) was initiated to provide essential input to the AWT final design process. This paper describes the modeling program, including the rationale and criteria used in program definition, and presents some early program results.

  2. Novel Multidisciplinary Models Assess the Capabilities of Smart Structures to Manage Vibration, Sound, and Thermal Distortion in Aeropropulsion Components

    NASA Technical Reports Server (NTRS)

    Saravanos, Dimitris A.

    1997-01-01

    The development of aeropropulsion components that incorporate "smart" composite laminates with embedded piezoelectric actuators and sensors is expected to ameliorate critical problems in advanced aircraft engines related to vibration, noise emission, and thermal stability. To facilitate the analytical needs of this effort, the NASA Lewis Research Center has developed mechanics and multidisciplinary computational models to analyze the complicated electromechanical behavior of realistic smart-structure configurations operating in combined mechanical, thermal, and acoustic environments. The models have been developed to accommodate the particular geometries, environments, and technical challenges encountered in advanced aircraft engines, yet their unique analytical features are expected to facilitate application of this new technology in a variety of commercial applications.

  3. Review and assessment of the HOST turbine heat transfer program

    NASA Technical Reports Server (NTRS)

    Gladden, Herbert J.

    1988-01-01

    The objectives of the HOST Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena occurring in high-performance gas turbine engines and to assess and improve the analytical methods used to predict the fluid dynamics and heat transfer phenomena. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. Therefore, a building-block approach was utilized, with research ranging from the study of fundamental phenomena and analytical modeling to experiments in simulated real-engine environments. Experimental research accounted for 75 percent of the project, and analytical efforts accounted for approximately 25 percent. Extensive experimental datasets were created depicting the three-dimensional flow field, high free-stream turbulence, boundary-layer transition, blade tip region heat transfer, film cooling effects in a simulated engine environment, rough-wall cooling enhancement in a rotating passage, and rotor-stator interaction effects. In addition, analytical modeling of these phenomena was initiated using boundary-layer assumptions as well as Navier-Stokes solutions.

  4. Microcirculation and the physiome projects.

    PubMed

    Bassingthwaighte, James B

    2008-11-01

    The Physiome projects comprise a loosely knit worldwide effort to define the Physiome through databases and theoretical models, with the goal of better understanding the integrative functions of cells, organs, and organisms. The projects involve developing and archiving models, providing centralized databases, and linking experimental information and models from many laboratories into self-consistent frameworks. Increasingly accurate and complete models that embody quantitative biological hypotheses, adhere to high standards, and are publicly available and reproducible, together with refined and curated data, will enable biological scientists to advance integrative, analytical, and predictive approaches to the study of medicine and physiology. This review discusses the rationale and history of the Physiome projects, the role of theoretical models in the development of the Physiome, and the current status of efforts in this area addressing the microcirculation.

  5. Inclusive Education: Is This Horse a Trojan?

    ERIC Educational Resources Information Center

    Slee, Roger

    2006-01-01

    In Canada and elsewhere, governments are expending considerable effort in the production of inclusive education policy texts, resource allocation models, and programs. The author notes that despite the analytic power and the political intent of inclusive education as a counterpoint to special education, its appropriation is imminent if not…

  7. Modeling microelectrode biosensors: free-flow calibration can substantially underestimate tissue concentrations

    PubMed Central

    Wall, Mark J.

    2016-01-01

    Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. 
We develop mathematical models to better quantify the discrepancy between the calibration and tissue environment and experimentally verify our key predictions. PMID:27927788

  8. Modeling microelectrode biosensors: free-flow calibration can substantially underestimate tissue concentrations.

    PubMed

    Newton, Adam J H; Wall, Mark J; Richardson, Magnus J E

    2017-03-01

    Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. 
We develop mathematical models to better quantify the discrepancy between the calibration and tissue environment and experimentally verify our key predictions. Copyright © 2017 the American Physiological Society.
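
The steady-state argument above can be illustrated with a back-of-the-envelope diffusion balance: a linear concentration profile between the bulk and a consuming surface. All parameter values, including the surface-consumption rate k, are invented for illustration and are not taken from the papers.

```python
# Hedged sketch: steady-state 1D diffusion toward a consuming biosensor surface.
# A density gradient forms because the enzyme layer destroys analyte, so the
# surface concentration sits below the bulk value used in free-flow calibration.
D = 7.6e-10      # free-solution diffusion coefficient, m^2/s (illustrative)
tort = 1.6       # tissue tortuosity; effective diffusivity is D / tort**2 (assumed)
D_eff = D / tort**2
L = 100e-6       # distance from biosensor surface to "bulk" analyte, m (assumed)
k = 1e-5         # effective surface consumption rate of the enzyme layer, m/s (assumed)
C_bulk = 1.0     # bulk analyte concentration (normalised)

# Steady state of dC/dt = D_eff * d2C/dx2 with the flux balance at x = 0:
#   D_eff * dC/dx |_{x=0} = k * C(0)
# gives a linear profile whose surface value is
C_surface = C_bulk / (1.0 + k * L / D_eff)
print(f"surface/bulk concentration ratio: {C_surface:.3f}")
```

With these (assumed) numbers the biosensor sees well under half the bulk concentration, which is the sense in which naive free-flow calibration underestimates tissue levels.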

  9. Seamless Digital Environment – Plan for Data Analytics Use Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna Helene; Bly, Aaron Douglas

    The U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program initiated research into what is needed to provide a roadmap or model for nuclear power plants to reference when building an architecture that can support the growing data supply and demand flowing through their networks. The Digital Architecture project report Digital Architecture Planning Model (Oxstrand et al., 2016) discusses considerations for building an architecture to support the increasing needs and demands of data throughout the plant. Once the plant is able to support the data demands, it still needs to be able to provide the data in an easy, quick, and reliable manner. A common approach is to create a "one stop shop" application that users can visit to get all the data they need, which in turn leads to the need for a Seamless Digital Environment (SDE) to integrate all the "siloed" data. An SDE is the desired perception presented to users by gathering data from any data source (e.g., legacy applications and work management systems) without effort by the user. The goal for FY16 was to complete a feasibility study of data mining and analytics employing information from computer-based procedure-enabled technologies for use in developing improved business analytics. The research team collaborated with multiple organizations to identify use cases or scenarios that could be beneficial to investigate in a feasibility study. Many interesting potential use cases were identified throughout the FY16 activity. Unfortunately, due to factors outside the research team's control, none of the studies was initiated this year. However, the insights gained and the relationships built with both PVNGS and NextAxiom will be valuable when moving forward with future research.
    During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting, it was identified that it would be very beneficial to the industry to support a research effort focused on data analytics. It was suggested that the effort develop and evaluate use cases for data mining and analytics employing information from plant sensors and databases for use in developing improved business analytics.

  10. Life prediction and constitutive models for engine hot section anisotropic materials program

    NASA Technical Reports Server (NTRS)

    Nissley, D. M.; Meyer, T. G.

    1992-01-01

    This report presents the results from a 35 month period of a program designed to develop generic constitutive and life prediction approaches and models for nickel-based single crystal gas turbine airfoils. The program is composed of a base program and an optional program. The base program addresses the high temperature coated single crystal regime above the airfoil root platform. The optional program investigates the low temperature uncoated single crystal regime below the airfoil root platform including the notched conditions of the airfoil attachment. Both base and option programs involve experimental and analytical efforts. Results from uniaxial constitutive and fatigue life experiments of coated and uncoated PWA 1480 single crystal material form the basis for the analytical modeling effort. Four single crystal primary orientations were used in the experiments: (001), (011), (111), and (213). Specific secondary orientations were also selected for the notched experiments in the optional program. Constitutive models for an overlay coating and PWA 1480 single crystal material were developed based on isothermal hysteresis loop data and verified using thermomechanical (TMF) hysteresis loop data. A fatigue life approach and life models were selected for TMF crack initiation of coated PWA 1480. An initial life model used to correlate smooth and notched fatigue data obtained in the option program shows promise. Computer software incorporating the overlay coating and PWA 1480 constitutive models was developed.

  11. Using i2b2 to Bootstrap Rural Health Analytics and Learning Networks

    PubMed Central

    Harris, Daniel R.; Baus, Adam D.; Harper, Tamela J.; Jarrett, Traci D.; Pollard, Cecil R.; Talbert, Jeffery C.

    2017-01-01

    We demonstrate that the open-source i2b2 (Informatics for Integrating Biology and the Bedside) data model can be used to bootstrap rural health analytics and learning networks. These networks promote communication and research initiatives by providing the infrastructure necessary for sharing data and insights across a group of healthcare and research partners. Data integration remains a crucial challenge in connecting rural healthcare sites with a common data sharing and learning network due to the lack of interoperability and standards within electronic health records. The i2b2 data model acts as a point of convergence for disparate data from multiple healthcare sites. A consistent and natural data model for healthcare data is essential for overcoming integration issues, but challenges such as those caused by weak data standardization must still be addressed. We describe our experience in the context of building the West Virginia/Kentucky Health Analytics and Learning Network, a collaborative, multi-state effort connecting rural healthcare sites. PMID:28261006

  12. Using i2b2 to Bootstrap Rural Health Analytics and Learning Networks.

    PubMed

    Harris, Daniel R; Baus, Adam D; Harper, Tamela J; Jarrett, Traci D; Pollard, Cecil R; Talbert, Jeffery C

    2016-08-01

    We demonstrate that the open-source i2b2 (Informatics for Integrating Biology and the Bedside) data model can be used to bootstrap rural health analytics and learning networks. These networks promote communication and research initiatives by providing the infrastructure necessary for sharing data and insights across a group of healthcare and research partners. Data integration remains a crucial challenge in connecting rural healthcare sites with a common data sharing and learning network due to the lack of interoperability and standards within electronic health records. The i2b2 data model acts as a point of convergence for disparate data from multiple healthcare sites. A consistent and natural data model for healthcare data is essential for overcoming integration issues, but challenges such as those caused by weak data standardization must still be addressed. We describe our experience in the context of building the West Virginia/Kentucky Health Analytics and Learning Network, a collaborative, multi-state effort connecting rural healthcare sites.

  13. Evaluation of simplified stream-aquifer depletion models for water rights administration

    USGS Publications Warehouse

    Sophocleous, Marios; Koussis, Antonis; Martin, J.L.; Perkins, S.P.

    1995-01-01

    We assess the predictive accuracy of Glover's (1974) stream-aquifer analytical solutions, which are commonly used in administering water rights, and evaluate the impact of the assumed idealizations on administrative and management decisions. To achieve these objectives, we evaluate the predictive capabilities of the Glover stream-aquifer depletion model against the MODFLOW numerical standard, which, unlike the analytical model, can handle increasing hydrogeologic complexity. We rank-order and quantify the relative importance of the various assumptions on which the analytical model is based, the three most important being: (1) streambed clogging as quantified by streambed-aquifer hydraulic conductivity contrast; (2) degree of stream partial penetration; and (3) aquifer heterogeneity. These three factors relate directly to the multidimensional nature of the aquifer flow conditions. From these considerations, future efforts to reduce the uncertainty in stream depletion-related administrative decisions should primarily address these three factors in characterizing the stream-aquifer process. We also investigate the impact of progressively coarser model grid size on numerically estimating stream leakage and conclude that grid size effects are relatively minor. Therefore, when modeling is required, coarser model grids could be used thus minimizing the input data requirements.
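
Glover's idealized solution referenced above has a compact closed form for a fully penetrating stream in a homogeneous aquifer; a sketch with assumed parameter values (not taken from the paper):

```python
import math

def glover_depletion_fraction(d, S, T, t):
    """Glover-Balmer stream-depletion fraction q/Q for a well pumping at
    distance d (m) from a fully penetrating stream.
    S: aquifer storativity (-), T: transmissivity (m^2/day), t: time (days).
    This is the idealized analytical solution; real cases require the
    corrections for streambed clogging, partial penetration, and
    heterogeneity discussed in the abstract."""
    if t <= 0:
        return 0.0
    return math.erfc(math.sqrt(S * d**2 / (4.0 * T * t)))

# Illustrative (assumed) parameters: well 300 m from the stream.
frac = glover_depletion_fraction(d=300.0, S=0.2, T=500.0, t=100.0)
print(f"fraction of pumping supplied by stream depletion: {frac:.2f}")
```

The fraction approaches 1 as pumping time grows, which is why administrative decisions hinge on the timescale as well as the aquifer parameters.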

  14. Climate Analytics as a Service

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Duffy, Daniel Q.; McInerney, Mark A.; Webster, W. Phillip; Lee, Tsengdar J.

    2014-01-01

    Climate science is a big data domain that is experiencing unprecedented growth. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS). CAaaS combines high-performance computing and data-proximal analytics with scalable data management, cloud computing virtualization, the notion of adaptive analytics, and a domain-harmonized API to improve the accessibility and usability of large collections of climate data. MERRA Analytic Services (MERRA/AS) provides an example of CAaaS. MERRA/AS enables MapReduce analytics over NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) data collection. The MERRA reanalysis integrates observational data with numerical models to produce a global temporally and spatially consistent synthesis of key climate variables. The effectiveness of MERRA/AS has been demonstrated in several applications. In our experience, CAaaS is providing the agility required to meet our customers' increasing and changing data management and data analysis needs.
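
The MapReduce pattern that MERRA/AS applies at scale can be sketched in miniature: mappers emit per-block partial sums for a variable, and a reducer combines them into a global statistic. The data blocks, variable name, and helper functions below are invented; MERRA/AS's actual API is not shown.

```python
from collections import defaultdict
from functools import reduce

# Toy "data blocks" standing in for chunks of a gridded climate variable.
blocks = [
    {"2m_temp": [250.1, 251.3, 249.8]},   # block 1 (values in kelvin, invented)
    {"2m_temp": [252.0, 250.5]},          # block 2
]

def map_block(block):
    """Mapper: emit (variable, (partial_sum, count)) pairs for one block."""
    return [(var, (sum(vals), len(vals))) for var, vals in block.items()]

def reduce_partials(a, b):
    """Reducer: combine two (sum, count) partials."""
    return (a[0] + b[0], a[1] + b[1])

partials = defaultdict(list)
for block in blocks:
    for key, partial in map_block(block):
        partials[key].append(partial)

means = {}
for key, ps in partials.items():
    total, count = reduce(reduce_partials, ps)
    means[key] = total / count
print(means)  # per-variable mean across all blocks
```

Because the partials are associative, the reduce step can run near the data on many nodes, which is the "data-proximal analytics" idea the abstract describes.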

  15. Aeromechanical stability of a hingeless rotor in hover and forward flight: Analysis and wind tunnel tests

    NASA Technical Reports Server (NTRS)

    Yeager, W. T., Jr.; Hamouda, M. N. H.; Mantay, W. R.

    1983-01-01

    A research effort of analysis and testing was conducted to investigate the ground resonance phenomenon of a soft in-plane hingeless rotor. Experimental data were obtained using a 9 ft. (2.74 m) diameter model rotor in hover and forward flight. Eight model rotor configurations were investigated. Configuration parameters included pitch-flap coupling, blade sweep and droop, and precone of the blade feathering axis. An analysis based on a comprehensive analytical model of rotorcraft aerodynamics and dynamics was used. The moving-block technique was used to experimentally determine the regressing lead-lag mode damping. Good agreement was obtained between the analysis and test. Both analysis and experiment indicated ground resonance instability in hover. An outline of the analysis, a description of the experimental model and procedures, and a comparison of the analytical and experimental data are presented.

  16. An Update of the Analytical Groundwater Modeling to Assess Water Resource Impacts at the Afton Solar Energy Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, John J.; Greer, Christopher B.; Carr, Adrianne E.

    2014-10-01

    The purpose of this study is to update a one-dimensional analytical groundwater flow model to examine the influence of potential groundwater withdrawal in support of utility-scale solar energy development at the Afton Solar Energy Zone (SEZ) as a part of the Bureau of Land Management's (BLM's) Solar Energy Program. This report describes the modeling for assessing the drawdown associated with SEZ groundwater pumping rates for a 20-year duration considering three categories of water demand (high, medium, and low) based on technology-specific considerations. The 2012 modeling effort published in the Final Programmatic Environmental Impact Statement for Solar Energy Development in Six Southwestern States (Solar PEIS; BLM and DOE 2012) has been refined based on additional information described below in an expanded hydrogeologic discussion.
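
The report's specific one-dimensional model is not reproduced here; as a generic illustration of the kind of analytical drawdown screening such studies perform, the classic Theis solution can be sketched with assumed parameters (this is not the Afton SEZ model itself).

```python
import math

def well_function(u, terms=30):
    """Theis well function W(u) via its series expansion (accurate for small u):
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    s = -gamma - math.log(u)
    sign = 1.0
    for n in range(1, terms + 1):
        s += sign * u**n / (n * math.factorial(n))
        sign = -sign
    return s

def theis_drawdown(Q, T, S, r, t):
    """Drawdown (m) at radius r (m) after pumping time t (days).
    Q: pumping rate (m^3/day), T: transmissivity (m^2/day), S: storativity (-)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# Illustrative 20-year pumping scenario (all parameters assumed, not from the report):
s = theis_drawdown(Q=1000.0, T=250.0, S=1e-3, r=500.0, t=20 * 365.0)
print(f"predicted drawdown: {s:.2f} m")
```

Screening models of this type trade hydrogeologic detail for speed, which is why the report supplements them with an expanded hydrogeologic discussion.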

  17. Combustion Fundamentals Research

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Increased emphasis is placed on fundamental and generic research at Lewis Research Center with less systems development efforts. This is especially true in combustion research, where the study of combustion fundamentals has grown significantly in order to better address the perceived long term technical needs of the aerospace industry. The main thrusts for this combustion fundamentals program area are as follows: analytical models of combustion processes, model verification experiments, fundamental combustion experiments, and advanced numeric techniques.

  18. MSFC Advanced Concepts Office and the Iterative Launch Vehicle Concept Method

    NASA Technical Reports Server (NTRS)

    Creech, Dennis

    2011-01-01

    This slide presentation reviews the work of the Advanced Concepts Office (ACO) at Marshall Space Flight Center (MSFC), with particular emphasis on the method used to model launch vehicles using INTegrated ROcket Sizing (INTROS), a modeling system that assists in establishing the launch concept design and stage sizing and facilitates the integration of exterior analytic efforts, vehicle architecture studies, and technology and system trades and parameter sensitivities.

  19. Assessment of eutrophication in estuaries: Pressure-state-response and source apportionment

    Treesearch

    David Whitall; Suzanne Bricker

    2006-01-01

    The National Estuarine Eutrophication Assessment (NEEA) Update Program is a management oriented program designed to improve monitoring and assessment efforts through the development of type specific classification of estuaries that will allow improved assessment methods and development of analytical and research models and tools for managers which will help guide and...

  20. The potential influence of rain on airfoil performance

    NASA Technical Reports Server (NTRS)

    Dunham, R. Earl, Jr.

    1987-01-01

    The potential influence of heavy rain on airfoil performance is discussed. Experimental methods for evaluating rain effects are reviewed. Important scaling considerations for extrapolating model data are presented. It is shown that considerable additional effort, both analytical and experimental, is necessary to understand the degree of hazard associated with flight operations in rain.

  1. Cultivating Institutional Capacities for Learning Analytics

    ERIC Educational Resources Information Center

    Lonn, Steven; McKay, Timothy A.; Teasley, Stephanie D.

    2017-01-01

    This chapter details the process the University of Michigan developed to build institutional capacity for learning analytics. A symposium series, faculty task force, fellows program, research grants, and other initiatives are discussed, with lessons learned for future efforts and how other institutions might adapt such efforts to spur cultural…

  2. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
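    The UCODE-style regression described above can be sketched in a few lines: fit a parsimonious analytic model to head observations by nonlinear least squares and report linearized 95% confidence intervals from the Jacobian. The Thiem-type model, parameter values, and names below are illustrative assumptions, not the actual GFLOW/UCODE mine model.

    ```python
    # Hedged sketch: nonlinear least-squares calibration of a toy Thiem-type
    # drawdown model with linearized 95% confidence intervals, illustrating
    # the kind of regression UCODE performs. All values are illustrative.
    import numpy as np
    from scipy.optimize import least_squares

    Q, R = 500.0, 1000.0  # pumping rate (m^3/d), radius of influence (m), assumed

    def head(params, r):
        T, h0 = params  # transmissivity (m^2/d), far-field head (m)
        return h0 - Q / (2.0 * np.pi * T) * np.log(R / r)

    r_obs = np.array([10.0, 30.0, 100.0, 300.0, 600.0, 900.0])
    rng = np.random.default_rng(0)
    h_obs = head((250.0, 100.0), r_obs) + rng.normal(0.0, 0.01, r_obs.size)

    fit = least_squares(lambda p: head(p, r_obs) - h_obs, x0=(100.0, 90.0))
    T_fit, h0_fit = fit.x

    # Linearized 95% intervals from the Jacobian, as regression codes report
    dof = r_obs.size - fit.x.size
    s2 = 2.0 * fit.cost / dof                   # residual variance estimate
    cov = s2 * np.linalg.inv(fit.jac.T @ fit.jac)
    ci95 = 1.96 * np.sqrt(np.diag(cov))
    print(T_fit, h0_fit, ci95)
    ```

    With only two parameters, the fit is well conditioned; in the same spirit as the parsimonious GFLOW model, adding parameters would widen the reported intervals.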

  3. Improving a regional model using reduced complexity and parameter estimation.

    PubMed

    Kelson, Victor A; Hunt, Randall J; Haitjema, Henk M

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. 
Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.

  4. Mechanical Characterization and Micromechanical Modeling of Woven Carbon/Copper Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Pindera, Marek-Jerzy; Ellis, David L.; Miner, Robert V.

    1997-01-01

    The present investigation examines the in-plane mechanical behavior of a particular woven metal matrix composite (MMC): 8-harness (8H) satin carbon/copper (C/Cu). This is accomplished via mechanical testing as well as micromechanical modeling. While the literature is replete with experimental and modeling efforts for woven and braided polymer matrix composites, little work has been done on woven and braided MMCs. Thus, the development and understanding of woven MMCs is at an early stage. 8H satin C/Cu owes its existence to the high thermal conductivity of copper and the low density and thermal expansion of carbon fibers. It is a candidate material for high heat flux applications, such as space power radiator panels. The experimental portion of this investigation consists of monotonic and cyclic tension, compression, and Iosipescu shear tests, as well as combined tension-compression tests. Tests were performed on composite specimens with three copper matrix alloy types: pure Cu, Cu-0.5 weight percent Ti (Cu-Ti), and Cu-0.7 weight percent Cr (Cu-Cr). The small alloying additions are present to promote fiber/matrix interfacial bonding. The analytical modeling effort utilizes an approach in which a local micromechanical model is embedded in a global micromechanical model. This approach differs from previously developed analytical models for woven composites in that a true repeating unit cell is analyzed. However, unlike finite element modeling of woven composites, the geometry is sufficiently idealized to allow efficient geometric discretization and efficient execution.

  5. Modeling Momentum Transfer from Kinetic Impacts: Implications for Redirecting Asteroids

    DOE PAGES

    Stickle, A. M.; Atchison, J. A.; Barnouin, O. S.; ...

    2015-05-19

    Kinetic impactors are one way to deflect a potentially hazardous object headed for Earth. The Asteroid Impact and Deflection Assessment (AIDA) mission is designed to test the effectiveness of this approach and is a joint effort between NASA and ESA. The NASA-led portion is the Double Asteroid Redirect Test (DART) and is composed of a ~300-kg spacecraft designed to impact the moon of the binary system 65803 Didymos. The deflection of the moon will be measured by the ESA-led Asteroid Impact Mission (AIM) (which will characterize the moon) and from ground-based observations. Because the material properties and internal structure of the target are poorly constrained, however, analytical models and numerical simulations must be used to understand the range of potential outcomes. Here, we describe a modeling effort combining analytical models and CTH simulations to determine possible outcomes of the DART impact. We examine a wide parameter space and provide predictions for crater size, ejecta mass, and momentum transfer following the impact into the moon of the Didymos system. For impacts into “realistic” asteroid types, these models produce craters with diameters on the order of 10 m, an imparted Δv of 0.5–2 mm/s, and a momentum enhancement of 1.07 to 5 for targets ranging from a highly porous aggregate to a fully dense rock.
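    The momentum bookkeeping behind such predictions follows the standard kinetic-impactor relation Δv = β·m·U/M, where β is the momentum enhancement factor. A minimal sketch, in which the impact speed and target mass are assumed purely for illustration (only the ~300-kg spacecraft mass comes from the abstract):

    ```python
    # Hedged sketch of the standard momentum-enhancement relation used in
    # kinetic impactor studies: dv = beta * m * U / M. The impact speed and
    # target mass below are illustrative assumptions, not DART/AIDA values.
    def deflection_dv(beta, m_sc, u_impact, m_target):
        """Velocity change imparted to the target (m/s)."""
        return beta * m_sc * u_impact / m_target

    m_sc = 300.0     # spacecraft mass, kg (from the abstract)
    u = 6000.0       # assumed impact speed, m/s (hypothetical)
    m_moon = 5.0e9   # assumed target mass, kg (hypothetical)

    for beta in (1.07, 5.0):
        dv = deflection_dv(beta, m_sc, u, m_moon)
        print(f"beta={beta}: dv = {dv * 1e3:.2f} mm/s")
    ```

    For these assumed values, the β range of roughly 1 to 5 spans Δv on the order of a millimeter per second, consistent with the 0.5–2 mm/s scale quoted in the abstract.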

  6. The Preventive Control of a Dengue Disease Using Pontryagin Minimum Principle

    NASA Astrophysics Data System (ADS)

    Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi

    2017-06-01

    Behaviour analysis for the host-vector model of dengue disease without control is based on the value of the basic reproduction number obtained using next generation matrices. Furthermore, the model is developed to include a preventive control that minimizes contact between host and vector. The purpose is to obtain an optimal preventive strategy with minimal cost. The Pontryagin Minimum Principle is used to find the optimal control analytically. The derived optimality model is then solved numerically to investigate the control effort needed to reduce the infected class.
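    The basic reproduction number obtained from next generation matrices, as the abstract describes, is the spectral radius of F V⁻¹ evaluated at the disease-free equilibrium. A minimal sketch for a generic host-vector structure, with all parameter values and the model structure assumed for illustration (not taken from the paper):

    ```python
    # Hedged sketch: basic reproduction number as the spectral radius of the
    # next generation matrix F V^{-1} for a generic host-vector model.
    # Parameter values and model structure are illustrative assumptions.
    import numpy as np

    beta_hv = 0.30   # transmission rate, vector -> host (assumed)
    beta_vh = 0.25   # transmission rate, host -> vector (assumed)
    gamma_h = 0.10   # host recovery rate (assumed)
    mu_v    = 0.07   # vector mortality rate (assumed)

    # New-infection terms (F) and transition terms (V), linearized at the
    # disease-free equilibrium
    F = np.array([[0.0,     beta_hv],
                  [beta_vh, 0.0    ]])
    V = np.array([[gamma_h, 0.0 ],
                  [0.0,     mu_v]])

    K = F @ np.linalg.inv(V)             # next generation matrix
    R0 = max(abs(np.linalg.eigvals(K)))  # spectral radius
    print(R0)  # equals sqrt(beta_hv*beta_vh/(gamma_h*mu_v)) for this structure
    ```

    The spectral radius exceeding 1 signals an epidemic in the uncontrolled model, which motivates the preventive control lowering the transmission terms in F.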

  7. Maneuver Planning for Conjunction Risk Mitigation with Ground-track Control Requirements

    NASA Technical Reports Server (NTRS)

    McKinley, David

    2008-01-01

    The planning of conjunction Risk Mitigation Maneuvers (RMM) in the presence of ground-track control requirements is analyzed. Past RMM planning efforts on the Aqua, Aura, and Terra spacecraft have demonstrated that only small maneuvers are available when ground-track control requirements are maintained. Assuming small maneuvers, analytical expressions for the effect of a given maneuver on conjunction geometry are derived. The analytical expressions are used to generate a large trade space for initial RMM design. This trade space represents a significant improvement in initial maneuver planning over existing methods that employ high fidelity maneuver models and propagation.

  8. Resources for National Water Savings for Outdoor Water Use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melody, Moya; Stratton, Hannah; Williams, Alison

    2014-05-01

    In support of efforts by the U.S. Environmental Protection Agency's (EPA's) WaterSense program to develop a spreadsheet model for calculating the national water and financial savings attributable to WaterSense certification and labeling of weather-based irrigation controllers, Lawrence Berkeley National Laboratory reviewed reports, technical data, and other information related to outdoor water use and irrigation controllers. In this document we categorize and describe the reviewed references, highlighting pertinent data. We relied on these references when developing model parameters and calculating controller savings. We grouped resources into three major categories: landscapes (section 1); irrigation devices (section 2); and analytical and modeling efforts (section 3). Each category is subdivided further as described in its section. References are listed in order of date of publication, most recent first.

  9. Africa Knowledge, Data Source, and Analytic Effort (KDAE) Exploration

    DTIC Science & Technology

    2012-08-20

    The World Bank’s web site contains a substantial amount of data, organized by 18 broad topic areas like Agriculture and Rural Development, Education... The report excerpt includes R code fragments, e.g. names(wb.indicators) <- c("Agriculture & Rural Development", "Aid Effectiveness", "Climate Change", "Economic Policy & External Debt", "Education", "Energy..."), and a model-building section that iterates regression models in order to pick the best ones using library(MASS).

  10. Automation effects in a multiloop manual control system

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Mcnally, B. D.

    1986-01-01

    An experimental and analytical study was undertaken to investigate human interaction with a simple multiloop manual control system in which the human's activity was systematically varied by changing the level of automation. The system simulated was the longitudinal dynamics of a hovering helicopter. The automation systems stabilized vehicle responses from attitude to velocity to position, and also provided for display automation in the form of a flight director. The control-loop structure resulting from the task definition can be considered a simple stereotype of a hierarchical control system. The experimental study was complemented by an analytical modeling effort which utilized simple crossover models of the human operator. It was shown that such models can be extended to the description of multiloop tasks involving preview and precognitive human operator behavior. The existence of time-optimal manual control behavior was established for these tasks, and the role which internal models may play in establishing human-machine performance was discussed.
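    The simple crossover models mentioned here describe the combined human-plus-plant open loop near crossover as Y(jω) = ωc·e^(−jωτ)/(jω). A minimal sketch of that frequency response, with the crossover frequency and effective delay chosen for illustration (not values from the study):

    ```python
    # Hedged sketch of McRuer's crossover model for the combined open loop,
    # Y(jw) = wc * exp(-j*w*tau) / (jw). Values of wc and tau are illustrative.
    import numpy as np

    wc, tau = 4.0, 0.2              # crossover frequency (rad/s), delay (s), assumed
    w = np.array([1.0, 4.0, 10.0])  # frequencies to evaluate (rad/s)
    Y = wc * np.exp(-1j * w * tau) / (1j * w)

    mag_db = 20.0 * np.log10(np.abs(Y))   # -20 dB/decade integrator slope
    phase_deg = np.degrees(np.angle(Y))   # -90 deg plus the delay contribution
    print(mag_db)    # crosses 0 dB at w = wc by construction
    print(phase_deg)
    ```

    The model's appeal, reflected in the abstract, is that a single integrator-like open loop with an effective delay captures the human's adaptation across a wide range of controlled elements.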

  11. Life prediction and constitutive models for engine hot section anisotropic materials program

    NASA Technical Reports Server (NTRS)

    Nissley, D. M.; Meyer, T. G.; Walker, K. P.

    1992-01-01

    This report presents a summary of results from a 7-year program designed to develop generic constitutive and life prediction approaches and models for nickel-based single crystal gas turbine airfoils. The program was composed of a base program and an optional program. The base program addressed the high temperature coated single crystal regime above the airfoil root platform. The optional program investigated the low temperature uncoated single crystal regime below the airfoil root platform, including the notched conditions of the airfoil attachment. Both base and option programs involved experimental and analytical efforts. Results from uniaxial constitutive and fatigue life experiments of coated and uncoated PWA 1480 single crystal material formed the basis for the analytical modeling effort. Four single crystal primary orientations were used in the experiments: <001>, <011>, <111>, and <213>. Specific secondary orientations were also selected for the notched experiments in the optional program. Constitutive models for an overlay coating and PWA 1480 single crystal materials were developed based on isothermal hysteresis loop data and verified using thermomechanical (TMF) hysteresis loop data. A fatigue life approach and life models were developed for TMF crack initiation of coated PWA 1480. A life model was developed for smooth and notched fatigue in the option program. Finally, computer software incorporating the overlay coating and PWA 1480 constitutive and life models was developed.

  12. Cryogenic Fluid Storage Technology Development: Recent and Planned Efforts at NASA

    NASA Technical Reports Server (NTRS)

    Moran, Matthew E.

    2009-01-01

    Recent technology development work conducted at NASA in the area of Cryogenic Fluid Management (CFM) storage is highlighted, including summary results, key impacts, and ongoing efforts. Thermodynamic vent system (TVS) ground test results are shown for hydrogen, methane, and oxygen. Joule-Thomson (J-T) device tests related to clogging in hydrogen are summarized, along with the absence of clogging in oxygen and methane tests. Confirmation of analytical relations and bonding techniques for broad area cooling (BAC) concepts based on tube-to-tank tests is presented. Results of two-phase lumped-parameter computational fluid dynamic (CFD) models are highlighted, including validation of the model with hydrogen self-pressurization test data. These models were used to simulate Altair representative methane and oxygen tanks subjected to 210 days of lunar surface storage. Engineering analysis tools being developed to support system-level trades and vehicle propulsion system designs are also cited. Finally, prioritized technology development risks identified for Constellation cryogenic propulsion systems are presented, and future efforts to address those risks are discussed.

  13. A Tale of Two Teachers: An Analytical Look at the Co-Teaching Theory Using a Case Study Model

    ERIC Educational Resources Information Center

    Grant, Marquis

    2014-01-01

    Co-teaching involves a highly collaborative, mutually accountable relationship between a regular education and special education teacher in an inclusive environment. Effective co-teaching involves both teachers working together in the regular classroom setting in an effort to make learning accessible for all students regardless of ability or…

  14. FAST COGNITIVE AND TASK ORIENTED, ITERATIVE DATA DISPLAY (FACTOID)

    DTIC Science & Technology

    2017-06-01

    ...approaches. As a result, the following assumptions guided our efforts in developing modeling and descriptive metrics for evaluation purposes... Application Evaluation. Our analytic workflow for evaluation is to first provide descriptive statistics about applications across metrics (performance...) ...distributions for evaluation purposes because the goal of evaluation is accurate description, not inference (e.g., prediction). Outliers depicted...

  15. Nondestructive assessment of timber bridges using a vibration-based method

    Treesearch

    Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw

    2005-01-01

    This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...

  16. Lateral Stability Simulation of a Rail Truck on Roller Rig

    NASA Astrophysics Data System (ADS)

    Dukkipati, Rao V.

    The development of experimental facilities for rail vehicle testing is being complemented by analytic studies. The purpose of this effort has been to gain insight into the dynamics of rail vehicles in order to guide development of the Roller Rigs and to establish an analytic framework for the design and interpretation of tests to be conducted on Roller Rigs. The work described here represents initial efforts towards meeting these objectives. Generic linear models were developed of a freight car (with a characteristic North American three-piece truck) on tangent track. The models were developed using the generalized multibody dynamics software MEDYNA. Predictions were made of the theoretical linear model hunting (lateral stability) characteristics of the freight car, i.e., the critical speeds and frequencies, for five different configurations: (a) freight car on track, (b) the freight car's front truck on the roller stand and its rear truck on track, (c) freight car on the roller rig, (d) a single truck on track, and (e) single truck on the roller stand. These were compared with the Association of American Railroads' field test data for an 80-ton hopper car equipped with A-3 ride control trucks. Agreement was reached among all the analytical models, with all models indicating a range of hunting speeds of 2% from the highest to lowest. The largest discrepancy, approximately 6%, was indicated between the models and the field test data. 
Parametric study results using the linear model of the freight truck on the roller rig show that (a) increasing the roller radius increases the critical speed, (b) increasing the wheel initial cone angle decreases the hunting speed, (c) increasing the roller cant increases the hunting speed, (d) decrowning of the wheelset on the rollers does not affect the hunting speed but induces destabilizing longitudinal horizontal forces at the contact, and (e) lozenging of the wheelset on the rollers induces a yaw moment, and the hunting speed decreases with increasing wheelset yaw angle.

  17. Enhanced Adaptive Management: Integrating Decision Analysis, Scenario Analysis and Environmental Modeling for the Everglades

    PubMed Central

    Convertino, Matteo; Foran, Christy M.; Keisler, Jeffrey M.; Scarlett, Lynn; LoSchiavo, Andy; Kiker, Gregory A.; Linkov, Igor

    2013-01-01

    We propose to enhance existing adaptive management efforts with a decision-analytical approach that can guide the initial selection of robust restoration alternative plans and inform the need to adjust these alternatives in the course of action based on continuously acquired monitoring information and changing stakeholder values. We demonstrate an application of enhanced adaptive management for a wetland restoration case study inspired by the Florida Everglades restoration effort. We find that alternatives designed to reconstruct the pre-drainage flow may have a positive ecological impact, but may also have high operational costs and only marginally contribute to meeting other objectives such as reduction of flooding. Enhanced adaptive management allows managers to guide investment in ecosystem modeling and monitoring efforts through scenario and value of information analyses to support optimal restoration strategies in the face of uncertain and changing information. PMID:24113217

  18. Modeling and Analysis of Power Processing Systems (MAPPS), initial phase 2

    NASA Technical Reports Server (NTRS)

    Yu, Y.; Lee, F. C.; Wangenheim, H.; Warren, D.

    1977-01-01

    The overall objective of the program is to provide the engineering tools to reduce the analysis, design, and development effort, and thus the cost, in achieving the required performance for switching regulators and dc-dc converter systems. The program was both tutorial and application-oriented. Various analytical methods were described in detail and supplemented with examples, and those with standardization appeal were reduced into computer-based subprograms. Major program efforts included those concerning small- and large-signal control-dependent performance analysis and simulation, control circuit design, power circuit design and optimization, system configuration study, and system performance simulation. Techniques including discrete time domain, conventional frequency domain, Lagrange multiplier, nonlinear programming, and control design synthesis were employed in these efforts. To enhance interactive conversation between the modeling and analysis subprograms and the user, a working prototype of the Data Management Program was also developed to facilitate expansion as future subprogram capabilities increase.

  19. Seamless Digital Environment – Data Analytics Use Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna

    Multiple research efforts in the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program study the need for, and design of, an underlying architecture to support the increased amount and use of data in the nuclear power plant. More specifically, the LWRS research efforts Digital Architecture for an Automated Plant, Automated Work Packages, Computer-Based Procedures for Field Workers, and Online Monitoring have all identified the need for a digital architecture and, more importantly, the need for a Seamless Digital Environment (SDE). An SDE provides a means to access multiple applications, gather the data points needed, conduct the analysis requested, and present the result to the user with minimal or no effort by the user. During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting, the nuclear utilities identified the need for research focused on data analytics. The effort was to develop and evaluate use cases for data mining and analytics employing information from plant sensors and databases for use in developing improved business analytics. The goal of the study is to research potential approaches to building an analytics solution for equipment reliability, on a small scale, focusing on either a single piece of equipment or a single system. The analytics solution will likely consist of a data integration layer, a predictive and machine learning layer, and a user interface layer that displays the output of the analysis in a straightforward, easy-to-consume manner. This report describes the use case study initiated by NITSL and conducted in a collaboration between Idaho National Laboratory, Arizona Public Service – Palo Verde Nuclear Generating Station, and NextAxiom Inc.

  20. Thermal Effects Modeling Developed for Smart Structures

    NASA Technical Reports Server (NTRS)

    Lee, Ho-Jun

    1998-01-01

    Applying smart materials in aeropropulsion systems may improve the performance of aircraft engines through a variety of vibration, noise, and shape-control applications. To facilitate the experimental characterization of these smart structures, researchers have been focusing on developing analytical models to account for the coupled mechanical, electrical, and thermal response of these materials. One focus of current research efforts has been directed toward incorporating a comprehensive thermal analysis modeling capability. Typically, temperature affects the behavior of smart materials by three distinct mechanisms: (1) induction of thermal strains because of coefficient of thermal expansion mismatch; (2) pyroelectric effects on the piezoelectric elements; and (3) temperature-dependent changes in material properties. Previous analytical models only investigated the first two thermal effects mechanisms. However, since the material properties of piezoelectric materials generally vary greatly with temperature (see the graph), incorporating temperature-dependent material properties will significantly affect the structural deflections, sensory voltages, and stresses. Thus, the current analytical model captures thermal effects arising from all three mechanisms through thermopiezoelectric constitutive equations. These constitutive equations were incorporated into a layerwise laminate theory with the inherent capability to model both the active and sensory response of smart structures in thermal environments. Corresponding finite element equations were formulated and implemented for both beam and plate elements to provide a comprehensive thermal effects modeling capability.
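    In compact form, linear thermopiezoelectric constitutive relations of the kind this abstract refers to couple all three mechanisms; the symbols below follow common usage and are not necessarily the paper's notation:

    ```latex
    \begin{aligned}
    \varepsilon_{ij} &= s_{ijkl}(T)\,\sigma_{kl} + d_{kij}(T)\,E_k + \alpha_{ij}\,\Delta T,\\
    D_i &= d_{ikl}(T)\,\sigma_{kl} + \kappa_{ik}(T)\,E_k + p_i\,\Delta T,
    \end{aligned}
    ```

    where the thermal-strain term with the expansion coefficients α_ij carries mechanism (1), the pyroelectric coefficients p_i carry mechanism (2), and the temperature dependence of the compliance s, piezoelectric coupling d, and permittivity κ carries mechanism (3).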

  1. Experimental and analytical studies of flow through a ventral and axial exhaust nozzle system for STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Esker, Barbara S.; Debonis, James R.

    1991-01-01

    Flow through a combined ventral and axial exhaust nozzle system was studied experimentally and analytically. The work is part of an ongoing propulsion technology effort at NASA Lewis Research Center for short takeoff and vertical landing (STOVL) aircraft. The experimental investigation was done on the NASA Lewis Powered Lift Facility. The experiment consisted of performance testing over a range of tailpipe pressure ratios from 1 to 3.2 and flow visualization. The analytical investigation consisted of modeling the same configuration and solving for the flow using the PARC3D computational fluid dynamics program. The comparison of experimental and analytical results was very good. The ventral nozzle performance coefficients obtained from both the experimental and analytical studies agreed within 1.2 percent. The net horizontal thrust of the nozzle system contained a significant reverse thrust component created by the flow overturning in the ventral duct. This component resulted in a low net horizontal thrust coefficient. The experimental and analytical studies showed very good agreement in the internal flow patterns.

  2. Material Modeling of Space Shuttle Leading Edge and External Tank Materials For Use in the Columbia Accident Investigation

    NASA Technical Reports Server (NTRS)

    Carney, Kelly; Melis, Matthew; Fasanella, Edwin L.; Lyle, Karen H.; Gabrys, Jonathan

    2004-01-01

    Upon the commencement of the analytical effort to characterize the impact dynamics and damage of the Space Shuttle Columbia leading edge due to External Tank insulating foam, the necessity of creating analytical descriptions of these materials became evident. To that end, material models were developed of the leading edge thermal protection system, Reinforced Carbon-Carbon (RCC), and a low density polyurethane foam, BX-250. Challenges in modeling the RCC include its extreme brittleness, the differing behavior in compression and tension, and the anisotropic fabric layup. These effects were successfully included in LS-DYNA Material Model 58, *MAT_LAMINATED_COMPOSITE_FABRIC. The differing compression and tension behavior was modeled using the available damage parameters. Each fabric layer was given an integration point in the shell element, and was allowed to fail independently. Comparisons were made to static test data and coupon ballistic impact tests before being utilized in the full scale analysis. The foam's properties were typical of elastic automotive foams, and LS-DYNA Material Model 83, *MAT_FU_CHANG_FOAM, was successfully used to model its behavior. Material parameters defined included strain rate dependent stress-strain curves for both loading and un-loading, and for both compression and tension. This model was formulated with static test data and strain rate dependent test data, and was compared to ballistic impact tests on load-cell instrumented aluminum plates. These models were subsequently utilized in analysis of the Shuttle leading edge full scale ballistic impact tests, and are currently being used in the Return to Flight Space Shuttle re-certification effort.

  3. Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.

    2010-01-01

    Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.

  4. Fluid dynamic mechanisms and interactions within separated flows

    NASA Astrophysics Data System (ADS)

    Dutton, J. C.; Addy, A. L.

    1990-02-01

    The significant results of a joint research effort investigating the fundamental fluid dynamic mechanisms and interactions within high-speed separated flows are presented in detail. The results have been obtained through analytical and numerical approaches, but with primary emphasis on experimental investigations of missile and projectile base flow-related configurations. The objectives of the research program focus on understanding the component mechanisms and interactions which establish and maintain high-speed separated flow regions. The analytical and numerical efforts have centered on unsteady plume-wall interactions in rocket launch tubes and on predictions of the effects of base bleed on transonic and supersonic base flowfields. The experimental efforts have considered the development and use of a state-of-the-art two-component laser Doppler velocimeter (LDV) system for experiments with planar, two-dimensional, small-scale models in supersonic flows. The LDV experiments have yielded high quality, well documented mean and turbulence velocity data for a variety of high-speed separated flows including initial shear layer development, recompression/reattachment processes for two supersonic shear layers, oblique shock wave/turbulent boundary layer interactions in a compression corner, and two-stream, supersonic, near-wake flow behind a finite-thickness base.

  5. Correlation of finite element free vibration predictions using random vibration test data. M.S. Thesis - Cleveland State Univ.

    NASA Technical Reports Server (NTRS)

    Chambers, Jeffrey A.

    1994-01-01

    Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of the analytical results. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross-orthogonality check. A great deal of time and effort can be expended in generating the set of test-acquired mode shapes needed for the cross-orthogonality check. In most situations, response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test-acquired mode shapes can be achieved without conducting the modal survey. Instead, a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that, once normalized, can be used to represent the test-acquired mode shapes in the cross-orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model and a complex space flight structure.
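The cross-orthogonality check mentioned above reduces to a small matrix product: with mass-normalized mode-shape matrices, C = Φ_test^T M Φ_fem has diagonal terms near 1 and off-diagonal terms near 0 when the shapes correlate well. A minimal pure-Python illustration on a hypothetical 2-DOF system (all numbers invented for the sketch):

```python
def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def mass_normalize(phi, M):
    """Scale each mode-shape column so that phi_i^T M phi_i = 1."""
    cols = transpose(phi)
    out = []
    for v in cols:
        m = sum(v[i] * M[i][j] * v[j]
                for i in range(len(v)) for j in range(len(v)))
        out.append([x / m ** 0.5 for x in v])
    return transpose(out)

# hypothetical 2-DOF system: diagonal mass matrix, analytical (FEM) mode shapes
M = [[2.0, 0.0], [0.0, 1.0]]
phi_fem = mass_normalize([[1.0, 1.0], [1.0, -2.0]], M)
# "test" shapes: FEM shapes with a small perturbation standing in for measured data
phi_test = mass_normalize([[1.02, 0.97], [0.99, -2.05]], M)
# cross-orthogonality matrix: diagonal ~1, off-diagonal ~0 for well-correlated shapes
C = matmul(transpose(phi_test), matmul(M, phi_fem))
```

In practice the test shapes come from measured data reduced to the FEM degrees of freedom; diagonal terms above roughly 0.9 and off-diagonal terms below roughly 0.1 are an often-cited acceptance guideline.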

  6. The Wide-Field Imaging Interferometry Testbed: Enabling Techniques for High Angular Resolution Astronomy

    NASA Technical Reports Server (NTRS)

    Rinehart, S. A.; Armstrong, T.; Frey, Bradley J.; Jung, J.; Kirk, J.; Leisawitz, David T.; Leviton, Douglas B.; Lyon, R.; Maher, Stephen; Martino, Anthony J.

    2007-01-01

    The Wide-Field Imaging Interferometry Testbed (WIIT) was designed to develop techniques for wide field-of-view imaging interferometry using "double-Fourier" methods. These techniques will be important for a wide range of future space-based interferometry missions. We have already provided simple demonstrations of the methodology, and continuing development of the testbed will lead to higher data rates, improved data quality, and refined algorithms for image reconstruction. At present, the testbed effort includes five lines of development: automation of the testbed, operation in an improved environment, acquisition of large high-quality datasets, development of image reconstruction algorithms, and analytical modeling of the testbed. We discuss the progress made towards the first four of these goals; the analytical modeling is discussed in a separate paper within this conference.

  7. Review and assessment of the database and numerical modeling for turbine heat transfer

    NASA Technical Reports Server (NTRS)

    Gladden, H. J.; Simoneau, R. J.

    1988-01-01

    The objectives of the HOST Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena and to assess and improve the analytical methods used to predict the flow and heat transfer in high-temperature gas turbines. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. A building-block approach was utilized and the research ranged from the study of fundamental phenomena and modeling to experiments in simulated real engine environments. Experimental research accounted for approximately 75 percent of the funding while the analytical efforts were approximately 25 percent. A healthy government/industry/university partnership, with industry providing almost half of the research, was created to advance the turbine heat transfer design technology base.

  8. Modeling ozone episodes in the Baltimore-Washington region

    NASA Technical Reports Server (NTRS)

    Ryan, William F.

    1994-01-01

    Surface ozone (O3) concentrations in excess of the National Ambient Air Quality Standard (NAAQS) continue to occur in metropolitan areas in the United States despite efforts to control emissions of O3 precursors. Future O3 control strategies will be based on results from modeling efforts that have just begun in many areas. Two initial questions that arise are model sensitivity to domain-specific conditions and the selection of episodes for model evaluation and control strategy development. For the Baltimore-Washington region (B-W), the presence of the Chesapeake Bay introduces a number of issues relevant to model sensitivity. In this paper, the specific question of determining the model volume (mixing height) for the Urban Airshed Model (UAM) is discussed and various alternative methods are compared. For the latter question, several analytic approaches, cluster analysis and Classification and Regression Tree (CART) analysis, are undertaken to determine the meteorological conditions associated with severe O3 events in the B-W domain.

  9. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Leung, Martin S. K.

    1995-01-01

    The objective of this research effort was to develop a real-time guidance approach for launch vehicle ascent to orbit injection. Various analytical approaches combined with a variety of model-order and model-complexity reductions were investigated. Singular perturbation methods were attempted first and found to be unsatisfactory. A second approach, based on regular perturbation analysis, was subsequently investigated. It also fails because the aerodynamic effects (ignored in the zero-order solution) are too large to be treated as perturbations. The study therefore demonstrates that perturbation methods alone (both regular and singular) are inadequate for developing a guidance algorithm for the atmospheric flight phase of a launch vehicle. During a second phase of the research effort, a hybrid analytic/numerical approach was developed and evaluated. The approach combines the numerical method of collocation with the analytical method of regular perturbations, and introduces the concept of choosing intelligent interpolating functions. Regular perturbation analysis allows the use of a crude representation for the collocation solution, and intelligent interpolating functions further reduce the number of elements without sacrificing approximation accuracy. As a result, the combined method forms a powerful tool for solving real-time optimal control problems. Details of the approach are illustrated in a fourth-order nonlinear example. The hybrid approach is then applied to the launch vehicle problem. The collocation solution is derived from a bilinear tangent steering law, and results in a guidance solution for the entire flight regime that includes both atmospheric and exoatmospheric flight phases.

  10. Nondestructive assessment of single-span timber bridges using a vibration- based method

    Treesearch

    Xiping Wang; James P. Wacker; Angus M. Morison; John W. Forsman; John R. Erickson; Robert J. Ross

    2005-01-01

    This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...
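The simple-beam relationship referred to above can be written, for a simply supported span, as f1 = (π / 2L²)·√(EI/m̄), which inverts to EI = 4·f1²·m̄·L⁴ / π². A hedged sketch of that inversion (span, mass, and stiffness numbers are invented, and a real timber bridge deviates from ideal pin-pin support):

```python
import math

def beam_frequency(EI, span, mass_per_len):
    """First natural frequency (Hz) of a simply supported beam:
    f1 = (pi / (2 L^2)) * sqrt(EI / m)."""
    return (math.pi / (2.0 * span ** 2)) * math.sqrt(EI / mass_per_len)

def EI_from_frequency(f1, span, mass_per_len):
    """Invert the relation above to back out effective flexural stiffness:
    EI = 4 f1^2 m L^4 / pi^2."""
    return 4.0 * f1 ** 2 * mass_per_len * span ** 4 / math.pi ** 2

# hypothetical single-span deck: L = 8 m, m = 300 kg/m, EI = 5.0e8 N*m^2
f1 = beam_frequency(5.0e8, 8.0, 300.0)
EI_est = EI_from_frequency(f1, 8.0, 300.0)
```

This is the sense in which a measured natural frequency gives a global stiffness estimate: one forced-vibration measurement of f1 recovers an effective EI for the whole superstructure.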

  11. The Sunk Cost Effect in Pigeons and Humans

    ERIC Educational Resources Information Center

    Navarro, Anton D.; Fantino, Edmund

    2005-01-01

    The sunk cost effect is the increased tendency to persist in an endeavor once an investment of money, effort, or time has been made. To date, humans are the only animal in which this effect has been observed unambiguously. We developed a behavior-analytic model of the sunk cost effect to explore the potential for this behavior in pigeons as well…

  12. Research Initiatives and Preliminary Results In Automation Design In Airspace Management in Free Flight

    NASA Technical Reports Server (NTRS)

    Corker, Kevin; Lebacqz, J. Victor (Technical Monitor)

    1997-01-01

    NASA and the FAA have entered into a joint venture to explore, define, design, and implement a new airspace management operating concept. The fundamental premise of that concept is that technologies and procedures need to be developed for flight deck and ground operations to improve the efficiency, predictability, flexibility, and safety of airspace management and operations. To that end, NASA Ames has undertaken an initial development and exploration of "key concepts" in free flight airspace management technology development. Human factors issues in automation aiding design, coupled aiding systems between air and ground, communication protocols in distributed decision making, and analytic techniques for defining concepts of airspace density and operator cognitive load have been examined. This paper reports the progress of these efforts, which are not intended to definitively solve the many evolving issues of design for future ATM systems, but to provide preliminary results that chart the parameters of performance and the topology of the analytic effort required. The preliminary research on cockpit display of traffic information, dynamic density definition, distributed decision making, situation awareness models, and human performance models is discussed as it relates to the theme of "design requirements".

  13. Lead Slowing-Down Spectrometry Time Spectral Analysis for Spent Fuel Assay: FY11 Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulisek, Jonathan A.; Anderson, Kevin K.; Bowyer, Sonya M.

    2011-09-30

    Developing a method for the accurate, direct, and independent assay of the fissile isotopes in bulk materials (such as used fuel) from next-generation domestic nuclear fuel cycles is a goal of the Office of Nuclear Energy, Fuel Cycle R&D, Material Protection and Control Technology (MPACT) Campaign. To meet this goal, MPACT supports a multi-institutional collaboration, of which PNNL is a part, to study the feasibility of Lead Slowing Down Spectroscopy (LSDS). This technique is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic masses in used fuel with an uncertainty considerably lower than the approximately 10% typical of today's confirmatory assay methods. This document is a progress report for FY2011 PNNL analysis and algorithm development. Progress made by PNNL in FY2011 continues to indicate the promise of LSDS analysis and algorithms applied to used fuel. PNNL developed an empirical model based on calibration of the LSDS to responses generated from well-characterized used fuel. The empirical model accounts for self-shielding effects using empirical basis vectors calculated from the singular value decomposition (SVD) of a matrix containing the true self-shielding functions of the used fuel assembly models. The potential for the direct and independent assay of the sum of the masses of 239Pu and 241Pu to within approximately 3% over a wide used fuel parameter space was demonstrated. Also in FY2011, PNNL continued to develop an analytical model. These efforts included the addition of six more non-fissile absorbers to the analytical shielding function and accounting for the non-uniformity of the neutron flux across the LSDS assay chamber. A hybrid analytical-empirical approach was developed to determine the mass of total Pu (the sum of the masses of 239Pu, 240Pu, and 241Pu), which is an important quantity in safeguards. Results using this hybrid method were of approximately the same accuracy as those of the pure empirical approach. In addition, total Pu was determined with much better accuracy by the hybrid approach than by the pure analytical approach. In FY2012, PNNL will continue efforts to optimize its empirical model and minimize its reliance on calibration data. In addition, PNNL will continue to develop the analytical model, considering effects such as neutron scattering in the fuel and cladding, as well as neutrons streaming through gaps between fuel pins in the fuel assembly.
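The empirical-model idea, calibrating assay responses against well-characterized fuel, can be illustrated in miniature. The sketch below fits a two-term linear model mass ≈ c1·r1 + c2·r2 by least squares; the actual PNNL model uses SVD-derived self-shielding basis vectors, and all numbers here are synthetic:

```python
def fit_two_term(calib):
    """Least-squares fit of mass = c1*r1 + c2*r2 via the 2x2 normal equations
    (solved with Cramer's rule)."""
    s11 = sum(r1 * r1 for r1, r2, m in calib)
    s12 = sum(r1 * r2 for r1, r2, m in calib)
    s22 = sum(r2 * r2 for r1, r2, m in calib)
    b1 = sum(r1 * m for r1, r2, m in calib)
    b2 = sum(r2 * m for r1, r2, m in calib)
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

# synthetic calibration set: (response in time bin 1, response in time bin 2,
# known fissile mass), generated from an assumed "true" model m = 1.2*r1 + 1.6*r2
calib = [(1.0, 0.5, 2.0), (2.0, 0.4, 3.04), (1.5, 0.9, 3.24), (0.5, 0.2, 0.92)]
c1, c2 = fit_two_term(calib)
# the fitted coefficients then predict mass for an uncalibrated assembly
m_pred = c1 * 1.2 + c2 * 0.6
```

The report's point about "reliance on calibration data" is visible even here: the coefficients are only as trustworthy as the span of the calibration set.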

  14. Analytical Plan for Roman Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strachan, Denis M.; Buck, Edgar C.; Mueller, Karl T.

    Roman glasses that have been in the sea or underground for about 1800 years can serve as the independent “experiment” that is needed for validation of codes and models that are used in performance assessment. Two sets of Roman-era glasses have been obtained for this purpose. One set comes from the sunken vessel the Iulia Felix; the second from recently excavated glasses from a Roman villa in Aquileia, Italy. The specimens contain glass artifacts and attached sediment or soil. In the case of the Iulia Felix glasses, quite a lot of analytical work has been completed at the University of Padova, but from an archaeological perspective. The glasses from Aquileia have not been so carefully analyzed, but they are similar to other Roman glasses. Both glass and sediment or soil need to be analyzed and are the subject of this analytical plan. The glasses need to be analyzed with the goal of validating the model used to describe glass dissolution. The sediment and soil need to be analyzed to determine the profile of elements released from the glass. This latter need represents a significant analytical challenge because of the trace quantities that need to be analyzed. Both pieces of information will yield important information useful in the validation of the glass dissolution model and the chemical transport code(s) used to determine the migration of elements once released from the glass. In this plan, we outline the analytical techniques that should be useful in obtaining the needed information and suggest a useful starting point for this analytical effort.

  15. Technical, analytical and computer support

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The development of a rigorous mathematical model for the design and performance analysis of cylindrical silicon-germanium thermoelectric generators is reported. The model consists of two parts, a steady-state (static) part and a transient (dynamic) part. The material study task involves the definition and implementation of a study that aims to experimentally characterize the long-term behavior of the thermoelectric properties of silicon-germanium alloys as a function of temperature. Analytical and experimental efforts are aimed at determining the sublimation characteristics of silicon-germanium alloys and studying sublimation effects on RTG performance. Studies are also performed on a variety of specific topics in thermoelectric energy conversion.

  16. Static Footprint Local Forces, Areas, and Aspect Ratios for Three Type 7 Aircraft Tires

    NASA Technical Reports Server (NTRS)

    Howell, William E.; Perez, Sharon E.; Vogler, William A.

    1991-01-01

    The National Tire Modeling Program (NTMP) is a joint NASA/industry effort to improve the understanding of tire mechanics and develop accurate analytical design tools. This effort includes fundamental analytical and experimental research on the structural mechanics of tires. Footprint local forces, areas, and aspect ratios were measured. Local footprint forces in the vertical, lateral, and drag directions were measured with a special footprint force transducer. Measurements of the local forces in the footprint were obtained by positioning the transducer at specified locations within the footprint and externally loading the tires. Three tires were tested: (1) one representative of those used on the main landing gear of B-737 and DC-9 commercial transport airplanes, (2) a nose landing gear tire for the Space Shuttle Orbiter, and (3) a main landing gear tire for the Space Shuttle Orbiter. Data obtained for various inflation pressures and vertical loads are presented for two aircraft tires. The results are presented in graphical and tabulated forms.

  17. Space Storable Propellant Performance Gas/Liquid Like-Doublet Injector Characterization

    NASA Technical Reports Server (NTRS)

    Falk, A. Y.

    1972-01-01

    A 30-month applied research program was conducted, encompassing an analytical, design, and experimental effort to relate injector design parameters to simultaneous attainment of high performance and component (injector/thrust chamber) compatibility for gas/liquid space-storable propellants. The gas/liquid propellant combination selected for study was FLOX (82.6% F2)/ambient temperature gaseous methane. The injector pattern characterized was the like-(self)-impinging doublet. Program effort was apportioned into four basic technical tasks: injector and thrust chamber design, injector and thrust chamber fabrication, performance evaluation testing, and data evaluation and reporting. Analytical parametric combustion analyses and cold flow distribution and atomization experiments were conducted with injector segment models to support design of injector/thrust chamber combinations for hot fire evaluation. Hot fire tests were conducted to: (1) optimize performance of the injector core elements, and (2) provide design criteria for the outer zone elements so that injector/thrust chamber compatibility could be achieved with only minimal performance losses.

  18. Verification of an analytic modeler for capillary pump loop thermal control systems

    NASA Technical Reports Server (NTRS)

    Schweickart, R. B.; Neiswanger, L.; Ku, J.

    1987-01-01

    A number of computer programs have been written to model two-phase heat transfer systems for space use. These programs support the design of thermal control systems and provide a method of predicting their performance in the wide range of thermal environments of space. Predicting the performance of one such system known as the capillary pump loop (CPL) is the intent of the CPL Modeler. By modeling two developed CPL systems and comparing the results with actual test data, the CPL Modeler has proven useful in simulating CPL operation. Results of the modeling effort are discussed, together with plans for refinements to the modeler.

  19. Effect of risk perception on epidemic spreading in temporal networks

    NASA Astrophysics Data System (ADS)

    Moinet, Antoine; Pastor-Satorras, Romualdo; Barrat, Alain

    2018-01-01

    Much progress in the understanding of epidemic spreading models has been made thanks to numerous modeling efforts and analytical and numerical studies, considering host populations with very different structures and properties, including complex and temporal interaction networks. Moreover, a number of recent studies have started to go beyond the assumption of an absence of coupling between the spread of a disease and the structure of the contacts on which it unfolds. Models including awareness of the spread have been proposed, to mimic possible precautionary measures taken by individuals that decrease their risk of infection, but have mostly considered static networks. Here, we adapt such a framework to the more realistic case of temporal networks of interactions between individuals. We study the resulting model by analytical and numerical means on both simple models of temporal networks and empirical time-resolved contact data. Analytical results show that the epidemic threshold is not affected by the awareness but that the prevalence can be significantly decreased. Numerical studies on synthetic temporal networks highlight, however, the presence of very strong finite-size effects, resulting in a significant shift of the effective epidemic threshold in the presence of risk awareness. For empirical contact networks, the awareness mechanism leads as well to a shift in the effective threshold and to a strong reduction of the epidemic prevalence.
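A toy version of the awareness mechanism described above: in an SIS model on a random temporal contact sequence, let each susceptible scale its per-contact infection probability by (1 − a·prevalence). This sketch is a minimal construction of our own, not the paper's model; note the scaling factor tends to 1 as prevalence tends to 0, so the threshold itself is unchanged while the endemic prevalence drops:

```python
import random

def sis_temporal(n=200, steps=400, contacts_per_step=300,
                 beta=0.08, mu=0.02, awareness=0.0, seed=1):
    """SIS epidemic on a random temporal contact sequence. With awareness a > 0,
    each susceptible scales its infection probability by (1 - a * prevalence)."""
    rng = random.Random(seed)
    infected = set(rng.sample(range(n), 10))
    for _ in range(steps):
        prevalence = len(infected) / n
        p_inf = beta * (1.0 - awareness * prevalence)
        new_inf, recovered = set(), set()
        for _ in range(contacts_per_step):       # the temporal contacts of this step
            i, j = rng.randrange(n), rng.randrange(n)
            for src, dst in ((i, j), (j, i)):
                if src in infected and dst not in infected and rng.random() < p_inf:
                    new_inf.add(dst)
        for node in infected:                    # recovery back to susceptible
            if rng.random() < mu:
                recovered.add(node)
        infected = (infected | new_inf) - recovered
    return len(infected) / n                     # final prevalence
```

Running the same seeded contact sequence with and without awareness shows the prevalence reduction the abstract describes; finite-size effects like those the paper reports would require sweeping n and beta.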

  20. The MODE family of facility class experiments

    NASA Technical Reports Server (NTRS)

    Miller, David W.

    1992-01-01

    The objective of the Middeck 0-gravity Dynamics Experiment (MODE) is to characterize fundamental 0-g slosh behavior and obtain quantitative data on slosh force and spacecraft response for correlation of the analytical model. The topics are presented in viewgraph form and include the following: space results; STA objectives, requirements, and approach; comparison of ground to orbital data for the baseline configuration; conclusions of orbital testing; flight experiment resources; Middeck Active Control Experiment (MACE); MACE 1-G and 0-G models; and future efforts.

  1. An approximate analytical solution for describing surface runoff and sediment transport over hillslope

    NASA Astrophysics Data System (ADS)

    Tao, Wanghai; Wang, Quanjiu; Lin, Henry

    2018-03-01

    Soil and water loss from farmland causes land degradation and water pollution, thus continued efforts are needed to establish mathematical models for quantitative analysis of the relevant processes and mechanisms. In this study, an approximate analytical solution has been developed for an overland flow model and a sediment transport model, offering a simple and effective means to predict overland flow and erosion under natural rainfall conditions. In the overland flow model, the flow regime was considered to be transitional, with the value of the parameter β (in the kinematic wave model) approximately two. The change rate of unit discharge with distance was assumed to be constant and equal to the runoff rate at the outlet of the plane. The excess rainfall was considered to be constant under uniform rainfall conditions. The overland flow model can be further applied to natural rainfall conditions by treating the excess rainfall intensity as constant over a small time interval. For the sediment model, recommended values of the runoff erosion calibration constant (cr) and the splash erosion calibration constant (cf), 0.15 and 0.12 respectively, are given in this study so that the model is easier to use. Comparisons with observed results were carried out to validate the proposed analytical solution. The results showed that the approximate analytical solution closely matches the observed data, thus providing an alternative method of predicting runoff generation and sediment yield, and offering a more convenient way of analyzing the quantitative relationships between variables. Furthermore, the model developed in this study can be used as a theoretical basis for developing runoff and erosion control methods.
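The steady-state limit of the kinematic wave model described above has a closed form: with q = α·h^β and lateral inflow from excess rainfall r, continuity gives q(x) = r·x along the plane, hence h(x) = (r·x/α)^(1/β). A small sketch with β = 2, the transitional-regime value the abstract cites (all parameter values hypothetical):

```python
def depth_profile(r_excess, alpha, beta, length, n=6):
    """Steady-state kinematic-wave depth profile: q(x) = r*x and q = alpha*h**beta
    give h(x) = (r*x / alpha) ** (1/beta)."""
    xs = [length * i / (n - 1) for i in range(n)]
    return [(r_excess * x / alpha) ** (1.0 / beta) for x in xs]

# hypothetical plane: excess rainfall r = 1e-5 m/s, alpha = 3.0 (SI units),
# beta = 2.0, slope length L = 50 m
depths = depth_profile(1e-5, 3.0, 2.0, 50.0)
```

Depth grows monotonically downslope toward the outlet, which is where the paper anchors its assumed runoff rate.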

  2. Description and comparison of selected models for hydrologic analysis of ground-water flow, St Joseph River basin, Indiana

    USGS Publications Warehouse

    Peters, J.G.

    1987-01-01

    The Indiana Department of Natural Resources (IDNR) is developing water-management policies designed to assess the effects of irrigation and other water uses on water supply in the basin. In support of this effort, the USGS, in cooperation with IDNR, began a study to evaluate appropriate methods for analyzing the effects of pumping on ground-water levels and streamflow in the basin's glacial aquifer systems. Four analytical models describe drawdown for a nonleaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and fully penetrating well; a leaky, confined aquifer and partially penetrating well; and an unconfined aquifer and partially penetrating well. Analytical equations, simplifying assumptions, and methods of application are described for each model. In addition to these four models, several other analytical models were used to predict the effects of ground-water pumping on water levels in the aquifer and on streamflow in local areas with up to two pumping wells. Analytical models for a variety of other hydrogeologic conditions are cited. A digital ground-water flow model was used to describe how a numerical model can be applied to a glacial aquifer system. The numerical model was used to predict the effects of six pumping plans in a 46.5 sq mi area with as many as 150 wells. Water budgets for the six pumping plans were used to estimate the effect of pumping on streamflow reduction. Results of the analytical and numerical models indicate that, in general, the glacial aquifers in the basin are highly permeable. Radial hydraulic conductivity calculated by the analytical models ranged from 280 to 600 ft/day, compared to 210 and 360 ft/day used in the numerical model. Maximum seasonal pumping for irrigation produced a maximum calculated drawdown of only one-fourth of the available drawdown and reduced streamflow by as much as 21%. 
Analytical models are useful for estimating aquifer properties and predicting local effects of pumping in areas with simple lithology and boundary conditions and with few pumping wells. Numerical models are useful in regional areas with complex hydrogeology and many pumping wells, and provide detailed water budgets useful for estimating the sources of water in pumping simulations. Numerical models are also useful in constructing flow nets. The choice of which type of model to use is also based on the nature and scope of the questions to be answered and on the degree of accuracy required. (Author's abstract)
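The first of the four analytical models listed, the nonleaky confined aquifer with a fully penetrating well, is the classical Theis solution: s = Q/(4πT)·W(u) with u = r²S/(4Tt), where W(u) is the exponential integral E1. A sketch using the small-u convergent series for W(u) (the aquifer numbers are hypothetical, not from this report):

```python
import math

def well_function(u, terms=30):
    """Theis well function W(u) = E1(u) via its small-u convergent series:
    W(u) = -gamma - ln(u) + sum_{k>=1} (-1)**(k+1) * u**k / (k * k!)."""
    euler_gamma = 0.5772156649015329
    total = -euler_gamma - math.log(u)
    factorial = 1.0
    sign = 1.0
    for k in range(1, terms + 1):
        factorial *= k
        total += sign * u ** k / (k * factorial)
        sign = -sign
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q / (4 pi T) * W(u), with u = r^2 S / (4 T t)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# hypothetical aquifer: Q = 0.05 m^3/s, T = 0.01 m^2/s, S = 2e-4, t = 1 day,
# evaluated at two radial distances from the pumping well
s_near = theis_drawdown(0.05, 0.01, 2e-4, 10.0, 86400.0)
s_far = theis_drawdown(0.05, 0.01, 2e-4, 100.0, 86400.0)
```

Drawdown decays with distance from the well, which is the local-effect behavior the analytical models above are used to screen before a full numerical model is built.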

  3. Development of flexural vibration inspection techniques to rapidly assess the structural health of timber bridge systems

    Treesearch

    Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw; Robert Vatalaro

    2005-01-01

    This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...

  4. Hypersonic research engine project. Phase 2: Aerothermodynamic integration model development

    NASA Technical Reports Server (NTRS)

    Jilly, L. F. (Editor)

    1970-01-01

    The analytical effort was directed towards (1) completing the design of the combustor exit instrumentation assembly, (2) analyzing the coolant flow distribution of the cowl leading edge tip section, (3) determining effects of purge gas pressure on AIM performance analysis, and (4) analyzing heat transfer and associated stress problems related to the cowl leading edge tip section and the nozzle shroud assembly for test conditions.

  5. Fault Detection of Rotating Machinery using the Spectral Distribution Function

    NASA Technical Reports Server (NTRS)

    Davis, Sanford S.

    1997-01-01

    The spectral distribution function is introduced to characterize the process leading to faults in rotating machinery. It is shown to be a more robust indicator than conventional power spectral density estimates, yet requires only slightly more computational effort. The method is illustrated with examples from seeded gearbox transmission faults and an analytical model of a defective bearing. Procedures are suggested for implementation in realistic environments.
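A spectral distribution function is commonly taken as the normalized cumulative integral of the power spectral density, F(f) = ∫₀^f S dν / ∫₀^∞ S dν; a developing fault that injects high-frequency content shifts F toward higher frequencies. A minimal sketch under that reading (naive O(N²) DFT, synthetic signals standing in for gearbox data):

```python
import math

def periodogram(x):
    """Naive O(N^2) DFT periodogram (adequate for short records)."""
    n = len(x)
    psd = []
    for k in range(n // 2):
        re = sum(x[t] * math.cos(2.0 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2.0 * math.pi * k * t / n) for t in range(n))
        psd.append((re * re + im * im) / n)
    return psd

def spectral_distribution(x):
    """Normalized cumulative spectrum: F_k = sum_{j<=k} PSD_j / total power."""
    psd = periodogram(x)
    total = sum(psd)
    cum, out = 0.0, []
    for p in psd:
        cum += p
        out.append(cum / total)
    return out

n = 256
healthy = [math.sin(2.0 * math.pi * 8 * t / n) for t in range(n)]
# a "fault" injects extra high-frequency content at bin 60
faulty = [h + 0.8 * math.sin(2.0 * math.pi * 60 * t / n)
          for t, h in enumerate(healthy)]
```

For the healthy signal F saturates by its single tone's bin; for the faulty one a visible fraction of the cumulative power arrives only at the higher bin, a single-number shift that is less sensitive to bin-to-bin PSD noise.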

  6. High efficiency silicon solar cell review

    NASA Technical Reports Server (NTRS)

    Godlewski, M. P. (Editor)

    1975-01-01

    An overview is presented of the current research and development efforts to improve the performance of the silicon solar cell. The 24 papers presented review experimental and analytic modeling work that emphasizes the improvement of conversion efficiency and the reduction of manufacturing costs. A summary is given of the round-table discussion, in which the near- and far-term directions of future efficiency improvements were discussed.

  7. Productivity and injectivity of horizontal wells. Quarterly report, October 1--December 31, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fayers, F.J.; Aziz, K.; Hewett, T.A.

    1993-03-10

    A number of activities have been carried out in the last three months. A list outlining these efforts is presented below, followed by a brief description of each activity in the subsequent sections of this report: Progress is being made on the development of a black-oil three-phase simulator which will allow the use of a generalized Voronoi grid in the plane perpendicular to a horizontal well. The available analytical solutions in the literature for calculating productivity indices (Inflow Performance) of horizontal wells have been reviewed. The pseudo-steady state analytic model of Goode and Kuchuk has been applied to an example problem. A general mechanistic two-phase flow model is under development. The model is capable of predicting flow transition boundaries for a horizontal pipe at any inclination angle. It also has the capability of determining pressure drops and holdups for all the flow regimes. A large code incorporating all the features of the model has been programmed and is currently being tested.

  8. Impact and Penetration Simulations for Composite Wing-like Structures

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.

    1998-01-01

    The goal of this research project was to develop methodologies for the analysis of wing-like structures subjected to impact loadings. Low-speed impact causing either no damage or only minimal damage, and high-speed impact causing severe laminate damage and possible penetration of the structure, were to be considered during this research effort. To address this goal, an assessment of current analytical tools for impact analysis was performed. The analytical tools for impact and penetration simulations were assessed with regard to accuracy, modeling capability, and damage modeling, as well as robustness, efficiency, and usability in a wing design environment. Following the qualitative assessment, selected quantitative evaluations were to be performed using the leading simulation tools. Based on this assessment, future research thrusts for impact and penetration simulation of composite wing-like structures were identified.

  9. Review and assessment of the database and numerical modeling for turbine heat transfer

    NASA Technical Reports Server (NTRS)

    Gladden, H. J.; Simoneau, R. J.

    1989-01-01

    The objectives of the NASA Hot Section Technology (HOST) Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena and to assess and improve the analytical methods used to predict the flow and heat transfer in high-temperature gas turbines. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. A building-block approach was utilized and the research ranged from the study of fundamental phenomena and modeling to experiments in simulated real engine environments. Experimental research accounted for approximately 75 percent of the funding while the analytical efforts were approximately 25 percent. A healthy government/industry/university partnership, with industry providing almost half of the research, was created to advance the turbine heat transfer design technology base.

  10. Active Figure Control Effects on Mounting Strategy for X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Kolodziejczak, Jeffery J.; Atkins, Carolyn; Roche, Jacqueline M.; ODell, Stephen L.; Ramsey, Brian D.; Elsner, Ronald F.; Weisskopf, Martin C.; Gubarev, Mikhail V.

    2014-01-01

    As part of ongoing development efforts at MSFC, we have begun to investigate mounting strategies for highly nested x-ray optics in both full-shell and segmented configurations. The analytical infrastructure for this effort also lends itself to investigation of active strategies. We expect that a consequence of active figure control on relatively thin substrates is that errors are propagated to the edges, where they might affect the effective precision of the mounting points. Based upon modeling, we describe parametrically the conditions under which active mounts are preferred over fixed ones, and the effect of active figure corrections on the required number, locations, and kinematic characteristics of mounting points.

  11. Experimental and Analytical Studies for a Computational Materials Program

    NASA Technical Reports Server (NTRS)

    Knauss, W. G.

    1999-01-01

    The studies supported by Grant NAG1-1780 were directed at providing physical data on polymer behavior that would form the basis for computationally modeling these materials. Because of ongoing work in polymer characterization, this grant supported part of a larger research picture. Efforts went into two combined areas of time-dependent mechanical response: creep properties subject to different volumetric changes (nonlinearly viscoelastic behavior), and the time or frequency dependence of dilatational material behavior. The details of these endeavors are outlined sufficiently in the two appended publications, so that no further description of the effort is necessary.

  12. Advances in Scientific Balloon Thermal Modeling

    NASA Technical Reports Server (NTRS)

    Bohaboj, T.; Cathey, H. M., Jr.

    2004-01-01

    The National Aeronautics and Space Administration's Balloon Program Office has long acknowledged that the accurate modeling of balloon performance and flight prediction is dependent on how well the balloon is thermally modeled. This ongoing effort is focused on developing accurate balloon thermal models that can be used to quickly predict balloon temperatures and balloon performance. The ability to model parametric changes is also a driver for this effort. This paper presents the most recent advances made in this area. This research effort continues to utilize the "Thermal Desktop" add-on to AutoCAD for the modeling. Recent advances have been made by using this analytical tool. A number of analyses have been completed to test the applicability of this tool to the problem, with very positive results. Progressively detailed models have been developed to explore the capabilities of the tool as well as to provide guidance in model formulation. A number of parametric studies have been completed. These studies have varied the shape of the structure, material properties, environmental inputs, and model geometry. The studies concentrated on spherical "proxy models" in the initial development stages and then transitioned to the natural-shaped zero-pressure and super-pressure balloons. An assessment of required model resolution has also been made. Model solutions have been cross-checked against known solutions via hand calculations, and the comparison of these cases is also presented. One goal is to develop analysis guidelines and an approach for modeling balloons for both simple first-order estimates and detailed full models. This paper presents the step-by-step advances made as part of this effort, its capabilities and limitations, and the lessons learned. Also presented are the plans for further thermal modeling work.

  13. Workshop on Current Issues in Predictive Approaches to Intelligence and Security Analytics: Fostering the Creation of Decision Advantage through Model Integration and Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.

    2010-05-23

    The increasing asymmetric nature of threats to the security, health and sustainable growth of our society requires that anticipatory reasoning become an everyday activity. Currently, the use of anticipatory reasoning is hindered by the lack of systematic methods for combining knowledge- and evidence-based models, integrating modeling algorithms, and assessing model validity, accuracy and utility. The workshop addresses these gaps with the intent of fostering the creation of a community of interest on model integration and evaluation that may serve as an aggregation point for existing efforts and a launch pad for new approaches.

  14. Learning Analytics across a Statewide System

    ERIC Educational Resources Information Center

    Buyarski, Catherine; Murray, Jim; Torstrick, Rebecca

    2017-01-01

    This chapter explores lessons learned from two different learning analytics efforts at a large, public, multicampus university--one internally developed and one vended platform. It raises questions about how to best use analytics to support students while keeping students responsible for their own learning and success.

  15. Building analytic capacity, facilitating partnerships, and promoting data use in state health agencies: a distance-based workforce development initiative applied to maternal and child health epidemiology.

    PubMed

    Rankin, Kristin M; Kroelinger, Charlan D; Rosenberg, Deborah; Barfield, Wanda D

    2012-12-01

    The purpose of this article is to summarize the methodology, partnerships, and products developed as a result of a distance-based workforce development initiative to improve analytic capacity among maternal and child health (MCH) epidemiologists in state health agencies. This effort was initiated by the Centers for Disease Control's MCH Epidemiology Program and faculty at the University of Illinois at Chicago to encourage and support the use of surveillance data by MCH epidemiologists and program staff in state agencies. Beginning in 2005, distance-based training in advanced analytic skills was provided to MCH epidemiologists. To support participants, this model of workforce development included: lectures about the practical application of innovative epidemiologic methods, development of multidisciplinary teams within and across agencies, and systematic, tailored technical assistance. The goal of this initiative evolved to emphasize the direct application of advanced methods to the development of state data products using complex sample surveys, resulting in the articles published in this supplement to MCHJ. Innovative methods were applied by participating MCH epidemiologists, including regional analyses across geographies and datasets, multilevel analyses of state policies, and new indicator development. Support was provided for developing cross-state and regional partnerships and for developing and publishing the results of analytic projects. This collaboration was successful in building analytic capacity, facilitating partnerships, and promoting surveillance data use to address state MCH priorities, and may have broader application beyond MCH epidemiology. In an era of decreasing resources, such partnership efforts between state and federal agencies and academia are essential for promoting effective data use.

  16. Rayleigh-Taylor and Richtmyer-Meshkov instability induced flow, turbulence, and mixing. I

    NASA Astrophysics Data System (ADS)

    Zhou, Ye

    2017-12-01

    Rayleigh-Taylor (RT) and Richtmyer-Meshkov (RM) instabilities play an important role in a wide range of engineering, geophysical, and astrophysical flows. They represent a triggering event that, in many cases, leads to large-scale turbulent mixing. Much effort has been expended over the past 140 years, beginning with the seminal work of Lord Rayleigh, to predict the evolution of the instabilities and of the instability-induced mixing layers. The objective of Part I of this review is to provide the basic properties of the flow, turbulence, and mixing induced by RT, RM, and Kelvin-Helmholtz (KH) instabilities. Historical efforts to study these instabilities are briefly reviewed, and the significance of these instabilities is discussed for a variety of flows, particularly for astrophysical flows and for the case of inertial confinement fusion. Early experimental efforts are described, and analytical attempts to model the linear and nonlinear regimes of these mixing layers are examined. These analytical efforts include models for both single-mode and multi-mode initial conditions, as well as multi-scale models to describe the evolution. Comparisons of these models and theories to experimental and simulation studies are then presented. Next, attention is paid to the influence of stabilizing mechanisms (e.g., viscosity, surface tension, and diffuse interfaces) on the evolution of these instabilities, as well as to the limitations and successes of numerical methods. Efforts to study these instabilities and mixing layers using group-theoretic ideas, as well as more formal notions of turbulence cascade processes during the later stages of the induced mixing layers, are inspected. A key element of the review is the discussion of the late-time self-similar scaling of the RT and RM growth factors, α and θ. These parameters are influenced by the initial conditions, and much of the observed variation can be explained by that influence. In some cases, these instability-induced flows can transition to turbulence; both the spatial and temporal criteria for achieving the transition to turbulence have been examined. Finally, a description of the energy-containing scales in the mixing layers, including energy "injection" and cascade processes, is presented in greater detail. Part II of this review is designed to provide a much broader and more in-depth understanding of this critical area of research (Zhou, 2017. Physics Reports, 723-725, 1-160).

  17. Multi-Dimensional Calibration of Impact Dynamic Models

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.

    2011-01-01

    NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
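    A common quantitative metric for comparing shape vectors of the kind described (consistent in spirit with the orthogonality metrics mentioned, though not necessarily the authors' exact formulation) is the modal assurance criterion (MAC). A minimal sketch, with invented shape vectors:

    ```python
    def mac(phi_a, phi_b):
        """Modal assurance criterion between two shape vectors (1 = same direction)."""
        dot = sum(a * b for a, b in zip(phi_a, phi_b))
        return dot * dot / (sum(a * a for a in phi_a) * sum(b * b for b in phi_b))

    # Hypothetical measured (test) and predicted (analysis) impact shapes:
    test_shape = [1.0, 0.8, 0.3]
    model_shape = [1.0, 0.79, 0.33]
    agreement = mac(test_shape, model_shape)  # near 1.0 when the model tracks test
    ```

    A MAC near unity indicates the analysis shape tracks the test shape; values near zero flag shapes the model fails to reproduce.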

  18. Analysis and Test of a Proton Exchange Membrane Fuel Cell Power System for Space Power Applications

    NASA Technical Reports Server (NTRS)

    Vasquez, Arturo; Varanauski, Donald; Clark, Robert, Jr.

    2000-01-01

    An effort is underway to develop a prototype Proton Exchange Membrane (PEM) fuel cell breadboard system for future space applications. This prototype will be used to develop a comprehensive design basis for a space-rated PEM fuel cell powerplant. The prototype system includes reactant pressure regulators, ejector-based reactant pumps, a 4-kW fuel cell stack and cooling system, and a passive, membrane-based oxygen/water separator. A computer model is being developed concurrently to analytically predict fluid flow in the oxidant reactant system. Fuel cells have historically played an important role in human-rated spacecraft. The Gemini and Apollo spacecraft used fuel cells for vehicle electrical power. The Space Shuttle currently uses three Alkaline Fuel Cell Powerplants (AFCPs) to generate all of the vehicle's 15-20 kW of electrical power. Engineers at the Johnson Space Center have leveraged the ongoing development effort in the commercial arena to develop PEM fuel cells for terrestrial uses. The prototype design originated from efforts to develop a PEM fuel cell replacement for the current Space Shuttle AFCPs. In order to improve on the life of a system with an already excellent historical record of reliability and safety, three subsystems were focused on: the fuel cell stack itself, the reactant circulation devices, and the reactant/product water separator. PEM fuel cell stack performance is already demonstrating the potential for greater than four times the useful life of the current Shuttle's AFCP. Reactant pumping for product water removal has historically been accomplished with mechanical pumps. Ejectors offer an effective means of reactant pumping as well as the potential for weight reduction, control simplification, and long life. Centrifugal water separation is used on the current AFCP. A passive, membrane-based water separator offers compatibility with the micro-gravity environment of space, as well as the potential for control simplification, elimination of moving parts in an oxygen environment, and long life. The prototype system has been assembled from components that have previously been tested and evaluated at the component level. Preliminary data obtained from tests performed with the prototype system, as well as other published data, have been used to validate the analytical component models. These components have been incorporated into an integrated oxidant fluid system model. Results obtained from both the performance tests and the analytical model are presented.

  19. Advancing statistical analysis of ambulatory assessment data in the study of addictive behavior: A primer on three person-oriented techniques.

    PubMed

    Foster, Katherine T; Beltz, Adriene M

    2018-08-01

    Ambulatory assessment (AA) methodologies have the potential to increase understanding and treatment of addictive behavior in seemingly unprecedented ways, due in part, to their emphasis on intensive repeated assessments of an individual's addictive behavior in context. But, many analytic techniques traditionally applied to AA data - techniques that average across people and time - do not fully leverage this potential. In an effort to take advantage of the individualized, temporal nature of AA data on addictive behavior, the current paper considers three underutilized person-oriented analytic techniques: multilevel modeling, p-technique, and group iterative multiple model estimation. After reviewing prevailing analytic techniques, each person-oriented technique is presented, AA data specifications are mentioned, an example analysis using generated data is provided, and advantages and limitations are discussed; the paper closes with a brief comparison across techniques. Increasing use of person-oriented techniques will substantially enhance inferences that can be drawn from AA data on addictive behavior and has implications for the development of individualized interventions. Copyright © 2017. Published by Elsevier Ltd.
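    The person-oriented emphasis above can be illustrated with a minimal, hypothetical sketch: fitting a lag-1 autoregression to a single individual's repeated AA measurements (in the spirit of p-technique time-series analysis) rather than averaging across people. The data and variable names below are invented for illustration only.

    ```python
    def fit_ar1(series):
        """Least-squares estimates (a, b) for the model y[t] = a + b * y[t-1]."""
        x, y = series[:-1], series[1:]          # lagged predictor and outcome
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sxy / sxx                           # carryover (inertia) coefficient
        a = my - b * mx                         # intercept
        return a, b

    # One person's daily craving ratings (hypothetical):
    ratings = [3, 4, 4, 5, 4, 6, 5, 5, 6, 7, 6, 6]
    a, b = fit_ar1(ratings)  # b > 0 would suggest day-to-day carryover
    ```

    The same routine applied person by person yields individual-level dynamics that an across-person average would obscure.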

  20. Using remotely sensed data and elementary analytical techniques in post-katrina mississippi to examine storm damage modeling

    Treesearch

    Curtis A. Collins; David L. Evans; Keith L. Belli; Patrick A. Glass

    2010-01-01

    Hurricane Katrina’s passage through south Mississippi on August 29, 2005, which damaged or destroyed thousands of hectares of forest land, was followed by massive salvage, cleanup, and assessment efforts. An initial assessment by the Mississippi Forestry Commission estimated that over $1 billion in raw wood material was downed by the storm, with county-level damage...

  1. Family Background and Occupational Attainment: Replication and Extension Through a 24-Year Follow-Up. Pennsylvania State University, A.E. & R.S. 128, April 1977.

    ERIC Educational Resources Information Center

    Gansemer, Lawrence P.; Bealer, Robert C.

    Using data generated from the records of 460 rural-reared Pennsylvania males contacted initially as sophomores in 1947 and again in 1957 and 1971, an effort was made to replicate the tradition of path analytic, causal modeling of status attainment in American society and to assess the empirical efficacy of certain family input variables not…

  2. Fatigue life prediction modeling for turbine hot section materials

    NASA Technical Reports Server (NTRS)

    Halford, G. R.; Meyer, T. G.; Nelson, R. S.; Nissley, D. M.; Swanson, G. A.

    1989-01-01

    A major objective of the fatigue and fracture efforts under the NASA Hot Section Technology (HOST) program was to significantly improve the analytic life prediction tools used by the aeronautical gas turbine engine industry. This was achieved in the areas of high-temperature thermal and mechanical fatigue of bare and coated high-temperature superalloys. The cyclic crack initiation and propagation resistance of nominally isotropic polycrystalline and highly anisotropic single crystal alloys were addressed. Life prediction modeling efforts were devoted to creep-fatigue interaction, oxidation, coatings interactions, multiaxiality of stress-strain states, mean stress effects, cumulative damage, and thermomechanical fatigue. The fatigue crack initiation life models developed to date include the Cyclic Damage Accumulation (CDA) and the Total Strain Version of Strainrange Partitioning (TS-SRP) for nominally isotropic materials, and the Tensile Hysteretic Energy Model for anisotropic superalloys. A fatigue model is being developed based upon the concepts of Path-Independent Integrals (PII) for describing cyclic crack growth under complex nonlinear response at the crack tip due to thermomechanical loading conditions. A micromechanistic oxidation crack extension model was derived. The models are described and discussed.

  3. Fatigue life prediction modeling for turbine hot section materials

    NASA Technical Reports Server (NTRS)

    Halford, G. R.; Meyer, T. G.; Nelson, R. S.; Nissley, D. M.; Swanson, G. A.

    1988-01-01

    A major objective of the fatigue and fracture efforts under the Hot Section Technology (HOST) program was to significantly improve the analytic life prediction tools used by the aeronautical gas turbine engine industry. This was achieved in the areas of high-temperature thermal and mechanical fatigue of bare and coated high-temperature superalloys. The cyclic crack initiation and propagation resistance of nominally isotropic polycrystalline and highly anisotropic single crystal alloys were addressed. Life prediction modeling efforts were devoted to creep-fatigue interaction, oxidation, coatings interactions, multiaxiality of stress-strain states, mean stress effects, cumulative damage, and thermomechanical fatigue. The fatigue crack initiation life models developed to date include the Cyclic Damage Accumulation (CDA) and the Total Strain Version of Strainrange Partitioning (TS-SRP) for nominally isotropic materials, and the Tensile Hysteretic Energy Model for anisotropic superalloys. A fatigue model is being developed based upon the concepts of Path-Independent Integrals (PII) for describing cyclic crack growth under complex nonlinear response at the crack tip due to thermomechanical loading conditions. A micromechanistic oxidation crack extension model was derived. The models are described and discussed.

  4. Elements of analytic style: Bion's clinical seminars.

    PubMed

    Ogden, Thomas H

    2007-10-01

    The author finds that the idea of analytic style better describes significant aspects of the way he practices psychoanalysis than does the notion of analytic technique. The latter is comprised to a large extent of principles of practice developed by previous generations of analysts. By contrast, the concept of analytic style, though it presupposes the analyst's thorough knowledge of analytic theory and technique, emphasizes (1) the analyst's use of his unique personality as reflected in his individual ways of thinking, listening, and speaking, his own particular use of metaphor, humor, irony, and so on; (2) the analyst's drawing on his personal experience, for example, as an analyst, an analysand, a parent, a child, a spouse, a teacher, and a student; (3) the analyst's capacity to think in a way that draws on, but is independent of, the ideas of his colleagues, his teachers, his analyst, and his analytic ancestors; and (4) the responsibility of the analyst to invent psychoanalysis freshly for each patient. Close readings of three of Bion's 'Clinical seminars' are presented in order to articulate some of the elements of Bion's analytic style. Bion's style is not presented as a model for others to emulate or, worse yet, imitate; rather, it is described in an effort to help the reader consider from a different vantage point (provided by the concept of analytic style) the way in which he, the reader, practices psychoanalysis.

  5. Automated Performance Prediction of Message-Passing Parallel Programs

    NASA Technical Reports Server (NTRS)

    Block, Robert J.; Sarukkai, Sekhar; Mehra, Pankaj; Woodrow, Thomas S. (Technical Monitor)

    1995-01-01

    The increasing use of massively parallel supercomputers to solve large-scale scientific problems has generated a need for tools that can predict scalability trends of applications written for these machines. Much work has been done to create simple models that represent important characteristics of parallel programs, such as latency, network contention, and communication volume. But many of these methods still require substantial manual effort to represent an application in the model's format. The MK toolkit described in this paper is the result of an ongoing effort to automate the formation of analytic expressions of program execution time, with a minimum of programmer assistance. In this paper we demonstrate the feasibility of our approach by extending previous work to detect and model communication patterns automatically, with and without overlapped computations. The predictions derived from these models agree, within reasonable limits, with execution times of programs measured on the Intel iPSC/860 and Paragon. Further, we demonstrate the use of MK in selecting optimal computational grain size and studying various scalability metrics.
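    The kind of analytic execution-time expression described above can be sketched, under strong simplifying assumptions, as computation that divides across processors plus a latency/bandwidth communication term. The coefficients below are illustrative placeholders (not measured iPSC/860 or Paragon values), and the model is far cruder than what the toolkit derives automatically.

    ```python
    def predicted_time(p, work=1e9, flop_rate=1e8,
                       msgs_per_proc=100, msg_bytes=8192,
                       latency=1e-4, bandwidth=1e8):
        """Predicted execution time (s) on p processors: compute/p + communication."""
        compute = work / (p * flop_rate)                       # perfectly divided work
        comm = msgs_per_proc * (latency + msg_bytes / bandwidth)
        return compute + comm

    def speedup(p, **kw):
        """Predicted speedup relative to one processor."""
        return predicted_time(1, **kw) / predicted_time(p, **kw)
    ```

    Even this toy form exhibits the qualitative scalability trend of interest: speedup saturates as the fixed communication term dominates the shrinking compute term.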

  6. Insight and Action Analytics: Three Case Studies to Consider

    ERIC Educational Resources Information Center

    Milliron, Mark David; Malcolm, Laura; Kil, David

    2014-01-01

    Civitas Learning was conceived as a community of practice, bringing together forward-thinking leaders from diverse higher education institutions to leverage insight and action analytics in their ongoing efforts to help students learn well and finish strong. We define insight and action analytics as drawing, federating, and analyzing data from…

  7. The potential value of Clostridium difficile vaccine: an economic computer simulation model.

    PubMed

    Lee, Bruce Y; Popovich, Michael J; Tian, Ye; Bailey, Rachel R; Ufberg, Paul J; Wiringa, Ann E; Muder, Robert R

    2010-07-19

    Efforts are currently underway to develop a vaccine against Clostridium difficile infection (CDI). We developed two decision analytic Monte Carlo computer simulation models: (1) an Initial Prevention Model depicting the decision whether to administer C. difficile vaccine to patients at risk for CDI and (2) a Recurrence Prevention Model depicting the decision whether to administer C. difficile vaccine to prevent CDI recurrence. Our results suggest that a C. difficile vaccine could be cost-effective over a wide range of C. difficile risk, vaccine costs, and vaccine efficacies, especially when used post-CDI treatment to prevent recurrent disease. (c) 2010 Elsevier Ltd. All rights reserved.
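    As a rough illustration of this style of decision-analytic Monte Carlo comparison (not the authors' actual model or inputs), one can simulate expected cost per patient with and without vaccination; every probability and cost below is an invented placeholder.

    ```python
    import random

    def expected_cost(vaccinate, trials=20000,
                      p_cdi=0.05, efficacy=0.7,
                      cost_vaccine=100.0, cost_cdi=10000.0, seed=42):
        """Monte Carlo estimate of expected cost per patient for one strategy."""
        rng = random.Random(seed)
        p = p_cdi * (1 - efficacy) if vaccinate else p_cdi   # residual infection risk
        total = 0.0
        for _ in range(trials):
            cost = cost_vaccine if vaccinate else 0.0
            if rng.random() < p:                             # patient develops CDI
                cost += cost_cdi
            total += cost
        return total / trials

    no_vax = expected_cost(False)
    vax = expected_cost(True)
    # Under these placeholder inputs, vaccination lowers expected cost per patient.
    ```

    A real cost-effectiveness model would also sample costs and efficacy from distributions and report outcomes such as cost per quality-adjusted life year.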

  8. The Potential Value of Clostridium difficile Vaccine: An Economic Computer Simulation Model

    PubMed Central

    Lee, Bruce Y.; Popovich, Michael J.; Tian, Ye; Bailey, Rachel R.; Ufberg, Paul J.; Wiringa, Ann E.; Muder, Robert R.

    2010-01-01

    Efforts are currently underway to develop a vaccine against Clostridium difficile infection (CDI). We developed two decision analytic Monte Carlo computer simulation models: (1) an Initial Prevention Model depicting the decision whether to administer C. difficile vaccine to patients at-risk for CDI and (2) a Recurrence Prevention Model depicting the decision whether to administer C. difficile vaccine to prevent CDI recurrence. Our results suggest that a C. difficile vaccine could be cost-effective over a wide range of C. difficile risk, vaccine costs, and vaccine efficacies especially when being used post-CDI treatment to prevent recurrent disease. PMID:20541582

  9. Kinetic modeling of plant metabolism and its predictive power: peppermint essential oil biosynthesis as an example.

    PubMed

    Lange, Bernd Markus; Rios-Estepa, Rigoberto

    2014-01-01

    The integration of mathematical modeling with analytical experimentation in an iterative fashion is a powerful approach to advance our understanding of the architecture and regulation of metabolic networks. Ultimately, such knowledge is highly valuable to support efforts aimed at modulating flux through target pathways by molecular breeding and/or metabolic engineering. In this article we describe a kinetic mathematical model of peppermint essential oil biosynthesis, a pathway that has been studied extensively for more than two decades. Modeling assumptions and approximations are described in detail. We provide step-by-step instructions on how to run simulations of dynamic changes in pathway metabolite concentrations.
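    A kinetic pathway model of the kind described reduces to a set of rate equations integrated over time. The toy sketch below, a two-step linear pathway S -> I -> P with arbitrary mass-action rate constants (not peppermint parameters), illustrates the basic simulation loop for tracking dynamic changes in metabolite concentrations.

    ```python
    def simulate(k1=0.5, k2=0.3, s0=1.0, dt=0.01, steps=2000):
        """Forward-Euler integration of S -> I -> P with mass-action kinetics."""
        s, i, p = s0, 0.0, 0.0
        for _ in range(steps):
            v1 = k1 * s              # rate of S -> I
            v2 = k2 * i              # rate of I -> P
            s += -v1 * dt
            i += (v1 - v2) * dt
            p += v2 * dt
        return s, i, p

    s, i, p = simulate()
    # Mass is conserved: s + i + p remains (numerically) equal to s0,
    # and nearly all substrate ends up as product by t = 20.
    ```

    Production-grade kinetic models would use stiff ODE solvers and enzyme-saturation (e.g., Michaelis-Menten) rate laws, but the integration loop is conceptually the same.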

  10. Thermal Integration of a Liquid Acquisition Device into a Cryogenic Feed System

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Bolshinskiy, L. G.; Schunk, R. G.; Martin, A. K.; Eskridge, R. H.; Frenkel, A.; Grayson, G.; Pendleton, M. L.

    2011-01-01

    The primary objectives of this effort were: (1) to define approaches for quantifying the accumulation of thermal energy within a capillary screen liquid acquisition device (LAD) for a lunar lander upper stage during periods of up to 210 days on the lunar surface, (2) to define techniques for mitigating heat entrapment, and (3) to perform initial testing and data evaluation. The technical effort was divided into the following categories: (1) detailed thermal modeling of the LAD/feed system interactions using both COMSOL computational fluid dynamics and standard codes, (2) FLOW-3D modeling of bulk liquid to provide interfacing conditions for the LAD thermal modeling, (3) condensation conditioning of capillary screens to stabilize surface tension retention capability, and (4) subscale testing of an integrated LAD/feed system. Substantial progress was achieved in the following technical areas: (1) thermal modeling and experimental approaches for evaluating integrated cryogenic LAD/feed systems, at both the system and component levels, (2) reduced-gravity pressure control analyses, (3) analytical modeling and testing for capillary screen conditioning using condensation and wicking, and (4) development of rapid-turnaround testing techniques for evaluating LAD/feed system thermal and fluid integration. This was a comprehensive effort; participants included a diverse cross section of representatives from academia, contractors, and multiple Marshall Space Flight Center organizations.

  11. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  12. Prediction of aircraft handling qualities using analytical models of the human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1982-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations is formulated. Finally, a model based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  13. Prediction of response of aircraft panels subjected to acoustic and thermal loads

    NASA Technical Reports Server (NTRS)

    Mei, Chuh

    1992-01-01

    The primary effort of this research project has been focused on the development of analytical methods for the prediction of random response of structural panels subjected to combined and intense acoustic and thermal loads. The accomplishments on various acoustic fatigue research activities are described first, then followed by publications and theses. Topics covered include: transverse shear deformation; finite element models of vibrating composite laminates; large deflection vibration modeling; finite element analysis of thermal buckling; and prediction of three dimensional duct using boundary element method.

  14. CERT: Center of Excellence in Rotorcraft Technology

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The research objectives of this effort are to understand the physical processes that influence the formation of the tip vortex of a rotor in advancing flight, and to develop active and passive means of weakening the tip vortex during conditions when strong blade-vortex-interaction effects are expected. A combined experimental, analytical, and computational effort is being employed. Specifically, the following efforts are being pursued: 1. Analytical evaluation and design of combined elastic tailoring and active material actuators applicable to rotor blade tips. 2. Numerical simulations of active and passive tip devices. 3. LDV Measurement of the near and far wake behind rotors in forward flight.

  15. Options to improve energy efficiency for educational building

    NASA Astrophysics Data System (ADS)

    Jahan, Mafruha

    The cost of energy is a major factor that must be considered in educational facility budget planning. The analysis of energy-related issues and options can be complex and requires significant time and detailed effort. One way to facilitate the inclusion of energy option planning in facility planning efforts is to utilize a tool that allows a quick appraisal of the facility's energy profile. Once such an appraisal is accomplished, it is then possible to rank energy improvement options consistently with other facility needs and requirements. After an energy efficiency option has been determined to have meaningful value in comparison with other facility planning options, the initial appraisal can serve as the basis for an expanded consideration of additional facility and energy use detail, using the same analytic system used for the initial appraisal. This thesis has developed a methodology and an associated analytic model to assist in these tasks and thereby improve the energy efficiency of educational facilities. A detailed energy efficiency analysis tool is described that utilizes specific university building characteristics, such as size, architecture, envelope, lighting, occupancy, and thermal design, to identify reductions in annual energy consumption. Improving various aspects of an educational building's energy performance can be complex and can require significant time and experience in decision-making. The approach developed in this thesis initially assesses the energy design of a university building. This initial appraisal is intended to assist administrators in assessing the potential value of energy efficiency options for their particular facility. Subsequently, this scoping design can be extended, as another stage of the model, by local facility or planning personnel to add further detail and engineering aspects to the initial screening model.
This approach can assist university planning efforts to identify the most cost-effective combinations of energy efficiency strategies. The model analyzes and compares the payback periods of all proposed Energy Performance Measures (EPMs) to determine which has the greatest potential value.
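    The payback-period ranking described above can be sketched in a few lines; the EPM names and the cost/savings figures below are hypothetical placeholders, not values from the thesis.

```python
# Hypothetical sketch: ranking Energy Performance Measures (EPMs) by
# simple payback period. All names and figures are illustrative.
def payback_years(install_cost, annual_savings):
    """Simple payback period in years: cost / annual savings."""
    return install_cost / annual_savings

epms = {
    "LED lighting retrofit": (40_000, 16_000),   # (cost $, savings $/yr)
    "Envelope air sealing": (25_000, 5_000),
    "Occupancy-based HVAC": (60_000, 12_000),
}

# Shortest payback first = greatest potential value under this metric.
ranked = sorted(epms.items(), key=lambda kv: payback_years(*kv[1]))
for name, (cost, savings) in ranked:
    print(f"{name}: {payback_years(cost, savings):.1f} years")
```

A fuller model would discount future savings rather than use simple payback, but the ranking logic is the same.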

  16. MODELING AND ANALYSIS OF FISSION PRODUCT TRANSPORT IN THE AGR-3/4 EXPERIMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humrickhouse, Paul W.; Collin, Blaise P.; Hawkes, Grant L.

    In this work we describe the ongoing modeling and analysis efforts in support of the AGR-3/4 experiment. AGR-3/4 is intended to provide data to assess fission product retention and transport (e.g., diffusion coefficients) in fuel matrix and graphite materials. We describe a set of pre-test predictions that incorporate the results of detailed thermal and fission product release models into a coupled 1D radial diffusion model of the experiment, using diffusion coefficients reported in the literature for Ag, Cs, and Sr. We make some comparisons of the predicted Cs profiles to preliminary measured data for Cs and find these to be reasonable, in most cases within an order of magnitude. Our ultimate objective is to refine the diffusion coefficients using AGR-3/4 data, so we identify an analytical method for doing so and demonstrate its efficacy via a series of numerical experiments using the model predictions. Finally, we discuss development of a post-irradiation examination plan informed by the modeling effort and simulate some of the heating tests that are tentatively planned.
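    A 1D radial diffusion model of the kind mentioned above can be illustrated with a minimal explicit finite-difference sketch; the geometry, grid, and diffusion coefficient D below are illustrative placeholders, not AGR-3/4 parameters.

```python
# Minimal sketch: explicit finite differences for radial diffusion,
# dc/dt = D * (1/r) d/dr (r dc/dr), with fixed boundary values.
# Grid and D are placeholders, not AGR-3/4 values.
import numpy as np

def radial_diffusion(c0, r, D, dt, steps):
    c = c0.copy()
    dr = r[1] - r[0]
    for _ in range(steps):
        lap = np.zeros_like(c)
        # cylindrical Laplacian on interior points
        lap[1:-1] = ((c[2:] - 2 * c[1:-1] + c[:-2]) / dr**2
                     + (c[2:] - c[:-2]) / (2 * dr * r[1:-1]))
        c[1:-1] += dt * D * lap[1:-1]   # boundary values held fixed
    return c

r = np.linspace(1.0, 2.0, 41)        # radial grid (arbitrary units)
c0 = np.zeros_like(r)
c0[0] = 1.0                          # unit concentration at inner wall
c = radial_diffusion(c0, r, D=1e-2, dt=1e-2, steps=5000)
```

The stability constraint for this explicit scheme is roughly dt ≤ dr²/(2D); refining a diffusion coefficient against measured profiles then amounts to adjusting D until predicted and measured c(r) agree.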

  17. Multi-scale Modeling of Chromosomal DNA in Living Cells

    NASA Astrophysics Data System (ADS)

    Spakowitz, Andrew

    The organization and dynamics of chromosomal DNA play a pivotal role in a range of biological processes, including gene regulation, homologous recombination, replication, and segregation. Establishing a quantitative theoretical model of DNA organization and dynamics would be valuable in bridging the gap between the molecular-level packaging of DNA and genome-scale chromosomal processes. Our research group utilizes analytical theory and computational modeling to establish a predictive theoretical model of chromosomal organization and dynamics. In this talk, I will discuss our efforts to develop multi-scale polymer models of chromosomal DNA that are both sufficiently detailed to address specific protein-DNA interactions while capturing experimentally relevant time and length scales. I will demonstrate how these modeling efforts are capable of quantitatively capturing aspects of behavior of chromosomal DNA in both prokaryotic and eukaryotic cells. This talk will illustrate that capturing dynamical behavior of chromosomal DNA at various length scales necessitates a range of theoretical treatments that accommodate the critical physical contributions that are relevant to in vivo behavior at these disparate length and time scales. National Science Foundation, Physics of Living Systems Program (PHY-1305516).

  18. Analysis of Prey-Predator Three Species Fishery Model with Harvesting Including Prey Refuge and Migration

    NASA Astrophysics Data System (ADS)

    Roy, Sankar Kumar; Roy, Banani

    In this article, a prey-predator system with Holling type II functional response for the predator population, including a prey refuge region, has been analyzed. A harvesting effort has also been considered for the predator population. Density-dependent mortality rates for the prey, predator and super predator have been considered. The equilibria of the proposed system have been determined. Local and global stabilities of the system have been discussed. We have used an analytic approach to derive the global asymptotic stability of the system. The maximal predator per capita consumption rate has been considered as a bifurcation parameter to evaluate Hopf bifurcation in the neighborhood of the interior equilibrium point. We have also used the fishing effort to harvest the predator population as a control, developing a dynamic framework to investigate the optimal utilization of the resource, the sustainability properties of the stock, and the resource rent earned from the resource. Finally, we have presented some numerical simulations to verify the analytic results, and the system has been analyzed through graphical illustrations.
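    A two-species simplification of such a model (the article itself also includes a super predator) can be sketched as an Euler integration of a Holling type II system with a refuge fraction m and a predator harvesting effort E; all parameter values below are illustrative, not the paper's.

```python
# Illustrative sketch, not the paper's exact system: prey-predator
# dynamics with Holling type II response, prey refuge fraction m,
# and predator harvesting effort E, integrated with Euler steps.
def simulate(x0, y0, r=1.0, K=10.0, a=0.6, h=0.4, e=0.5,
             d=0.2, m=0.3, q=0.1, E=0.5, dt=0.01, steps=20000):
    x, y = x0, y0
    for _ in range(steps):
        avail = (1 - m) * x                         # prey outside refuge
        response = a * avail / (1 + a * h * avail)  # Holling type II
        dx = r * x * (1 - x / K) - response * y     # logistic prey growth
        dy = e * response * y - d * y - q * E * y   # growth - death - harvest
        x = max(x + dt * dx, 0.0)
        y = max(y + dt * dy, 0.0)
    return x, y

x_end, y_end = simulate(5.0, 2.0)
```

Sweeping E in such a sketch is the numerical analogue of the harvesting-control analysis in the abstract: beyond a critical effort the predator stock collapses.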

  19. Design and analysis of radiometric instruments using high-level numerical models and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sorensen, Ira Joseph

    A primary objective of the effort reported here is to develop a radiometric instrument modeling environment to provide complete end-to-end numerical models of radiometric instruments, integrating the optical, electro-thermal, and electronic systems. The modeling environment consists of a Monte Carlo ray-trace (MCRT) model of the optical system coupled to a transient, three-dimensional finite-difference electrothermal model of the detector assembly with an analytic model of the signal-conditioning circuitry. The environment provides a complete simulation of the dynamic optical and electrothermal behavior of the instrument. The modeling environment is used to create an end-to-end model of the CERES scanning radiometer, and its performance is compared to the performance of an operational CERES total channel as a benchmark. A further objective of this effort is to formulate an efficient design environment for radiometric instruments. To this end, the modeling environment is then combined with evolutionary search algorithms known as genetic algorithms (GA's) to develop a methodology for optimal instrument design using high-level radiometric instrument models. GA's are applied to the design of the optical system and detector system separately and to both as an aggregate function with positive results.
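    The GA-based design step can be illustrated with a minimal real-valued genetic algorithm; the quadratic objective below merely stands in for an instrument performance metric and is purely illustrative.

```python
# Minimal genetic-algorithm sketch: truncation selection, averaging
# crossover, Gaussian mutation, with elitism (parents carried over).
# The quadratic "bowl" objective is a placeholder for an instrument
# performance metric computed by a high-level model.
import random

def ga_minimize(objective, dim, pop_size=40, gens=120,
                sigma=0.3, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 + rng.gauss(0, sigma)  # crossover + mutation
                     for ai, bi in zip(a, b)]
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

best = ga_minimize(lambda v: sum((x - 1.0) ** 2 for x in v), dim=3)
```

In the instrument-design setting the objective evaluation is the expensive end-to-end model run, which is why GA population sizes and generation counts are chosen carefully.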

  20. Analysis of Active Figure Control Effects on Mounting Strategy for X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Kolodziejczak, Jeffrey J.; Roche, Jacqueline M.; O'Dell, Stephen L.; Ramsey, Brian D.; Elsner, Ryan F.; Gubarev, Mikhail V.; Weisskopf, Martin C.

    2014-01-01

    As part of ongoing development efforts at MSFC, we have begun to investigate mounting strategies for highly nested x-ray optics in both full-shell and segmented configurations. The analytical infrastructure for this effort also lends itself to investigation of active strategies. We expect that a consequence of active figure control on relatively thin substrates is that errors are propagated to the edges, where they might affect the effective precision of the mounting points. Based upon modeling, we describe parametrically the conditions under which active mounts are preferred over fixed ones, and the effect of active figure corrections on the required number, locations, and kinematic characteristics of mounting points.

  1. Multiple control strategies for prevention of avian influenza pandemic.

    PubMed

    Ullah, Roman; Zaman, Gul; Islam, Saeed

    2014-01-01

    We present the prevention of avian influenza pandemic by adjusting multiple control functions in the human-to-human transmittable avian influenza model. First we show the existence of the optimal control problem; then, using both analytical and numerical techniques, we investigate cost-effective control effects for the prevention of transmission of the disease. To do this, we use three control functions: the effort to reduce the number of contacts with humans infected with mutant avian influenza, the antiviral treatment of infected individuals, and the effort to reduce the number of infected birds. We completely characterize the optimal control and compute the numerical solution of the optimality system using an iterative method.

  2. Model and Analytic Processes for Export License Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.

    2011-09-29

    This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent, a complex, multi-step and multi-agency process. The report focuses on analytical modeling methodologies that alone, or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final 2 years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate, conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs, and the impact if successful. Modeling methodologies were divided into whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessments. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level.
An approach to developing testable hypotheses for the macro-level assessment methodologies is provided. The outcome of this work suggests that we should develop a Bayes Net for micro-level analysis and continue to focus on Bayes Net, System Dynamics and Economic Input/Output models for assessing macro-level problems. Simultaneously, we need to develop metrics for assessing intent in export control, including the risks and consequences associated with all aspects of export control.

  3. An analytical approach for predicting pilot induced oscillations

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1981-01-01

    The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.

  4. Issues in Developing a Normative Descriptive Model for Dyadic Decision Making

    NASA Technical Reports Server (NTRS)

    Serfaty, D.; Kleinman, D. L.

    1984-01-01

    Most research in modelling human information processing and decision making has been devoted to the case of the single human operator. In the present effort, concepts from the fields of organizational behavior, engineering psychology, team theory and mathematical modelling are merged in an attempt to consider first the case of two cooperating decision makers (the Dyad) in a multi-task environment. Rooted in the well-known Dynamic Decision Model (DDM), the normative descriptive approach brings basic cognitive and psychophysical characteristics inherent to human behavior into a team theoretic analytic framework. An experimental paradigm, involving teams in dynamic decision making tasks, is designed to produce the data with which to build the theoretical model.

  5. Strategic enterprise resource planning in a health-care system using a multicriteria decision-making model.

    PubMed

    Lee, Chang Won; Kwak, N K

    2011-04-01

    This paper deals with strategic enterprise resource planning (ERP) in a health-care system using a multicriteria decision-making (MCDM) model. The model is developed and analyzed on the basis of the data obtained from a leading patient-oriented provider of health-care services in Korea. Goal criteria and priorities are identified and established via the analytic hierarchy process (AHP). Goal programming (GP) is utilized to derive satisfying solutions for designing, evaluating, and implementing an ERP. The model results are evaluated and sensitivity analyses are conducted in an effort to enhance the model applicability. The case study provides management with valuable insights for planning and controlling health-care activities and services.
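    The AHP step of deriving goal priorities can be sketched as power iteration on a pairwise comparison matrix; the 3x3 judgment matrix below is a made-up example, not data from the Korean health-care provider.

```python
# Hedged sketch of the AHP priority step: the priority vector is the
# principal eigenvector of a reciprocal pairwise comparison matrix,
# here obtained by power iteration. The judgments are illustrative.
import numpy as np

def ahp_weights(pairwise, iters=100):
    """Normalized principal eigenvector of a comparison matrix."""
    w = np.ones(pairwise.shape[0])
    for _ in range(iters):
        w = pairwise @ w
        w /= w.sum()
    return w

# Hypothetical criteria (Saaty 1-9 scale): cost vs quality vs access.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w = ahp_weights(A)
```

In the paper's framework these AHP weights would then feed the goal programming model as priority levels for the competing ERP design goals.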

  6. On trying something new: effort and practice in psychoanalytic change.

    PubMed

    Power, D G

    2000-07-01

    This paper describes one of the ingredients of successful psychoanalytic change: the necessity for the analysand to actively attempt altered patterns of thinking, behaving, feeling, and relating outside of the analytic relationship. When successful, such self-initiated attempts at change are founded on insight and experience gained in the transference and constitute a crucial step in the consolidation and transfer of therapeutic gains. The analytic literature related to this aspect of therapeutic action is reviewed, including the work of Freud, Bader, Rangell, Renik, Valenstein, and Wheelis. Recent interest in the complex and complementary relationship between action and increased self-understanding as it unfolds in the analytic setting is extended beyond the consulting room to include the analysand's extra-analytic attempts to initiate change. Contemporary views of the relationship between praxis and self-knowledge are discussed and offered as theoretical support for broadening analytic technique to include greater attention to the analysand's efforts at implementing therapeutic gains. Case vignettes are presented.

  7. Analytical Support Capabilities of Turkish General Staff Scientific Decision Support Centre (SDSC) to Defence Transformation

    DTIC Science & Technology

    2005-04-01

    İpekkan, Z.; Özkil, A. (2005). Analytical Support Capabilities of Turkish General Staff Scientific Decision Support Centre (SDSC) to Defence Transformation. RTO-MP-SAS-055.

  8. CryoTran user's manual, version 1.0

    NASA Technical Reports Server (NTRS)

    Cowgill, Glenn R.; Chato, David J.; Saad, Ehab

    1989-01-01

    The development of cryogenic fluid management systems for space operation is a major portion of the efforts of the Cryogenic Fluids Technology Office (CFTO) at the NASA Lewis Research Center. Analytical models are a necessary part of experimental programs: they are used to verify the results of experiments and also serve as predictors for parametric studies. The CryoTran computer program is the bridge for obtaining such analytical results. The object of CryoTran is to coordinate these separate analyses into an integrated framework with a user-friendly interface and a common cryogenic property database. CryoTran is an integrated software system designed to help solve a diverse set of problems involving cryogenic fluid storage and transfer in both ground and low-g environments.

  9. HOST turbine heat transfer program summary

    NASA Technical Reports Server (NTRS)

    Gladden, Herbert J.; Simoneau, Robert J.

    1988-01-01

    The objectives of the HOST Turbine Heat Transfer subproject were to obtain a better understanding of the physics of the aerothermodynamic phenomena and to assess and improve the analytical methods used to predict the flow and heat transfer in high temperature gas turbines. At the time the HOST project was initiated, an across-the-board improvement in turbine design technology was needed. A building-block approach was utilized and the research ranged from the study of fundamental phenomena and modeling to experiments in simulated real engine environments. Experimental research accounted for approximately 75 percent of the funding with the remainder going to analytical efforts. A healthy government/industry/university partnership, with industry providing almost half of the research, was created to advance the turbine heat transfer design technology base.

  10. Physics-based and human-derived information fusion for analysts

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Nagy, James; Scott, Steve; Okoth, Joshua; Hinman, Michael

    2017-05-01

    Recent trends in physics-based and human-derived information fusion (PHIF) have amplified the capabilities of analysts; however, with the big data opportunities, there is a need for open architecture designs, methods of distributed team collaboration, and visualizations. In this paper, we explore recent trends in information fusion to support user interaction and machine analytics. Challenging scenarios requiring PHIF include combining physics-based video data with human-derived text data for enhanced simultaneous tracking and identification. A driving effort would be to provide analysts with applications, tools, and interfaces that afford effective and affordable solutions for timely decision making. Fusion at scale should be developed to allow analysts to access data, call analytics routines, enter solutions, update models, and store results for distributed decision making.

  11. Optimal estimation of large structure model errors. [in Space Shuttle controller design

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
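    The orthogonal decomposition described above can be illustrated with a small least-squares sketch; the matrix of observable directions and the error vector below are random placeholders, not a structural model.

```python
# Toy sketch of the orthogonal decomposition: a model error vector e
# splits into its least-squares projection onto an "observable"
# subspace (columns of A) plus an orthogonal, unobservable remainder.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2))     # observable directions (placeholder)
e = rng.standard_normal(6)          # true model error (placeholder)

coef, *_ = np.linalg.lstsq(A, e, rcond=None)
e_hat = A @ coef                    # estimated (observable) component
e_perp = e - e_hat                  # component invisible to the estimator

# By construction e_hat and e_perp are orthogonal, so their squared
# norms add up to the squared norm of e (Pythagoras in function space).
```

This mirrors the paper's observation that the estimate is an approximation retaining the significant (observable) dynamics of the true error.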

  12. NASA's Climate Data Services Initiative

    NASA Astrophysics Data System (ADS)

    McInerney, M.; Duffy, D.; Schnase, J. L.; Webster, W. P.

    2013-12-01

    Our understanding of the Earth's processes is based on a combination of observational data records and mathematical models. The size of NASA's space-based observational data sets is growing dramatically as new missions come online. However, a potentially bigger data challenge is posed by the work of climate scientists, whose models are regularly producing data sets of hundreds of terabytes or more. It is important to understand that the 'Big Data' challenge of climate science cannot be solved with a single technological approach or an ad hoc assemblage of technologies. It will require a multi-faceted, well-integrated suite of capabilities that include cloud computing, large-scale compute-storage systems, high-performance analytics, scalable data management, and advanced deployment mechanisms in addition to the existing, well-established array of mature information technologies. It will also require a coherent organizational effort that is able to focus on the specific and sometimes unique requirements of climate science. Given that it is the knowledge that is gained from data that is of ultimate benefit to society, data publication and data analytics will play a particularly important role. In an effort to accelerate scientific discovery and innovation through broader use of climate data, NASA Goddard Space Flight Center's Office of Computational and Information Sciences and Technology has embarked on a determined effort to build a comprehensive, integrated data publication and analysis capability for climate science. The Climate Data Services (CDS) Initiative integrates people, expertise, and technology into a highly-focused, next-generation, one-stop climate science information service. The CDS Initiative is providing the organizational framework, processes, and protocols needed to deploy existing information technologies quickly using a combination of enterprise-level services and an expanding array of cloud services.
Crucial to its effectiveness, the CDS Initiative is developing the technical expertise to move new information technologies from R&D into operational use. This combination enables full, end-to-end support for climate data publishing and data analytics, and affords the flexibility required to meet future and unanticipated needs. Current science efforts being supported by the CDS Initiative include IPCC, OBS4MIP, ANA4MIPS, MERRA II, National Climate Assessment, the Ocean Data Assimilation project, NASA Earth Exchange (NEX), and the RECOVER Burned Area Emergency Response decision support system. Service offerings include an integrated suite of classic technologies (FTP, LAS, THREDDS, ESGF, GRaD-DODS, OPeNDAP, WMS, ArcGIS Server), emerging technologies (iRODS, UVCDAT), and advanced technologies (MERRA Analytic Services, MapReduce, Ontology Services, and the CDS API). This poster will describe the CDS Initiative, provide details about the Initiative's advanced offerings, and lay out the CDS Initiative's deployment roadmap.

  13. Comparative analysis of numerical models of pipe handling equipment used in offshore drilling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.

    Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of a proper modeling methodology is not obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of the offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper shows verification of both systems by benchmarking their simulation results against each other. Such criteria as modeling effort and results accuracy are evaluated to assess which modeling strategy is the most suitable given its eventual application.

  14. Big data analytics for the Future Circular Collider reliability and availability studies

    NASA Astrophysics Data System (ADS)

    Begy, Volodimir; Apollonio, Andrea; Gutleber, Johannes; Martin-Marquez, Manuel; Niemi, Arto; Penttinen, Jussi-Pekka; Rogova, Elena; Romero-Marin, Antonio; Sollander, Peter

    2017-10-01

    Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. One branch of the study assesses industrial approaches to model and simulate the reliability and availability of the entire particle collider complex based on the continuous monitoring of CERN’s accelerator complex operation. The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. The work so far has revealed that a major challenge is obtaining accelerator monitoring and operational data with sufficient quality, to automate the data quality annotation and calculation of reliability distribution functions for systems, subsystems and components where needed. A flexible data management and analytics environment that permits integrating the heterogeneous data sources, the domain-specific data quality management algorithms and the reliability modelling and simulation suite is a key enabler to complete this accelerator operation study. This paper describes the Big Data infrastructure and analytics ecosystem that has been put in operation at CERN, serving as the foundation on which reliability and availability analysis and simulations can be built. This contribution focuses on data infrastructure and data management aspects and presents case studies chosen for its validation.

  15. Predicting Student Success using Analytics in Course Learning Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Thakur, Gautam; McNair, Wade

    Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increasing efficacy of advising functions, and improving course completion rate. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward conducting predictive analytics on academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we have identified the data features useful for predicting student outcomes, such as students' scores in homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total GPA at the same term they enrolled in the course. Then, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.

  16. Predicting student success using analytics in course learning management systems

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Thakur, Gautam; McNair, Allen W.; Sukumar, Sreenivas R.

    2014-05-01

    Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increasing efficacy of advising functions, and improving course completion rate. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward conducting predictive analytics on academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we have identified the data features useful for predicting student outcomes, such as students' scores in homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total GPA at the same term they enrolled in the course. Then, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.
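    The logistic-regression step described in these two records can be sketched from scratch with gradient descent; the features (quiz average, forum activity) and labels below are synthetic, not the ORNL Moodle dataset.

```python
# Hedged sketch of a logistic-regression pass/fail predictor fit by
# gradient descent on log-loss. Data is synthetic and illustrative.
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=2000):
    Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted P(pass)
        w -= lr * Xb.T @ (p - y) / len(y)       # log-loss gradient step
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)

# Columns: quiz average, forum activity (both scaled to [0, 1]).
X = np.array([[0.9, 0.8], [0.8, 0.6], [0.7, 0.9],
              [0.3, 0.2], [0.4, 0.1], [0.2, 0.3]])
y = np.array([1, 1, 1, 0, 0, 0])                # 1 = pass, 0 = fail
w = fit_logistic(X, y)
```

The fitted model outputs a failure likelihood per student, which is what enables the early intervention the papers describe.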

  17. Engine environmental effects on composite behavior

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Smith, G. T.

    1980-01-01

    A series of programs was conducted to investigate and develop the application of composite materials to turbojet engines. A significant part of that effort was directed at establishing the impact resistance and defect growth characteristics of composite materials over the wide range of environmental conditions found in commercial turbojet engine operations. Both analytical and empirical efforts were involved. The experimental programs and the analytical methodology development, as well as an evaluation program for the use of composite materials as fan exit guide vanes, are summarized.

  18. The Monarch Initiative: an integrative data and analytic platform connecting phenotypes to genotypes across species

    PubMed Central

    Mungall, Christopher J.; McMurry, Julie A.; Köhler, Sebastian; Balhoff, James P.; Borromeo, Charles; Brush, Matthew; Carbon, Seth; Conlin, Tom; Dunn, Nathan; Engelstad, Mark; Foster, Erin; Gourdine, J.P.; Jacobsen, Julius O.B.; Keith, Dan; Laraway, Bryan; Lewis, Suzanna E.; NguyenXuan, Jeremy; Shefchek, Kent; Vasilevsky, Nicole; Yuan, Zhou; Washington, Nicole; Hochheiser, Harry; Groza, Tudor; Smedley, Damian; Robinson, Peter N.; Haendel, Melissa A.

    2017-01-01

    The correlation of phenotypic outcomes with genetic variation and environmental factors is a core pursuit in biology and biomedicine. Numerous challenges impede our progress: patient phenotypes may not match known diseases, candidate variants may be in genes that have not been characterized, model organisms may not recapitulate human or veterinary diseases, filling evolutionary gaps is difficult, and many resources must be queried to find potentially significant genotype–phenotype associations. Non-human organisms have proven instrumental in revealing biological mechanisms. Advanced informatics tools can identify phenotypically relevant disease models in research and diagnostic contexts. Large-scale integration of model organism and clinical research data can provide a breadth of knowledge not available from individual sources and can provide contextualization of data back to these sources. The Monarch Initiative (monarchinitiative.org) is a collaborative, open science effort that aims to semantically integrate genotype–phenotype data from many species and sources in order to support precision medicine, disease modeling, and mechanistic exploration. Our integrated knowledge graph, analytic tools, and web services enable diverse users to explore relationships between phenotypes and genotypes across species. PMID:27899636

  19. The Monarch Initiative: an integrative data and analytic platform connecting phenotypes to genotypes across species

    DOE PAGES

    Mungall, Christopher J.; McMurry, Julie A.; Köhler, Sebastian; ...

    2016-11-29

    The correlation of phenotypic outcomes with genetic variation and environmental factors is a core pursuit in biology and biomedicine. Numerous challenges impede our progress: patient phenotypes may not match known diseases, candidate variants may be in genes that have not been characterized, model organisms may not recapitulate human or veterinary diseases, filling evolutionary gaps is difficult, and many resources must be queried to find potentially significant genotype-phenotype associations. Nonhuman organisms have proven instrumental in revealing biological mechanisms. Advanced informatics tools can identify phenotypically relevant disease models in research and diagnostic contexts. Large-scale integration of model organism and clinical research data can provide a breadth of knowledge not available from individual sources and can provide contextualization of data back to these sources. The Monarch Initiative (monarchinitiative.org) is a collaborative, open science effort that aims to semantically integrate genotype-phenotype data from many species and sources in order to support precision medicine, disease modeling, and mechanistic exploration. Our integrated knowledge graph, analytic tools, and web services enable diverse users to explore relationships between phenotypes and genotypes across species.

  20. Comparison of univariate and multivariate calibration for the determination of micronutrients in pellets of plant materials by laser induced breakdown spectrometry

    NASA Astrophysics Data System (ADS)

    Braga, Jez Willian Batista; Trevizan, Lilian Cristina; Nunes, Lidiane Cristina; Rufini, Iolanda Aparecida; Santos, Dário, Jr.; Krug, Francisco José

    2010-01-01

    The application of laser induced breakdown spectrometry (LIBS) aimed at the direct analysis of plant materials is a great challenge that still requires effort for its development and validation. To this end, a series of experimental approaches has been carried out to show that LIBS can be used as an alternative to wet-acid-digestion-based methods for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data rather than univariate regression on line emission intensities. In the present work, the performance of univariate and multivariate calibration, based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest were the best conditions for the analysis. In this particular application, the two models showed similar performance, but PLSR appeared to be more robust owing to a lower occurrence of outliers compared with the univariate method. The data suggest that efforts on sample presentation and on the fitness of standards for LIBS analysis are needed to fulfill the boundary conditions for matrix-independent development and validation.
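
    The univariate-versus-PLSR comparison can be sketched on synthetic data. In the toy example below (an illustration, not the authors' procedure; the spectra, channel positions, and noise levels are invented), a single-line least-squares calibration and a one-component NIPALS PLS model both recover the concentration of a noise-free test sample:

```python
import numpy as np

rng = np.random.default_rng(0)
conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # calibration concentrations
spectra = rng.normal(0.0, 0.01, (5, 50))         # baseline noise, 50 channels
spectra[:, 10] += 2.0 * conc                     # analyte emission line
spectra[:, 30] += rng.normal(0.0, 0.5, 5)        # uncorrelated interference

x_mean, y_mean = spectra.mean(axis=0), conc.mean()
Xc, yc = spectra - x_mean, conc - y_mean         # mean-centered data

# Univariate calibration: regress concentration on the single line intensity.
slope = (Xc[:, 10] @ yc) / (Xc[:, 10] @ Xc[:, 10])

# One-component PLS (NIPALS): weight vector from the covariance X^T y.
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                                       # scores
b = (yc @ t) / (t @ t)                           # inner regression coefficient

unknown = np.zeros(50)
unknown[10] = 2.0 * 3.0                          # noise-free 3.0-unit sample
pred_uni = (unknown - x_mean)[10] * slope + y_mean
pred_pls = ((unknown - x_mean) @ w) * b + y_mean
```

    With more PLS components, or with interferences overlapping the analyte line, the two approaches diverge more strongly, which is where the robustness difference reported above appears.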

  1. Are transnational tobacco companies' market access strategies linked to economic development models? A case study of South Korea.

    PubMed

    Lee, Sungkyu; Holden, Chris; Lee, Kelley

    2013-01-01

    Transnational tobacco companies (TTCs) have used varied strategies to access previously closed markets. Using TTCs' efforts to enter the South Korean market from the late 1980s as a case study, this article asks whether there are common patterns in these strategies that relate to the broader economic development models adopted by targeted countries. An analytical review of the existing literature on TTCs' efforts to access emerging markets was conducted to develop hypotheses relating TTCs' strategies to countries' economic development models. A case study of Korea was then undertaken based on analysis of internal tobacco industry documents. Findings were consistent with the hypothesis that TTCs' strategies in Korea were linked to Korea's export-oriented economic development model and its hostile attitude towards foreign investment. A fuller understanding of TTCs' strategies for expansion globally can be derived by locating them within the economic development models of specific countries or regions. Of foremost importance is the need for governments to carefully balance economic and public health policies when considering liberalisation.

  2. Are transnational tobacco companies’ market access strategies linked to economic development models? A case study of South Korea

    PubMed Central

    Lee, Sungkyu; Holden, Chris; Lee, Kelley

    2013-01-01

    Transnational tobacco companies (TTCs) have used varied strategies to access previously closed markets. Using TTCs’ efforts to enter the South Korean market from the late 1980s as a case study, this article asks whether there are common patterns in these strategies that relate to the broader economic development models adopted by targeted countries. An analytical review of the existing literature on TTCs’ efforts to access emerging markets was conducted to develop hypotheses relating TTCs’ strategies to countries’ economic development models. A case study of Korea was then undertaken based on analysis of internal tobacco industry documents. Findings were consistent with the hypothesis that TTCs’ strategies in Korea were linked to Korea’s export-oriented economic development model and its hostile attitude toward foreign investment. A fuller understanding of TTCs’ strategies for expansion globally can be derived by locating them within the economic development models of specific countries or regions. Of foremost importance is the need for governments to carefully balance economic and public health policies when considering liberalisation. PMID:23327486

  3. Fundamental Research on Percussion Drilling: Improved rock mechanics analysis, advanced simulation technology, and full-scale laboratory investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael S. Bruno

    This report summarizes the research efforts on the DOE-supported research project Percussion Drilling (DE-FC26-03NT41999), which aims to significantly advance the fundamental understanding of the physical mechanisms involved in combined percussion and rotary drilling, and thereby facilitate more efficient and lower-cost drilling and exploration of hard-rock reservoirs. The project has been divided into multiple tasks: literature reviews, analytical and numerical modeling, full-scale laboratory testing and model validation, and final report delivery. The literature reviews document the history, pros and cons, and rock-failure physics of percussion drilling in the oil and gas industries. Based on the current understanding, a conceptual drilling model is proposed for the modeling efforts. Both analytical and numerical approaches are deployed to investigate drilling processes such as drillbit penetration with compression, rotation and percussion; rock response with stress propagation, damage accumulation and failure; and debris transportation inside the annulus after being disintegrated from the rock. For rock mechanics modeling, a dynamic numerical tool has been developed to describe rock damage and failure, including rock crushing by compressive bit load, rock fracturing by both shearing and tensile forces, and rock weakening by repetitive compression-tension loading. Besides multiple failure criteria, the tool also includes a damping algorithm to dissipate oscillation energy and a fatigue/damage algorithm to update rock properties after each impact. From the model, the rate of penetration (ROP) and the rock failure history can be estimated. For cuttings transport in the annulus, a 3D numerical particle-flow model has been developed with the aid of analytical approaches. The tool can simulate cuttings movement at particle scale under laminar or turbulent fluid flow conditions and evaluate the efficiency of cuttings removal.
To calibrate the modeling efforts, a series of full-scale fluid-hammer drilling tests, as well as single-impact tests, was designed and executed. Both Berea sandstone and Mancos shale samples were used. In the single-impact tests, three impacts are applied sequentially at the same rock location to investigate the rock's response to repetitive loading. The crater depth and width are measured, as are the displacement and force in the rod and the force in the rock. Various pressure differences across the rock-indentor interface (i.e., bore pressure minus pore pressure) are used to investigate the pressure effect on rock penetration. For the hammer drilling tests, an industrial fluid hammer is used to drill under both underbalanced and overbalanced conditions. Besides calibrating the modeling tool, the data and cuttings collected from the tests point to several other important applications. For example, the different rock penetrations observed during single-impact tests may reveal why a fluid hammer behaves differently with diverse rock types and under various pressure conditions at the hole bottom. Likewise, the shape of the cuttings from the fluid-hammer tests, compared to those from traditional rotary drilling methods, may help identify the dominant failure mechanism that percussion drilling relies on; if so, encouraging that failure mechanism may improve hammer performance. The project is summarized in this report. Rather than compiling the information contained in the previous quarterly and technical reports, this report focuses on descriptions of tasks, findings, and conclusions, as well as on efforts to promote percussion drilling technologies to industry, including site visits, presentations, and publications. As part of the final deliverables, the 3D numerical model for rock mechanics is also attached.
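
    The fatigue/damage bookkeeping described above can be illustrated schematically. The sketch below is not the project's algorithm: it assumes a simple Miner-style rule in which each blow adds a power-law damage increment (the reference blow count `n_ref` and `exponent` are invented parameters), with failure when the accumulated damage reaches one:

```python
# Hedged illustration of repetitive-impact damage accumulation
# (Miner-style rule with an assumed power-law increment per blow).
def impacts_to_failure(stress, strength, n_ref=64.0, exponent=2.0):
    """Count identical impacts until cumulative damage reaches 1.0."""
    damage, blows = 0.0, 0
    while damage < 1.0:
        damage += (stress / strength) ** exponent / n_ref
        blows += 1
    return blows
```

    Under these assumptions, halving the impact stress quadruples the number of blows the rock survives, the kind of loading-history effect the fatigue algorithm tracks.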

  4. SINGLE PHASE ANALYTICAL MODELS FOR TERRY TURBINE NOZZLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zhang, Hongbin; Zou, Ling

    All BWR RCIC (Reactor Core Isolation Cooling) systems and PWR AFW (Auxiliary Feed Water) systems use a Terry turbine, which is composed of a wheel with turbine buckets and several groups of fixed nozzles and reversing chambers inside the turbine casing. The inlet steam is accelerated through the turbine nozzle and impacts the wheel buckets, generating work to drive the RCIC pump. As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC systems in the Fukushima accidents and to extend BWR RCIC and PWR AFW operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia National Laboratories' original work, have been developed and implemented in the RELAP-7 code to simulate the RCIC system. RELAP-7 is a new reactor system code currently under development with funding support from the U.S. Department of Energy. The RELAP-7 code is fully implicit, and the preconditioned Jacobian-free Newton-Krylov (JFNK) method is used to solve the discretized nonlinear system. This paper presents a set of analytical models for simulating the flow through the Terry turbine nozzles when the inlet fluid is pure steam. The implementation of the models in RELAP-7 is briefly discussed. In the Sandia model, the turbine bucket inlet velocity is provided by a reduced-order model obtained from a large number of CFD simulations. In this work, we propose an alternative method that uses an under-expanded jet model to obtain the velocity and thermodynamic conditions at the turbine bucket inlet. The models cover both the adiabatic expansion process inside the nozzle and the free expansion process outside the nozzle to reach the ambient pressure. The combined models are able to predict the steam mass flow rate and the supersonic velocity at the Terry turbine bucket entrance, which are the necessary input conditions for the Terry turbine rotor model.
The nozzle analytical models were validated against experimental data and benchmarked against CFD simulations, with which they generally agree well. The analytical models are suitable for implementation in a reactor system analysis code or severe accident code as part of the mechanistic and dynamical models used to understand RCIC behavior. Cases with two-phase flow at the turbine inlet will be pursued in future work.
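
    The two-stage expansion described above can be sketched with ideal-gas isentropic relations (an illustration only, not the RELAP-7 implementation; the steam properties and operating conditions are rough assumptions): choked flow fixes the throat mass flow, and free expansion to ambient pressure sets the supersonic jet velocity.

```python
from math import sqrt

GAMMA, R_STEAM = 1.3, 461.5   # assumed ideal-gas properties for steam

def choked_mass_flow(p0, t0, throat_area):
    """Choked (sonic-throat) mass flow rate [kg/s] from stagnation conditions."""
    term = (2.0 / (GAMMA + 1.0)) ** ((GAMMA + 1.0) / (2.0 * (GAMMA - 1.0)))
    return throat_area * p0 * sqrt(GAMMA / (R_STEAM * t0)) * term

def jet_exit_velocity(p0, t0, p_ambient):
    """Velocity [m/s] after isentropic expansion from stagnation to ambient."""
    cp = GAMMA * R_STEAM / (GAMMA - 1.0)
    return sqrt(2.0 * cp * t0 * (1.0 - (p_ambient / p0) ** ((GAMMA - 1.0) / GAMMA)))
```

    For a large pressure ratio (e.g., 7 MPa stagnation expanding to 0.1 MPa), the resulting jet velocity is well above the stagnation sound speed, consistent with the supersonic bucket-inlet conditions described above.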

  5. Field-incidence transmission of treated orthotropic and laminated composite panels

    NASA Technical Reports Server (NTRS)

    Koval, L. R.

    1983-01-01

    In an effort to improve understanding of noise transmission through the sidewalls of an aircraft fuselage, an analytical model was developed for the field-incidence transmission loss of an orthotropic or laminated composite infinite panel with layers of various noise insulation treatments. The model allows for four types of treatments: impervious limp septa, orthotropic trim panels, porous blankets, and air spaces; it also accounts for the effects of forward speed. Agreement between the model and transmission loss data for treated panels is fairly good overall, and agreement with transmission loss data for untreated composite panels is excellent.
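
    As a point of reference for such transmission-loss models, the classic mass-law limit for a single limp panel in air can be sketched as follows (an illustration, not the paper's layered model; the characteristic impedance of air, roughly 415 rayl, is an assumed constant):

```python
from math import log10, pi

def mass_law_tl(freq_hz, surface_density_kg_m2, rho_c=415.0):
    """Normal-incidence mass-law transmission loss [dB] of a limp panel in air.

    TL = 10 log10(1 + (omega * m / (2 * rho * c))^2), with omega = 2*pi*f.
    """
    x = pi * freq_hz * surface_density_kg_m2 / rho_c
    return 10.0 * log10(1.0 + x * x)
```

    In the high-frequency limit this gives the familiar result that doubling either frequency or surface density adds about 6 dB of transmission loss, the baseline against which septa, blankets, and air spaces buy additional attenuation.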

  6. Comparing the results of an analytical model of the no-vent fill process with no-vent fill test results for a 4.96 cubic meters (175 cubic feet) tank

    NASA Technical Reports Server (NTRS)

    Taylor, William J.; Chato, David J.

    1993-01-01

    The NASA Lewis Research Center (NASA/LeRC) has been investigating a no-vent fill method for refilling cryogenic storage tanks in low gravity. Analytical modeling based on the heat transfer of a droplet has successfully represented the process in 0.034 and 0.142 cubic m commercial dewars using liquid nitrogen and hydrogen. Recently a large tank (4.96 cubic m) was tested with hydrogen. This lightweight tank is representative of spacecraft construction. This paper presents efforts to model the large-tank test data. The droplet heat transfer model is found to overpredict the tank pressure level when compared to the large-tank data. A new model based on equilibrium thermodynamics has therefore been formulated and is compared to the published large-scale tank test results as well as some additional test runs with the same equipment. The new model matches the test results within the measurement uncertainty of the data, except for the initial transient wall cooldown, where it is conservative (i.e., it overpredicts the initial pressure spike found in this time frame).

  7. Error-analysis and comparison to analytical models of numerical waveforms produced by the NRAR Collaboration

    NASA Astrophysics Data System (ADS)

    Hinder, Ian; Buonanno, Alessandra; Boyle, Michael; Etienne, Zachariah B.; Healy, James; Johnson-McDaniel, Nathan K.; Nagar, Alessandro; Nakano, Hiroyuki; Pan, Yi; Pfeiffer, Harald P.; Pürrer, Michael; Reisswig, Christian; Scheel, Mark A.; Schnetter, Erik; Sperhake, Ulrich; Szilágyi, Bela; Tichy, Wolfgang; Wardell, Barry; Zenginoğlu, Anıl; Alic, Daniela; Bernuzzi, Sebastiano; Bode, Tanja; Brügmann, Bernd; Buchman, Luisa T.; Campanelli, Manuela; Chu, Tony; Damour, Thibault; Grigsby, Jason D.; Hannam, Mark; Haas, Roland; Hemberger, Daniel A.; Husa, Sascha; Kidder, Lawrence E.; Laguna, Pablo; London, Lionel; Lovelace, Geoffrey; Lousto, Carlos O.; Marronetti, Pedro; Matzner, Richard A.; Mösta, Philipp; Mroué, Abdul; Müller, Doreen; Mundim, Bruno C.; Nerozzi, Andrea; Paschalidis, Vasileios; Pollney, Denis; Reifenberger, George; Rezzolla, Luciano; Shapiro, Stuart L.; Shoemaker, Deirdre; Taracchini, Andrea; Taylor, Nicholas W.; Teukolsky, Saul A.; Thierfelder, Marcus; Witek, Helvi; Zlochower, Yosef

    2013-01-01

    The Numerical-Relativity-Analytical-Relativity (NRAR) collaboration is a joint effort between members of the numerical relativity, analytical relativity and gravitational-wave data analysis communities. The goal of the NRAR collaboration is to produce numerical-relativity simulations of compact binaries and use them to develop accurate analytical templates for the LIGO/Virgo Collaboration to use in detecting gravitational-wave signals and extracting astrophysical information from them. We describe the results of the first stage of the NRAR project, which focused on producing an initial set of numerical waveforms from binary black holes with moderate mass ratios and spins, as well as one non-spinning binary configuration which has a mass ratio of 10. All of the numerical waveforms are analysed in a uniform and consistent manner, with numerical errors evaluated using an analysis code created by members of the NRAR collaboration. We compare previously calibrated, non-precessing analytical waveforms, notably the effective-one-body (EOB) and phenomenological template families, to the newly produced numerical waveforms. We find that when the binary's total mass is ~100-200 M⊙, current EOB and phenomenological models of spinning, non-precessing binary waveforms have overlaps above 99% (for advanced LIGO) with all of the non-precessing-binary numerical waveforms with mass ratios ⩽4, when maximizing over binary parameters. This implies that the loss of event rate due to modelling error is below 3%. Moreover, the non-spinning EOB waveforms previously calibrated to five non-spinning waveforms with mass ratio smaller than 6 have overlaps above 99.7% with the numerical waveform with a mass ratio of 10, without even maximizing over the binary parameters.
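
    The overlap statistic underlying these comparisons reduces, for a flat (white) noise spectrum, to a normalized inner product of the two waveforms. A minimal sketch, ignoring the detector PSD weighting and the maximization over time, phase, and binary parameters used in the paper:

```python
import numpy as np

def overlap(h1, h2):
    """Normalized inner product of two real waveforms (flat-PSD simplification)."""
    num = float(np.dot(h1, h2))
    den = np.sqrt(float(np.dot(h1, h1)) * float(np.dot(h2, h2)))
    return num / den

t = np.linspace(0.0, 1.0, 4096)
template = np.sin(2.0 * np.pi * 30.0 * t)
signal = 0.8 * np.sin(2.0 * np.pi * 30.0 * t)   # same shape, weaker amplitude
mismatched = np.sin(2.0 * np.pi * 45.0 * t)     # different frequency content
```

    Because the statistic is normalized, an overall amplitude difference leaves the overlap at unity, while a genuine shape mismatch drives it toward zero; the 99% thresholds quoted above are evaluated with the full noise-weighted, parameter-maximized version of this quantity.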

  8. Project-Based Curriculum for Teaching Analytical Design to Freshman Engineering Students via Reconfigurable Trebuchets

    ERIC Educational Resources Information Center

    Herber, Daniel R.; Deshmukh, Anand P.; Mitchell, Marlon E.; Allison, James T.

    2016-01-01

    This paper presents an effort to revitalize a large introductory engineering course for incoming freshman students that teaches them analytical design through a project-based curriculum. This course was completely transformed from a seminar-based to a project-based course that integrates hands-on experimentation with analytical work. The project…

  9. Integrated optomechanical analysis and testing software development at MIT Lincoln Laboratory

    NASA Astrophysics Data System (ADS)

    Stoeckel, Gerhard P.; Doyle, Keith B.

    2013-09-01

    Advanced analytical software capabilities are being developed to advance the design of prototypical hardware in the Engineering Division at MIT Lincoln Laboratory. The current effort is focused on the integration of analysis tools tailored to the work flow, organizational structure, and current technology demands. These tools are being designed to provide superior insight into the interdisciplinary behavior of optical systems and enable rapid assessment and execution of design trades to optimize the design of optomechanical systems. The custom software architecture is designed to exploit and enhance the functionality of existing industry standard commercial software, provide a framework for centralizing internally developed tools, and deliver greater efficiency, productivity, and accuracy through standardization, automation, and integration. Specific efforts have included the development of a feature-rich software package for Structural-Thermal-Optical Performance (STOP) modeling, advanced Line Of Sight (LOS) jitter simulations, and improved integration of dynamic testing and structural modeling.

  10. Satellite-tracking and earth-dynamics research programs. [NASA Programs on satellite orbits and satellite ground tracks of geodetic satellites

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Observations and research progress of the Smithsonian Astrophysical Observatory are reported. Satellite tracking networks (ground stations) are discussed, and the equipment (Baker-Nunn cameras) used to observe the satellites is described. Improvements in the accuracy of the ground stations' laser ranging system are discussed, as are research efforts in satellite geodesy (tides, gravity anomalies, plate tectonics). The use of data processing for geophysical data is examined, and a data base for the Earth and Ocean Physics Applications Program is proposed. Analytical models of the earth's motion (computerized simulation) are described, as is the computation (numerical integration and algorithms) of satellite orbits affected by the earth's albedo. Research efforts in the study of the atmosphere are examined (the effect of drag on satellite motion), and models of the atmosphere based on satellite data are described.
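
    The computation of satellite orbits by numerical integration can be sketched in its simplest form: fourth-order Runge-Kutta propagation of the two-body problem (perturbations such as drag and albedo, which the reported work models, are omitted from this illustration):

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def derivative(state):
    """Time derivative of [x, y, z, vx, vy, vz] under point-mass gravity."""
    r, v = state[:3], state[3:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = derivative(state)
    k2 = derivative(state + 0.5 * dt * k1)
    k3 = derivative(state + 0.5 * dt * k2)
    k4 = derivative(state + dt * k3)
    return state + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Circular orbit at 7000 km radius, propagated for one full period.
r0 = 7000.0
state = np.array([r0, 0.0, 0.0, 0.0, np.sqrt(MU / r0), 0.0])
period = 2.0 * np.pi * np.sqrt(r0 ** 3 / MU)
dt = period / 2000.0
for _ in range(2000):
    state = rk4_step(state, dt)
# After one period the radius stays very close to the initial 7000 km.
```

    Real orbit-determination codes add perturbation accelerations (drag, albedo radiation pressure, gravity-field harmonics) to the same integration loop.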

  11. A small, single stage orifice pulse tube cryocooler demonstration

    NASA Technical Reports Server (NTRS)

    Hendricks, John B.

    1990-01-01

    This final report summarizes and presents the analytical and experimental progress made in the present effort, whose principal objective was the demonstration of a 0.25 Watt, 80 Kelvin orifice pulse tube refrigerator. The experimental apparatus is described, and the design of a partially optimized pulse tube refrigerator is included. The refrigerator demonstrates an ultimate temperature of 77 K, has a projected cooling power of 0.18 Watts at 80 K, and has a measured cooling power of 1 Watt at 97 K, with an electrical efficiency of 250 Watts/Watt, much better than previous pulse tube refrigerators. Also included are a model of the pulse tube refrigerator that estimates pressure ratio and mass flow from component physical characteristics, a model of pulse tube operation based on generalized analysis that is adequate to support local optimization of existing designs, and a model of regenerator performance based on an analogy to counterflow heat exchangers.
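
    The counterflow heat-exchanger analogy for the regenerator rests on the standard effectiveness-NTU relation, sketched below (the textbook formula, not the report's own model):

```python
from math import exp

def counterflow_effectiveness(ntu, cr):
    """Effectiveness of a counterflow heat exchanger.

    ntu: number of transfer units; cr: capacity-rate ratio Cmin/Cmax.
    """
    if abs(cr - 1.0) < 1e-12:          # balanced-flow limit
        return ntu / (1.0 + ntu)
    e = exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)
```

    A regenerator good enough for an 80 K cooler must sit far out on this curve: effectiveness climbs toward unity only at large NTU, which is why regenerator losses dominate small pulse tube designs.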

  12. Comparison of UWCC MOX fuel measurements to MCNP-REN calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.; Baker, M.; Jie, R.

    1998-12-31

    The development of neutron coincidence counting has greatly improved the accuracy and versatility of neutron-based techniques to assay fissile materials. Today, the shift register analyzer connected to either a passive or active neutron detector is widely used by both domestic and international safeguards organizations. The continued development of these techniques and detectors makes extensive use of predictions of detector response obtained from Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model, as it is currently used, fails to accurately predict detector response in highly multiplying media such as mixed-oxide (MOX) light water reactor fuel assemblies. For this reason, efforts have been made to modify the currently used Monte Carlo codes and to develop new analytical methods so that this model is not required to predict detector response. The authors describe their efforts to modify a widely used Monte Carlo code for this purpose and compare calculational results with experimental measurements.
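
    A central point-model quantity in this context is the leakage multiplication of the assayed item. A minimal sketch using the standard form M = (1 - p)/(1 - p·ν̄ᵢ), where p is the probability that a neutron induces fission and ν̄ᵢ is the induced-fission multiplicity (the parameter values below are illustrative, not measured):

```python
# Point-model leakage multiplication (standard form; illustrative values).
def leakage_multiplication(p_fission, nu_induced):
    """M = (1 - p) / (1 - p * nu): neutrons leaking per source neutron."""
    return (1.0 - p_fission) / (1.0 - p_fission * nu_induced)
```

    It is exactly this single-parameter description of multiplication that breaks down for spatially extended, highly multiplying items such as MOX fuel assemblies, motivating the Monte Carlo modifications described above.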

  13. Investigation of Dynamic Aerodynamics and Control of Wind Turbine Sections Under Relevant Inflow/Blade Attitude Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naughton, Jonathan W.

    2014-08-05

    The growth of wind turbines has led to highly variable loading on the blades. Coupled with the relatively reduced stiffness of longer blades, the need to control loading on the blades has become important. One method of controlling loads and maximizing energy extraction is local control of the flow on the wind turbine blades. The goal of the present work was to better understand the sources of the unsteady loading and then to control them. This was accomplished through an experimental effort to characterize the unsteadiness and the effect of a Gurney flap on the flow, as well as an analytical effort to develop control approaches. It was planned to combine these two efforts to demonstrate control of a wind tunnel test model, but that final piece remains to be accomplished.

  14. Curriculum Mapping with Academic Analytics in Medical and Healthcare Education.

    PubMed

    Komenda, Martin; Víta, Martin; Vaitsis, Christos; Schwarz, Daniel; Pokorná, Andrea; Zary, Nabil; Dušek, Ladislav

    2015-01-01

    No universal solution, based on an approved pedagogical approach, exists to parametrically describe, effectively manage, and clearly visualize a higher education institution's curriculum, including tools for unveiling relationships inside curricular datasets. We aim to solve the issue of medical curriculum mapping to improve understanding of the complex structure and content of medical education programs. Our effort is based on the long-term development and implementation of an original web-based platform, which supports an outcomes-based approach to medical and healthcare education and is suitable for repeated updates and adaptation to curriculum innovations. We adopted data exploration and visualization approaches in the domain of medical curriculum innovations in higher education institutions. We have developed a robust platform, covering detailed formal metadata specifications down to the level of learning units, interconnections, and learning outcomes, in accordance with Bloom's taxonomy and with direct links to a particular biomedical nomenclature. Furthermore, we used selected modeling techniques and data mining methods to generate academic analytics reports from medical curriculum mapping datasets. We present a solution that allows users to effectively optimize a curriculum structure that is described with appropriate metadata, such as course attributes, learning units and outcomes, a standardized vocabulary nomenclature, and a tree structure of essential terms. We present a case study implementation that includes effective support for the curriculum reengineering efforts of academics through a comprehensive overview of the General Medicine study program. 
Moreover, we introduce a deep content analysis of a dataset captured with the curriculum mapping platform; this may assist in detecting potentially problematic areas and hence help construct a comprehensive overview for a subsequent global in-depth inspection of the medical curriculum. We have proposed, developed, and implemented an original framework for medical and healthcare curriculum innovation and harmonization, including a planning model, a mapping model, and selected academic analytics extracted with the use of data mining.

  15. Curriculum Mapping with Academic Analytics in Medical and Healthcare Education

    PubMed Central

    Komenda, Martin; Víta, Martin; Vaitsis, Christos; Schwarz, Daniel; Pokorná, Andrea; Zary, Nabil; Dušek, Ladislav

    2015-01-01

    Background No universal solution, based on an approved pedagogical approach, exists to parametrically describe, effectively manage, and clearly visualize a higher education institution’s curriculum, including tools for unveiling relationships inside curricular datasets. Objective We aim to solve the issue of medical curriculum mapping to improve understanding of the complex structure and content of medical education programs. Our effort is based on the long-term development and implementation of an original web-based platform, which supports an outcomes-based approach to medical and healthcare education and is suitable for repeated updates and adaptation to curriculum innovations. Methods We adopted data exploration and visualization approaches in the domain of medical curriculum innovations in higher education institutions. We have developed a robust platform, covering detailed formal metadata specifications down to the level of learning units, interconnections, and learning outcomes, in accordance with Bloom’s taxonomy and with direct links to a particular biomedical nomenclature. Furthermore, we used selected modeling techniques and data mining methods to generate academic analytics reports from medical curriculum mapping datasets. Results We present a solution that allows users to effectively optimize a curriculum structure that is described with appropriate metadata, such as course attributes, learning units and outcomes, a standardized vocabulary nomenclature, and a tree structure of essential terms. We present a case study implementation that includes effective support for the curriculum reengineering efforts of academics through a comprehensive overview of the General Medicine study program. 
Moreover, we introduce a deep content analysis of a dataset captured with the curriculum mapping platform; this may assist in detecting potentially problematic areas and hence help construct a comprehensive overview for a subsequent global in-depth inspection of the medical curriculum. Conclusions We have proposed, developed, and implemented an original framework for medical and healthcare curriculum innovation and harmonization, including a planning model, a mapping model, and selected academic analytics extracted with the use of data mining. PMID:26624281

  16. System identification of the JPL micro-precision interferometer truss - Test-analysis reconciliation

    NASA Technical Reports Server (NTRS)

    Red-Horse, J. R.; Marek, E. L.; Levine-West, M.

    1993-01-01

    The JPL Micro-Precision Interferometer (MPI) is a testbed for studying the use of control-structure interaction technology in the design of space-based interferometers. A layered control architecture will be employed to regulate the interferometer optical system to tolerances in the nanometer range. An important aspect of designing and implementing the control schemes for such a system is the need for high fidelity, test-verified analytical structural models. This paper focuses on one aspect of the effort to produce such a model for the MPI structure, test-analysis model reconciliation. Pretest analysis, modal testing, and model refinement results are summarized for a series of tests at both the component and full system levels.

  17. Free-floating dual-arm robots for space assembly

    NASA Technical Reports Server (NTRS)

    Agrawal, Sunil Kumar; Chen, M. Y.

    1994-01-01

    Freely moving systems in space conserve linear and angular momentum. When moving systems collide, their velocities are altered by the transfer of momentum. The development of strategies for assembly in a free-floating work environment requires a good understanding of primitives such as self-motion of the robot, propulsion of the robot by onboard thrusters, docking of the robot, retrieval of an object from a collection of objects, and release of an object into an object pool. The analysis of such assemblies involves not only kinematics and rigid body dynamics but also the collision and impact dynamics of multibody systems. In an effort to understand such assemblies in the zero-gravity space environment, we are currently developing at Ohio University a free-floating assembly facility with a dual-arm planar robot equipped with thrusters, a free-floating material table, and a free-floating assembly table. The objective is to pick up workpieces from the material table and combine them into prespecified assemblies. This paper presents analytical models of assembly primitives and strategies for overall assembly. A computer simulation of an assembly is developed using the analytical models. The experimental facility will be used to verify the theoretical predictions.
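
    The simplest of these momentum-transfer primitives, docking, follows directly from conservation of linear momentum: a perfectly plastic capture leaves both bodies moving with a common velocity. A one-axis sketch (the facility's actual planar dynamics also involve angular momentum and multibody impact):

```python
def dock_velocity(m1, v1, m2, v2):
    """Common post-docking velocity of two free-floating bodies.

    Perfectly plastic (capture) collision along one axis:
    total momentum m1*v1 + m2*v2 is conserved by the combined body.
    """
    return (m1 * v1 + m2 * v2) / (m1 + m2)
```

    For example, a 2 kg robot moving at 3 m/s that captures a stationary 1 kg workpiece drifts on at 2 m/s, a velocity change the thrusters must then correct.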

  18. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.

    1997-01-01

    Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective, the new generation of ceramic composites and intermetallics offers significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve, a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking, despite the fact that neither of their constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is borne increasingly by the fibers until the ultimate strength of the composite is reached. The modeling efforts supported under this research effort have thus focused on predicting this sort of behavior. For single crystal intermetallics, the issues that motivated the technology development involved questions relating to material behavior and component design. 
The research effort supported by this grant therefore had to determine the statistical nature and source of fracture in a high-strength NiAl single crystal turbine blade material; map a simplistic failure strength envelope of the material; develop a statistically based reliability computer algorithm; verify the reliability model and computer algorithm; and model stator vanes for rig tests. Establishing design protocols that enable the engineer to analyze and predict the mechanical behavior of ceramic composites and intermetallics would mitigate the prototype (trial and error) approach currently used by the engineering community. The primary objective of the research effort supported by this short-term grant is the continued creation of enabling technologies for the macroanalysis of components fabricated from ceramic composites and intermetallic material systems. The creation of enabling technologies aids in shortening the product development cycle of components fabricated from these new high-technology materials.

  20. Getting Open Source Right for Big Data Analytics: Software Sharing, Governance, Collaboration and Most of All, Fun!

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.

    2013-12-01

    A wave of open source big data analytic infrastructure is currently shaping government, the private sector, and academia. Projects are consuming, adapting, and contributing back to various ecosystems of software, e.g., the Apache Hadoop project and its ecosystem of related efforts including Hive, HBase, Pig, Oozie, Ambari, Knox, Tez, and YARN, to name a few; the Berkeley AMPLab stack, which includes Spark, Shark, Mesos, Tachyon, BlinkDB, MLBase, and other emerging efforts; MapR and its related stack of technologies; and offerings from commercial companies building products around these tools, e.g., the Hortonworks Data Platform (HDP), Cloudera's CDH project, etc. Though the technologies all offer different capabilities, including low-latency/in-memory support versus record-oriented file I/O, high availability, and support for the MapReduce programming paradigm or other dataflow/workflow constructs, there is a common thread that binds these products: they are all released under an open source license, e.g., Apache 2.0, MIT, BSD, GPL/LGPL, etc.; all thrive in various ecosystems, such as Apache or the Berkeley AMPLab; all are developed collaboratively; and all provide plug-in architecture models and methodologies that allow others to contribute and participate via various community models. This talk will cover the open source and governance aspects of the aforementioned Big Data ecosystems and point out the differences, subtleties, and implications of those differences. The discussion will be by example, using several national deployments and Big Data initiatives stemming from the Administration, including DARPA's XDATA program, NASA's CMAC program, and NSF's EarthCube and geosciences BigData projects. Lessons learned from these efforts in terms of the open source aspects of these technologies will help guide the AGU community in their use, deployment, and understanding.

  1. Confinement Driven by Scalar Field in 4d Non Abelian Gauge Theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chabab, Mohamed

    2007-01-12

    We review some of the most recent work on confinement in 4d gauge theories with a massive scalar field (dilaton). Emphasis is put on the derivation of confining analytical solutions to the Coulomb problem versus dilaton effective couplings to gauge terms. It is shown that these effective theories can be relevant to model quark confinement and may shed some light on the confinement mechanism. Moreover, the study of the interquark potential derived from the Dick model in the heavy meson sector shows that phenomenological investigation of this mechanism is more than justified and deserves further effort.

  2. Use of MSC/NASTRAN for the thermal analysis of the Space Shuttle Orbiter braking system

    NASA Technical Reports Server (NTRS)

    Shu, James; Mccann, David

    1987-01-01

    A description is given of the thermal modeling and analysis effort being conducted to investigate the transient temperature and thermal stress characteristics of the Space Shuttle Orbiter brake components and subsystems. Models are constructed of the brake stator as well as of the entire brake assembly to analyze the temperature distribution and thermal stress during the landing and braking process. These investigations are carried out on a UNIVAC computer system with MSC/NASTRAN Version 63. Analytical results and solution methods are presented and comparisons are made with SINDA results.

  3. A Framework for Widespread Replication of a Highly Spatially Resolved Childhood Lead Exposure Risk Model

    PubMed Central

    Kim, Dohyeong; Galeano, M. Alicia Overstreet; Hull, Andrew; Miranda, Marie Lynn

    2008-01-01

    Background: Preventive approaches to childhood lead poisoning are critical for addressing this longstanding environmental health concern. Moreover, increasing evidence of cognitive effects of blood lead levels < 10 μg/dL highlights the need for improved exposure prevention interventions. Objectives: Geographic information system–based childhood lead exposure risk models, especially if executed at highly resolved spatial scales, can help identify children most at risk of lead exposure, as well as prioritize and direct housing and health-protective intervention programs. However, developing highly resolved spatial data requires labor- and time-intensive geocoding and analytical processes. In this study we evaluated the benefit of increased effort spent geocoding in terms of improved performance of lead exposure risk models. Methods: We constructed three childhood lead exposure risk models based on established methods but using different levels of geocoded data from blood lead surveillance, county tax assessors, and the 2000 U.S. Census for 18 counties in North Carolina. We used the results to predict lead exposure risk levels mapped at the individual tax parcel unit. Results: The models performed well enough to identify high-risk areas for targeted intervention, even with a relatively low level of effort on geocoding. Conclusions: This study demonstrates the feasibility of widespread replication of highly spatially resolved childhood lead exposure risk models. The models guide resource-constrained local health and housing departments and community-based organizations on how best to expend their efforts in preventing and mitigating lead exposure risk in their communities. PMID:19079729

  4. Development and Implementation of Mechanistic Terry Turbine Models in RELAP-7 to Simulate RCIC Normal Operation Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Haihua; Zou, Ling; Zhang, Hongbin

    As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC (Reactor Core Isolation Cooling) systems in the Fukushima accidents and to extend BWR RCIC and PWR AFW (Auxiliary Feed Water) operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia's original work [1], have been developed and implemented in the RELAP-7 code to simulate the RCIC system. In 2016, our effort focused on normal working conditions of the RCIC system. More complex off-design conditions will be pursued in later years when more data are available. In the Sandia model, the turbine stator inlet velocity is provided by a reduced-order model obtained from a large number of CFD (computational fluid dynamics) simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine stator inlet. The models include both an adiabatic expansion process inside the nozzle and a free expansion process outside of the nozzle to ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input information for the Terry turbine rotor model. The analytical models for the nozzle were validated with experimental data and benchmarked with CFD simulations, and they generally agree well with both. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand RCIC behavior. The newly developed nozzle models and the turbine rotor model, modified from Sandia's original work, have been implemented into RELAP-7, along with the original Sandia Terry turbine model. A new pump model has also been developed and implemented to couple with the Terry turbine model.
An input model was developed to test the Terry turbine RCIC system, and it generates reasonable results. Both the INL RCIC model and the Sandia RCIC model produce results matching major rated parameters, such as the rotational speed, pump torque, and turbine shaft work, for the normal operation condition. The Sandia model is more sensitive to the turbine outlet pressure than the INL model. The next step will be to further refine the Terry turbine models by including two-phase flow cases so that off-design conditions can be simulated. The pump model could also be enhanced with the use of homologous curves.
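    The adiabatic-expansion step of such a nozzle model follows standard isentropic relations; a minimal sketch under ideal-gas assumptions (the gamma and R values are rough figures for steam, and this is an illustration, not the RELAP-7 implementation):

```python
import math

def isentropic_exit_velocity(T0, p0, p_exit, gamma=1.3, R=461.5):
    """Exit velocity [m/s] for an ideal-gas isentropic expansion from
    stagnation conditions T0 [K], p0 [Pa] down to exit pressure p_exit [Pa].
    gamma and R here are assumed, approximate values for steam."""
    cp = gamma * R / (gamma - 1.0)                          # specific heat, J/(kg K)
    T_exit = T0 * (p_exit / p0) ** ((gamma - 1.0) / gamma)  # isentropic temperature drop
    return math.sqrt(2.0 * cp * (T0 - T_exit))              # energy balance: h0 - h = v^2/2

# Illustrative numbers: 7 MPa, 560 K stagnation steam expanded to 1 MPa
v = isentropic_exit_velocity(560.0, 7.0e6, 1.0e6)
```

    The subsequent free expansion outside the nozzle down to ambient pressure would raise this velocity further toward the supersonic values delivered to the turbine bucket entrance.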

  5. Sensor failure detection for jet engines

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.

    1988-01-01

    The use of analytical redundancy to improve gas turbine engine control system reliability through sensor failure detection, isolation, and accommodation is surveyed. Both the theoretical and application papers that form the technology base of turbine engine analytical redundancy research are discussed. Also, several important application efforts are reviewed. An assessment of the state-of-the-art in analytical redundancy technology is given.

  6. A Petri-net coordination model for an intelligent mobile robot

    NASA Technical Reports Server (NTRS)

    Wang, F.-Y.; Kyriakopoulos, K. J.; Tsolkas, A.; Saridis, G. N.

    1990-01-01

    The authors present a Petri net model of the coordination level of an intelligent mobile robot system (IMRS). The purpose of this model is to specify the integration of the individual efforts on path planning, supervisory motion control, and vision systems that are necessary for the autonomous operation of the mobile robot in a structured dynamic environment. This is achieved by analytically modeling the various units of the system as Petri net transducers and explicitly representing the task precedence and information dependence among them. The model can also be used to simulate task processing and to evaluate the efficiency of operations and the reliability of decisions in the coordination level of the IMRS. Some simulation results on task processing and learning are presented.
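    The Petri net formalism underlying such a coordination model reduces to tokens in places and transitions that fire only when every input place is marked; a minimal sketch (the place and transition names are invented for illustration and are not taken from the IMRS model):

```python
# Minimal Petri net sketch: places hold tokens; a transition is enabled when
# every input place holds at least one token, and firing moves one token from
# each input place to each output place.

def enabled(marking, inputs):
    return all(marking[p] >= 1 for p in inputs)

def fire(marking, inputs, outputs):
    if not enabled(marking, inputs):
        raise RuntimeError("transition not enabled")
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] += 1
    return m

# Illustrative precedence: vision and path planning must both complete
# before supervisory motion control may start.
marking = {"vision_done": 1, "path_planned": 1, "motion_active": 0}
marking = fire(marking, ["vision_done", "path_planned"], ["motion_active"])
```

    Modeling each subsystem this way makes the task precedence explicit and lets the coordination level be simulated by repeatedly firing enabled transitions.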

  7. Complete modeling of rotary ultrasonic motors actuated by traveling flexural waves

    NASA Astrophysics Data System (ADS)

    Bao, Xiaoqi; Bar-Cohen, Yoseph

    2000-06-01

    Ultrasonic rotary motors have the potential to meet NASA's need for compact actuators, and they are being developed for miniature telerobotic applications. These motors are being adapted for operation in the harsh space environment, which includes cryogenic temperatures and vacuum, and analytical tools for the design of efficient motors are being developed. A hybrid analytical model was developed to address a complete ultrasonic motor as a system. Included in this model is the influence of the rotor dynamics, which was determined experimentally to be important to the motor performance. The analysis employs a 3D finite element model to express the dynamic characteristics of the stator, with its piezoelectric elements, and the rotor. The details of the stator, including the teeth, piezoelectric ceramic, geometry, bonding layer, etc., are included to support practical USM designs. A brush model is used for the interface layer, and Coulomb's law for the friction between the stator and the rotor. The theoretical predictions were corroborated experimentally for the motor. In parallel, efforts have been made to determine the thermal and vacuum performance of these motors. To explore telerobotic applications for USMs, a robotic arm was constructed with such motors.

  8. Functional neuroimaging correlates of thinking flexibility and knowledge structure in memory: Exploring the relationships between clinical reasoning and diagnostic thinking.

    PubMed

    Durning, Steven J; Costanzo, Michelle E; Beckman, Thomas J; Artino, Anthony R; Roy, Michael J; van der Vleuten, Cees; Holmboe, Eric S; Lipner, Rebecca S; Schuwirth, Lambert

    2016-06-01

    Diagnostic reasoning involves the thinking steps up to and including arrival at a diagnosis. Dual process theory posits that a physician's thinking is based on both non-analytic thinking, which is fast and subconscious, and analytic thinking, which is slower, more conscious, effortful, and characterized by comparing and contrasting alternatives. Expertise in clinical reasoning may relate to the two dimensions measured by the diagnostic thinking inventory (DTI): memory structure and flexibility in thinking. We explored the functional magnetic resonance imaging (fMRI) correlates of these two aspects of the DTI. Participants answered and reflected upon multiple-choice questions (MCQs) during fMRI, and completed a DTI shortly after the scan. The brain processes associated with the two dimensions of the DTI were correlated with fMRI phases in experienced physicians: flexibility in thinking was assessed during analytical clinical reasoning, memory structure during non-analytical clinical reasoning, and the total DTI during both non-analytical and analytical reasoning. Each DTI component was associated with distinct functional neuroanatomic activation patterns, particularly in the prefrontal cortex. Our findings support conceptual models of diagnostic thinking and indicate mechanisms through which cognitive demands may induce functional adaptation within the prefrontal cortex. This provides additional objective validity evidence for the use of the DTI in medical education and practice settings.

  9. Rational quality assessment procedure for less-investigated herbal medicines: Case of a Congolese antimalarial drug with an analytical report.

    PubMed

    Tshitenge, Dieudonné Tshitenge; Ioset, Karine Ndjoko; Lami, José Nzunzu; Ndelo-di-Phanzu, Josaphat; Mufusama, Jean-Pierre Koy Sita; Bringmann, Gerhard

    2016-04-01

    Herbal medicines are the most globally used type of medical drugs. Their high cultural acceptability is due to their experienced safety and efficacy over centuries of use. Many of them are still phytochemically less-investigated, and are used without standardization or quality control. Choosing SIROP KILMA, an authorized Congolese antimalarial phytomedicine, as a model case, our study describes an interdisciplinary approach for the rational quality assessment of herbal drugs in general. It combines an authentication step of the herbal remedy prior to any fingerprinting, the isolation of the major constituents, the development and validation of an HPLC-DAD analytical method with internal markers, and the application of the method to several batches of the herbal medicine (here KILMA), thus permitting the establishment of a quantitative fingerprint. From the constitutive plants of KILMA, acteoside, isoacteoside, stachannin A, and pectolinarigenin-7-O-glucoside were isolated, and acteoside was used as the prime marker for the validation of the analytical method. This study contributes to the efforts of the WHO toward standards enabling the analytical evaluation of herbal materials. Moreover, the paper describes the first phytochemical and analytical report on a marketed Congolese phytomedicine.

  10. Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Dalton, Angela C.; Dale, Crystal

    2014-06-01

    Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk-based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.
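    At its core, coupling risk domains in this way means sampling shared uncertain parameters and letting one domain's draw feed the next model, so uncertainty propagates across domains; a hypothetical Monte Carlo sketch (the distributions, dollar figures, and risk metric are invented for illustration, not taken from the report):

```python
import random

def simulate(n=10_000, seed=42):
    """Monte Carlo sketch of cross-domain risk coupling: a technical-domain
    draw (capture efficiency) and a maturity-domain draw (capex overrun)
    both feed the profitability model. Returns the 5th-percentile profit,
    a simple downside-risk metric."""
    rng = random.Random(seed)
    profits = []
    for _ in range(n):
        efficiency = rng.gauss(0.90, 0.03)            # technical-performance domain
        capex_overrun = rng.lognormvariate(0.0, 0.2)  # technological-maturity domain
        revenue = 100.0 * efficiency                  # profitability depends on both
        cost = 60.0 * capex_overrun
        profits.append(revenue - cost)
    profits.sort()
    return profits[int(0.05 * n)]                     # 5th-percentile profit

var5 = simulate()
```

    Comparing this downside metric across candidate technologies is one simple way a common probabilistic framework supports risk-informed investment decisions.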

  11. Answer first: Applying the heuristic-analytic theory of reasoning to examine student intuitive thinking in the context of physics

    NASA Astrophysics Data System (ADS)

    Kryjevskaia, Mila; Stetzer, MacKenzie R.; Grosz, Nathaniel

    2014-12-01

    We have applied the heuristic-analytic theory of reasoning to interpret inconsistencies in student reasoning approaches to physics problems. This study was motivated by an emerging body of evidence suggesting that student conceptual and reasoning competence demonstrated on one task often fails to be exhibited on another. Indeed, even after instruction specifically designed to address student conceptual and reasoning difficulties identified by rigorous research, many undergraduate physics students fail to build reasoning chains from fundamental principles even though they possess the required knowledge and skills to do so. Instead, they often rely on a variety of intuitive reasoning strategies. In this study, we developed and employed a methodology that allowed for the disentanglement of student conceptual understanding and reasoning approaches through the use of sequences of related questions. We have shown that the heuristic-analytic theory of reasoning can be used to account for, in a mechanistic fashion, the observed inconsistencies in student responses. In particular, we found that students tended to apply their correct ideas in a selective manner that supported a specific and likely anticipated conclusion, while neglecting to employ the same ideas to refute an erroneous intuitive conclusion. The observed reasoning patterns were consistent with the heuristic-analytic theory, according to which reasoners develop a "first-impression" mental model and then construct an argument in support of the answer suggested by this model. We discuss implications for instruction and argue that efforts to improve student metacognition, which serves to regulate the interaction between intuitive and analytical reasoning, are likely to lead to improved student reasoning.

  12. Flexible manipulator control experiments and analysis

    NASA Technical Reports Server (NTRS)

    Yurkovich, S.; Ozguner, U.; Tzes, A.; Kotnik, P. T.

    1987-01-01

    Modeling and control design for flexible manipulators, from both an experimental and an analytical viewpoint, are described. From the application perspective, an ongoing effort at the Ohio State University, where experimentation on a single-link flexible arm is underway, is described. Several unique features of this study are noted. First, the manipulator arm is slewed by a direct-drive dc motor and has a rigid counterbalance appendage. Current experimentation is from two viewpoints: (1) rigid-body slewing and vibration control via actuation with the hub motor, and (2) vibration suppression through the use of structure-mounted proof-mass actuation at the tip. Such an application to manipulator control is of interest particularly in the design of space-based telerobotic control systems, but has received little attention to date. From an analytical viewpoint, parameter estimation techniques within the closed loop for self-tuning adaptive control approaches are discussed. Also introduced is a control approach based on output feedback and frequency weighting to counteract effects of spillover in reduced-order model design. A model of the flexible manipulator based on experimental measurements is evaluated for such estimation and control approaches.

  13. Pilot testing of SHRP 2 reliability data and analytical products: Minnesota.

    DOT National Transportation Integrated Search

    2015-01-01

    The Minnesota pilot site has undertaken an effort to test data and analytical tools developed through the Strategic Highway Research Program (SHRP) 2 Reliability focus area. The purpose of these tools is to facilitate the improvement of travel time r...

  14. A Comparison of Lifting-Line and CFD Methods with Flight Test Data from a Research Puma Helicopter

    NASA Technical Reports Server (NTRS)

    Bousman, William G.; Young, Colin; Toulmay, Francois; Gilbert, Neil E.; Strawn, Roger C.; Miller, Judith V.; Maier, Thomas H.; Costes, Michel; Beaumier, Philippe

    1996-01-01

    Four lifting-line methods were compared with flight test data from a research Puma helicopter and the accuracy assessed over a wide range of flight speeds. Hybrid Computational Fluid Dynamics (CFD) methods were also examined for two high-speed conditions. A parallel analytical effort was performed with the lifting-line methods to assess the effects of modeling assumptions and this provided insight into the adequacy of these methods for load predictions.

  15. Summary of nozzle-exhaust plume flowfield analyses related to space shuttle applications

    NASA Technical Reports Server (NTRS)

    Penny, M. M.

    1975-01-01

    Exhaust plume shape simulation is studied, with the major effort directed toward computer program development and analytical support of various plume related problems associated with the space shuttle. Program development centered on (1) two-phase nozzle-exhaust plume flows, (2) plume impingement, and (3) support of exhaust plume simulation studies. Several studies were also conducted to provide full-scale data for defining exhaust plume simulation criteria. Model nozzles used in launch vehicle test were analyzed and compared to experimental calibration data.

  16. Economic model for QoS guarantee on the Internet

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Wei, Jiaolong

    2001-09-01

    This paper describes a QoS guarantee architecture suited for best-effort environments, based on ideas from microeconomics and non-cooperative game theory. First, an analytic model is developed for the study of resource allocation in the Internet. Then we show that with a simple pricing mechanism (simple from both the network-implementation and the users' points of view), we are able to provide QoS guarantees at the per-flow level without resource allocation, complicated scheduling mechanisms, or per-flow state in the core network. Unlike previous work in this area, we extend the basic model to support inelastic applications, which require minimum bandwidth guarantees for a given time period, by introducing a derivatives market.
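    The flavor of such pricing-based allocation can be sketched with a textbook market-clearing model under logarithmic utilities (an assumption made here for illustration; the paper's actual mechanism may differ): a flow with willingness-to-pay w_i facing price p demands w_i/p, and a single price clears the link.

```python
def market_clearing_allocation(weights, capacity):
    """One price p clears a link of the given capacity when each flow's
    log-utility-optimal demand w_i/p sums to capacity, i.e. p = sum(w)/C."""
    price = sum(weights) / capacity
    return price, [w / price for w in weights]

# Three flows with willingness-to-pay 1, 2, 3 sharing a 12-unit link
price, alloc = market_clearing_allocation([1.0, 2.0, 3.0], capacity=12.0)
# Allocations come out proportional to willingness-to-pay and exhaust the link
```

    The appeal of such schemes is exactly what the abstract claims: the price signal alone steers per-flow shares, with no per-flow state or scheduling machinery in the core.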

  17. Aspirating Seal Development: Analytical Modeling and Seal Test Rig

    NASA Technical Reports Server (NTRS)

    Bagepalli, Bharat

    1996-01-01

    This effort is to develop large-diameter (22- to 36-inch) aspirating seals for application in aircraft engines. Stein Seal Co. will be fabricating the 36-inch seal(s) for testing. GE's task is to establish a thorough understanding of the operation of aspirating seals through analytical modeling and full-scale testing. The primary objectives of this project are to develop analytical models of the aspirating seal system; to upgrade, using GE's funds, GE's 50-inch seal test rig for testing the aspirating seal (back-to-back with a corresponding brush seal); to test the aspirating seal(s) for seal closure, tracking, and maneuver transients (tilt) at operating pressures and temperatures; and to validate the analytical model. The objective of the analytical model development is to evaluate the transient and steady-state dynamic performance characteristics of the seal designed by Stein. The transient dynamic model uses a multi-body system approach: the stator, seal face, and rotor are treated as individual bodies with relative degrees of freedom. Initially, the thirty-six springs are represented as a single equivalent spring that holds the aspirating face open. Stops (contact elements) are provided between the stator and the seal (to compensate the preload in the fully open position) and between the rotor face and seal face (to detect rub). The secondary seal is considered part of the stator. The film's load, damping, and stiffness characteristics as functions of pressure and clearance are evaluated using a separate NASA code, GFACE. Initially, a laminar flow theory is used. Special two-dimensional interpolation routines are written to establish exact film load and damping values at each integration time step. Additionally, other user routines are written to read in actual pressure, rpm, stator-growth, and rotor-growth data and, later, to transfer these as appropriate loads/motions in the system-dynamic model.
The transient dynamic model evaluates the various motions, clearances and forces as the seals are subjected to different aircraft maneuvers: Windmilling restart; start-ground idle; ground idle-takeoff; takeoff-burst chop, etc. Results of this model show that the seal closes appropriately and does not ram into the rotor for all of the conditions analyzed. The rig upgrade design for testing Aspirating Seals has been completed. Long lead-time items (forgings, etc.) have been ordered.

  18. GENERALIZED VISCOPLASTIC MODELING OF DEBRIS FLOW.

    USGS Publications Warehouse

    Chen, Cheng-lung

    1988-01-01

    The earliest model, developed by R. A. Bagnold, was based on the concept of the 'dispersive' pressure generated by grain collisions. Some efforts have recently been made by theoreticians in non-Newtonian fluid mechanics to modify or improve Bagnold's concept or model. A viable rheological model should consist of both a rate-independent part and a rate-dependent part. A generalized viscoplastic fluid (GVF) model that has both parts, as well as two major rheological properties (i.e., the normal stress effect and a soil yield criterion), is shown to be sufficiently accurate, yet practical for general use in debris-flow modeling. In fact, Bagnold's model is found to be only a particular case of the GVF model. Analytical solutions for (steady) uniform debris flows in wide channels are obtained from the GVF model based on Bagnold's simplified assumption of constant grain concentration.
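    The two-part structure described above (a rate-independent yield part plus a rate-dependent part) can be illustrated in one dimension with a Herschel-Bulkley-type law; the parameter values below are invented, and Chen's full GVF model is tensorial, with normal-stress effects not captured in this sketch:

```python
def gvf_shear_stress(gamma_dot, tau_yield, mu, n):
    """1-D shear stress [Pa] for a simple viscoplastic law: a
    rate-independent yield part (tau_yield) plus a rate-dependent
    power-law part (mu * gamma_dot**n). For n = 2 the rate-dependent
    part mimics Bagnold's grain-collision (grain-inertia) scaling."""
    if gamma_dot == 0.0:
        return 0.0  # below yield there is no flow; stress is statically indeterminate
    return tau_yield + mu * gamma_dot ** n

# Illustrative values: yield stress 50 Pa, consistency 0.8, shear rate 10 1/s
stress = gvf_shear_stress(10.0, tau_yield=50.0, mu=0.8, n=2.0)
```

    Setting tau_yield to zero and n to 2 recovers a purely collision-dominated, Bagnold-like stress, consistent with Bagnold's model being a particular case of the more general form.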

  19. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1991-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during an earlier cold-compression, or 'coining,' fabrication technique that nonetheless shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  20. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Bartos, Karen F.; Fite, E. B.; Sharp, G. R.

    1992-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWTs) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during an earlier cold compression, or 'coining,' fabrication technique that shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis are presented, together with a comparison of analytical results to test data.

  1. A reputation for success (or failure): the association of peer academic reputations with academic self-concept, effort, and performance across the upper elementary grades.

    PubMed

    Gest, Scott D; Rulison, Kelly L; Davidson, Alice J; Welsh, Janet A

    2008-05-01

    The associations between children's academic reputations among peers and their academic self-concept, effort, and performance were examined in a longitudinal study of 427 students initially enrolled in Grades 3, 4, and 5. Assessments were completed in the fall and spring of 2 consecutive school years and in the fall of a 3rd school year. Peer academic reputation (PAR) correlated moderately strongly with teacher-rated skills and changed over time as a function of grades earned at the prior assessment. Path-analytic models indicated bidirectional associations between PAR and academic self-concept, teacher-rated academic effort, and grade point average. There was little evidence that changes in self-concept mediated the association between PAR and effort and GPA or that changes in effort mediated the association between PAR and GPA. Results suggest that peers may possess unique information about classmates' academic functioning, that children's PARs are psychologically meaningful, and that these reputations may serve as a useful marker of processes that forecast future academic engagement and performance. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  2. COMPARISON OF ANALYTICAL METHODS FOR THE MEASUREMENT OF NON-VIABLE BIOLOGICAL PM

    EPA Science Inventory

    The paper describes a preliminary research effort to develop a methodology for the measurement of non-viable biologically based particulate matter (PM), analyzing for mold, dust mite, and ragweed antigens and endotoxins. Using a comparison of analytical methods, the research obj...

  3. The spatial distribution patterns of condensed phase post-blast explosive residues formed during detonation.

    PubMed

    Abdul-Karim, Nadia; Blackman, Christopher S; Gill, Philip P; Karu, Kersti

    2016-10-05

    The continued usage of explosive devices, as well as the ever-growing threat of 'dirty' bombs, necessitates a comprehensive understanding of particle dispersal during detonation events in order to develop effectual methods for targeting explosive and/or additive remediation efforts. Herein, the distribution of explosive analytes from controlled detonations of aluminised ammonium nitrate and an RDX-based explosive composition were established by systematically sampling sites positioned around each firing. This is the first experimental study to produce evidence that the post-blast residue mass can distribute according to an approximate inverse-square law model, while also demonstrating for the first time that distribution trends can vary depending on individual analytes. Furthermore, by incorporating blast-wave overpressure measurements, high-speed imaging for fireball volume recordings, and monitoring of environmental conditions, it was determined that the principal factor affecting all analyte dispersals was the wind direction, with other factors affecting specific analytes to varying degrees. The dispersal mechanism for explosive residue is primarily the smoke cloud, a finding which in itself has wider impacts on the environment and fundamental detonation theory. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
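The inverse-square trend reported above can be illustrated with a small power-law fit. This is a sketch on synthetic numbers (the radii and masses below are made up, not the paper's data): regressing log(mass) against log(distance) by least squares recovers an exponent near −2 when deposition follows m(r) ∝ r⁻².

```python
import math

def fit_power_law(radii, masses):
    """Least-squares slope and intercept of log(mass) vs log(radius)."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic residue masses following m(r) = m0 / r**2 (illustrative only)
radii = [1.0, 2.0, 4.0, 8.0, 16.0]      # hypothetical sampling distances (m)
masses = [120.0 / r ** 2 for r in radii]  # hypothetical mass collected per site

exponent, _ = fit_power_law(radii, masses)
print(exponent)
```

With real sampling data the fitted exponent would scatter around −2, and per-analyte fits could expose the analyte-dependent trends the study describes.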

  4. A hybrid finite element-transfer matrix model for vibroacoustic systems with flat and homogeneous acoustic treatments.

    PubMed

    Alimonti, Luca; Atalla, Noureddine; Berry, Alain; Sgard, Franck

    2015-02-01

    Practical vibroacoustic systems involve passive acoustic treatments consisting of highly dissipative media such as poroelastic materials. The numerical modeling of such systems at low to mid frequencies typically relies on substructuring methodologies based on finite element models. Namely, the master subsystems (i.e., structural and acoustic domains) are described by a finite set of uncoupled modes, whereas condensation procedures are typically preferred for the acoustic treatments. However, although accurate, such a methodology is computationally expensive when real-life applications are considered. A potential reduction of the computational burden could be obtained by approximating the effect of the acoustic treatment on the master subsystems without introducing physical degrees of freedom. To do that, the treatment has to be assumed homogeneous, flat, and of infinite lateral extent. Under these hypotheses, simple analytical tools like the transfer matrix method can be employed. In this paper, a hybrid finite element-transfer matrix methodology is proposed. The impact of the limiting assumptions inherent within the analytical framework is assessed for the case of plate-cavity systems involving flat and homogeneous acoustic treatments. The results prove that the hybrid model can capture the qualitative behavior of the vibroacoustic system while reducing the computational effort.
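The transfer matrix idea underlying the hybrid method can be sketched for the simplest case: normal-incidence plane waves through one homogeneous fluid-equivalent layer (a toy stand-in for a poroelastic treatment; the thickness, density, and sound speed below are hypothetical, and a real treatment would need an equivalent-fluid or Biot description).

```python
import cmath
import math

def layer_matrix(omega, c, rho, d):
    """2x2 transfer matrix relating (pressure, normal velocity) across a
    homogeneous fluid layer of thickness d at normal incidence."""
    k, Z = omega / c, rho * c
    kd = k * d
    return [[cmath.cos(kd), 1j * Z * cmath.sin(kd)],
            [1j * cmath.sin(kd) / Z, cmath.cos(kd)]]

def transmission_loss(T, Z0):
    """Transmission loss (dB) for a layer between two half-spaces of
    characteristic impedance Z0 (standard four-pole formula)."""
    t = 2.0 / (T[0][0] + T[0][1] / Z0 + T[1][0] * Z0 + T[1][1])
    return -20.0 * math.log10(abs(t))

Z_air = 1.21 * 343.0                 # surrounding medium: air
omega = 2 * math.pi * 1000.0         # evaluate at 1 kHz
# A layer acoustically identical to air is transparent (TL ~ 0 dB) ...
T_same = layer_matrix(omega, 343.0, 1.21, 0.05)
# ... while a hypothetical denser, slower "treatment" layer attenuates.
T_dense = layer_matrix(omega, 150.0, 60.0, 0.05)
print(transmission_loss(T_same, Z_air), transmission_loss(T_dense, Z_air))
```

In the hybrid method, a matrix of this kind replaces the finite element mesh of the treatment, which is where the computational saving comes from.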

  5. On sequential data assimilation for scalar macroscopic traffic flow models

    NASA Astrophysics Data System (ADS)

    Blandin, Sébastien; Couque, Adrien; Bayen, Alexandre; Work, Daniel

    2012-09-01

    We consider the problem of sequential data assimilation for transportation networks using optimal filtering with a scalar macroscopic traffic flow model. Properties of the distribution of the uncertainty on the true state related to the specific nonlinearity and non-differentiability inherent to macroscopic traffic flow models are investigated, derived analytically and analyzed. We show that nonlinear dynamics, by creating discontinuities in the traffic state, affect the performance of classical filters, and in particular that the distribution of the uncertainty on the traffic state at shock waves is a mixture distribution. The non-differentiability of traffic dynamics around stationary shock waves is also proved and the resulting optimality loss of the estimates is quantified numerically. The properties of the estimates are explicitly studied for the Godunov scheme (and thus the Cell-Transmission Model), leading to specific conclusions about their use in the context of filtering, which is a significant contribution of this article. Analytical proofs and numerical tests are introduced to support the results presented. A Java implementation of the classical filters used in this work is available online at http://traffic.berkeley.edu for facilitating further efforts on this topic and fostering reproducible research.
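The Godunov scheme referenced here can be sketched for the scalar LWR model with a concave (Greenshields-type) flux. For such a flux the intercell Godunov flux reduces to the minimum of upstream demand and downstream supply, which is exactly the Cell-Transmission Model rule; the parameters and initial data below are illustrative.

```python
VMAX, RHOMAX = 1.0, 1.0          # hypothetical free-flow speed, jam density
RHOC = RHOMAX / 2.0              # critical density of the concave flux

def flux(rho):
    """Greenshields fundamental diagram f(rho) = v*rho*(1 - rho/rhomax)."""
    return VMAX * rho * (1.0 - rho / RHOMAX)

def godunov_flux(rl, rr):
    """Godunov intercell flux = min(upstream demand, downstream supply)."""
    demand = flux(rl) if rl <= RHOC else flux(RHOC)
    supply = flux(RHOC) if rr <= RHOC else flux(rr)
    return min(demand, supply)

def step(rho, dt, dx):
    """One conservative Godunov update on a periodic road."""
    n = len(rho)
    F = [godunov_flux(rho[i], rho[(i + 1) % n]) for i in range(n)]
    return [rho[i] - dt / dx * (F[i] - F[i - 1]) for i in range(n)]

# Riemann-like initial data: light traffic running into a congested block.
rho = [0.2] * 20 + [0.9] * 20
for _ in range(100):
    rho = step(rho, dt=0.4, dx=1.0)   # CFL: VMAX*dt/dx = 0.4 <= 1
print(sum(rho))                        # total vehicle count is conserved
```

The discontinuities this scheme propagates (the shock between the two density levels) are precisely the states at which the article shows classical filters degrade.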

  6. The use of the analytic hierarchy process to aid decision making in acquired equinovarus deformity.

    PubMed

    van Til, Janine A; Renzenbrink, Gerbert J; Dolan, James G; Ijzerman, Maarten J

    2008-03-01

    To increase the transparency of decision making about treatment in patients with equinovarus deformity poststroke. The analytic hierarchy process (AHP) was used as a structured methodology to study the subjective rationale behind choice of treatment. An 8-hour meeting at a centrally located rehabilitation center in The Netherlands, during which a patient video was shown to all participants (using a personal computer and a large screen) and the patient details were provided on paper. A panel of 10 health professionals from different backgrounds. Not applicable. The performance of the applicable treatments on outcome, impact, comfort, cosmetics, daily effort, and risks and side effects of treatment, as well as the relative importance of criteria in the choice of treatment. According to the model, soft-tissue surgery (.413) ranked first as the preferred treatment, followed by orthopedic footwear (.181), ankle-foot orthosis (.147), surface electrostimulation (.137), and finally implanted electrostimulation (.123). Outcome was the most influential consideration affecting treatment choice (.509), followed by risk and side effects (.194), comfort (.104), daily effort (.098), cosmetics (.065), and impact of treatment (.030). Soft-tissue surgery was judged best on outcome, daily effort, comfortable shoe wear, and cosmetically acceptable result and was thereby preferred as a treatment alternative by the panel in this study. In contrast, orthosis and orthopedic footwear are usually preferred in daily practice. The AHP method was found to be a suitable methodology for eliciting subjective opinions and quantitatively comparing treatments in the absence of scientific evidence.
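The AHP arithmetic behind weights such as .509 or .413 can be sketched: criterion priorities are the principal eigenvector of a reciprocal pairwise-comparison matrix, normalized to sum to one. The 3-criterion matrix below is a made-up example, not the panel's actual judgments.

```python
def ahp_priorities(M, iters=100):
    """Priority vector of a pairwise-comparison matrix via power iteration
    on the principal eigenvector, normalized to sum to 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical judgments on three criteria (outcome, risk, comfort):
# outcome is 3x as important as risk and 5x as important as comfort;
# risk is 2x as important as comfort; reciprocals fill the lower triangle.
M = [[1.0,     3.0,     5.0],
     [1.0 / 3, 1.0,     2.0],
     [1.0 / 5, 1.0 / 2, 1.0]]

w = ahp_priorities(M)
print(w)   # outcome receives the largest priority
```

Scoring each treatment on each criterion the same way and weighting by these priorities yields the overall treatment ranking reported in the study.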

  7. Reverie and metaphor. Some thoughts on how I work as a psychoanalyst.

    PubMed

    Ogden, T

    1997-08-01

    In this paper, the author presents parts of an ongoing internal dialogue concerning how he works as an analyst. He describes the way in which he attempts to sense what is most alive and most real in each analytic encounter, as well as his use of his own reveries in his effort to locate himself in what is going on at an unconscious level in the analytic relationship. The author views each analytic situation as reflecting, to a large degree, a specific type of unconscious intersubjective construction. Since unconscious experience is by definition outside of conscious awareness, the analyst must make use of indirect (associational) methods such as the scrutiny of his own reverie experience in his efforts to 'catch the drift' (Freud, 1923, p. 239) of the unconscious intersubjective constructions being generated. Reveries (and all other derivatives of the unconscious) are viewed not as glimpses into the unconscious, but as metaphorical expressions of what the unconscious experience is like. In the author's experience, when an analysis is 'a going concern', the analytic dialogue often takes the form of a verbal 'squiggle game' (Winnicott, 1971a, p. 3) in which the analytic pair elaborates and modifies the metaphors that the other has unself-consciously introduced. The analytic use of reverie and of the role of metaphor in the analytic experience is clinically illustrated.

  8. Analytical Modeling and Test Correlation of Variable Density Multilayer Insulation for Cryogenic Storage

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Hedayat, A.; Brown, T. M.

    2004-01-01

    A unique foam/multilayer insulation (MLI) combination concept for orbital cryogenic storage was experimentally evaluated using a large-scale hydrogen tank. The foam substrate insulates during ground-hold periods and enables a gaseous nitrogen purge as opposed to helium. The MLI, designed for an on-orbit storage period of 45 days, incorporates several unique features, including a variable layer density and larger but fewer perforations for venting during ascent to orbit. Test results with liquid hydrogen indicated that the MLI weight or tank heat leak is reduced by about half in comparison with standard MLI. The focus of this effort is on analytical modeling of the on-orbit performance of the variable density MLI (VD-MLI). The foam/VD-MLI model is considered to have five segments. The first segment represents the optional foam layer. The second, third, and fourth segments represent three different MLI layer densities. The last segment is an environmental boundary or shroud that surrounds the last MLI layer. Two approaches are considered: a variable density MLI modeled layer by layer and a semiempirical model, or "modified Lockheed equation." Results from the two models were very comparable and were within 5-8 percent of the measured data at the 300 K boundary condition.

  9. Optimal harvesting policy of predator-prey model with free fishing and reserve zones

    NASA Astrophysics Data System (ADS)

    Toaha, Syamsuddin; Rustam

    2017-03-01

    The present paper deals with optimal harvesting in a predator-prey model of an ecosystem that consists of two zones, namely a free fishing zone and a prohibited zone. The prey population can migrate from the free fishing zone to the prohibited zone and vice versa. The predator and prey populations in the free fishing zone are harvested with constant efforts. The existence of the interior equilibrium point is analyzed and its stability is determined using the Routh-Hurwitz stability test. The stable interior equilibrium point is then related to the problems of maximum profit and of the present value of net revenue. We follow Pontryagin's maximum principle to obtain the optimal harvesting policy for the present value of the net revenue. From the analysis, we found a critical point of the efforts that maximizes profit. There also exist conditions on the efforts that maximize the present value of net revenue. In addition, the interior equilibrium point is locally asymptotically stable, which means that the optimal harvesting is reached and the unharvested prey, harvested prey, and harvested predator populations remain sustainable. Numerical examples are given to verify the analytical results.
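For a two-species system the Routh-Hurwitz test reduces to trace(J) < 0 and det(J) > 0 at the equilibrium. A sketch with hypothetical parameter values for a logistic-prey/predator model harvested with constant efforts E1, E2 (a standard textbook form, not the paper's exact two-zone equations):

```python
# Hypothetical parameters (illustrative, not from the paper)
r, K, a = 1.0, 10.0, 0.3            # prey growth rate, capacity, predation rate
b, m = 0.2, 0.4                     # conversion efficiency, predator mortality
q1, q2, E1, E2 = 0.1, 0.1, 1.0, 1.0  # catchabilities and constant efforts

# Interior equilibrium of
#   dx/dt = r x (1 - x/K) - a x y - q1 E1 x
#   dy/dt = b x y - m y - q2 E2 y
x_star = (m + q2 * E2) / b
y_star = (r * (1.0 - x_star / K) - q1 * E1) / a
assert x_star > 0 and y_star > 0      # equilibrium must be interior

# Jacobian evaluated at the equilibrium (the (2,2) entry vanishes there)
J = [[-r * x_star / K, -a * x_star],
     [b * y_star,       0.0]]

trace = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
print(trace, det)   # trace < 0 and det > 0 => locally asymptotically stable
```

Sweeping E1 and E2 through such a check, subject to a profit functional, is the kind of computation behind the critical-effort results reported above.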

  10. Historical review of missile aerodynamic developments

    NASA Technical Reports Server (NTRS)

    Spearman, M. Leroy

    1989-01-01

    A comprehensive development history to about 1970 is presented for missile technologies and their associated capabilities and difficulties. Attention is given to the growth of an experimental data base for missile design, as well as to the critical early efforts to develop analytical methods applicable to missiles. Most of the important missile development efforts made during the period from the end of the Second World War to the early 1960s were based primarily on experiences gained through wind tunnel and flight testing; analytical techniques began to demonstrate their usefulness in the design process only in the late 1960s.

  11. Dynamic Impact Testing and Model Development in Support of NASA's Advanced Composites Program

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.; Pereira, J. Michael; Goldberg, Robert; Rassaian, Mostafa

    2018-01-01

    The purpose of this paper is to provide an executive overview of the HEDI effort for NASA's Advanced Composites Program and establish the foundation for the remaining papers to follow in the 2018 SciTech special session NASA ACC High Energy Dynamic Impact. The paper summarizes the work done for the Advanced Composites Program to advance our understanding of the behavior of composite materials during high energy impact events and to advance the ability of analytical tools to provide predictive simulations. The experimental program carried out at GRC is summarized and a status on the current development state for MAT213 will be provided. Future work will be discussed as the HEDI effort transitions from fundamental analysis and testing to investigating sub-component structural concept response to impact events.

  12. Issues in the Pharmacokinetics of Trichloroethylene and Its Metabolites

    PubMed Central

    Chiu, Weihsueh A.; Okino, Miles S.; Lipscomb, John C.; Evans, Marina V.

    2006-01-01

    Much progress has been made in understanding the complex pharmacokinetics of trichloroethylene (TCE). Qualitatively, it is clear that TCE is metabolized to multiple metabolites either locally or into systemic circulation. Many of these metabolites are thought to have toxicologic importance. In addition, efforts to develop physiologically based pharmacokinetic (PBPK) models have led to a better quantitative assessment of the dosimetry of TCE and several of its metabolites. As part of a mini-monograph on key issues in the health risk assessment of TCE, this article is a review of a number of the current scientific issues in TCE pharmacokinetics and recent PBPK modeling efforts with a focus on literature published since 2000. Particular attention is paid to factors affecting PBPK modeling for application to risk assessment. Recent TCE PBPK modeling efforts, coupled with methodologic advances in characterizing uncertainty and variability, suggest that rigorous application of PBPK modeling to TCE risk assessment appears feasible, at least for TCE and its major oxidative metabolites trichloroacetic acid and trichloroethanol. However, a number of basic structural hypotheses such as enterohepatic recirculation, plasma binding, and flow- or diffusion-limited treatment of tissue distribution require additional evaluation and analysis. Moreover, there are a number of metabolites of potential toxicologic interest, such as chloral, dichloroacetic acid, and those derived from glutathione conjugation, for which reliable pharmacokinetic data are sparse because of analytical difficulties or low concentrations in systemic circulation. It will be a challenge to develop reliable dosimetry for such cases. PMID:16966104

  13. Final Technical Report - SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnack, Dalton D.

    Final technical report for research performed by Dr. Thomas G. Jenkins in collaboration with Professor Dalton D. Schnack on SciDAC Cooperative Agreement: Center for Wave Interactions with Magnetohydrodynamics, DE-FC02-06ER54899, for the period of 8/15/06 - 8/14/11. This report centers on the Slow MHD physics campaign work performed by Dr. Jenkins while at UW-Madison and then at Tech-X Corporation. To make progress on the problem of how RF-induced currents affect magnetic island evolution in toroidal plasmas, a set of research approaches is outlined. Three approaches can be addressed in parallel. These are: (1) analytically prescribe an additional term in Ohm's law to model the effect of localized ECCD current drive; (2) introduce an additional evolution equation for the Ohm's law source term, establishing an RF source 'box' where information from the RF code couples to the fluid evolution; and (3) carry out a more rigorous analytic calculation treating the additional RF terms in a closure problem. These approaches rely on the necessity of reinvigorating the computational modeling efforts of resistive and neoclassical tearing modes with present-day versions of the numerical tools. For the RF community, the relevant action item is that RF ray tracing codes need to be modified so that general three-dimensional spatial information can be obtained. Further, interface efforts between the two codes require work, as well as an assessment of the numerical stability properties of the procedures to be used.

  14. (U) An Analytic Examination of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-02

    Ongoing efforts to validate a Richtmyer-Meshkov instability (RMI) based ejecta source model [1, 2, 3] in LANL ASC codes use ejecta areal masses derived from piezoelectric sensor data [4, 5, 6]. However, the standard technique for inferring masses from sensor voltages implicitly assumes instantaneous ejecta creation [7], which is not a feature of the RMI source model. To investigate the impact of this discrepancy, we define separate “areal mass functions” (AMFs) at the source and sensor in terms of typically unknown distribution functions for the ejecta particles, and derive an analytic relationship between them. Then, for the case of single-shock ejection into vacuum, we use the AMFs to compare the analytic (or “true”) accumulated mass at the sensor with the value that would be inferred from piezoelectric voltage measurements. We confirm the inferred mass is correct when creation is instantaneous, and furthermore prove that when creation is not instantaneous, the inferred values will always overestimate the true mass. Finally, we derive an upper bound for the error imposed on a perfect system by the assumption of instantaneous ejecta creation. When applied to shots in the published literature, this bound is frequently less than several percent. Errors exceeding 15% may require velocities or timescales at odds with experimental observations.

  15. Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge

    USGS Publications Warehouse

    Jaiswal, K.S.; Wald, D.J.

    2012-01-01

    We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.

  16. Moving from Descriptive to Causal Analytics: Case Study of the Health Indicators Warehouse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, Jack C.; Shankar, Mallikarjun; Xu, Songhua

    The KDD community has described a multitude of methods for knowledge discovery on large datasets. We consider some of these methods and integrate them into an analyst's workflow that proceeds from the data-centric descriptive level to the model-centric causal level. Examples of the workflow are shown for the Health Indicators Warehouse, which is a public database for community health information that is a potent resource for conducting data science on a medium scale. We demonstrate the potential of HIW as a source of serious visual analytics efforts by showing correlation matrix visualizations, multivariate outlier analysis, multiple linear regression of Medicare costs, and scatterplot matrices for a broad set of health indicators. We conclude by sketching the first steps toward a causal dependence hypothesis.

  17. Guiding Early and Often: Using Curricular and Learning Analytics to Shape Teaching, Learning, and Student Success in Gateway Courses

    ERIC Educational Resources Information Center

    Pistilli, Matthew D.; Heileman, Gregory L.

    2017-01-01

    This chapter provides information on how the promise of analytics can be realized in gateway courses through a combination of good data science and the thoughtful application of outcomes to teaching and learning improvement efforts--especially with and among instructors.

  18. Activities in support of the wax-impregnated wallboard concept

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kedl, R.J.; Stovall, T.K.

    1989-01-01

    The concept of octadecane wax-impregnated wallboard for the passive solar application is a major thrust of the Oak Ridge National Laboratory (ORNL) Thermal Energy Storage (TES) program. Thus, ORNL has initiated a number of internal efforts in support of this concept. The results of these efforts are: the immersion process for filling wallboard with wax has been successfully scaled up from small samples to full-size sheets; analysis shows that the immersion process has the potential for achieving higher storage capacity than adding wax-filled pellets to wallboard during its manufacture; analysis indicates that 75°F is close to an optimum phase change temperature for the non-passive solar application; and the thermal conductivity of wallboard without wax has been measured and will be measured for wax-impregnated wallboard. In addition, efforts are underway to confirm an analytical model that handles phase change wallboard for the passive solar application. 4 refs., 10 figs.

  19. Galaxy Alignments: Theory, Modelling & Simulations

    NASA Astrophysics Data System (ADS)

    Kiessling, Alina; Cacciato, Marcello; Joachimi, Benjamin; Kirk, Donnacha; Kitching, Thomas D.; Leonard, Adrienne; Mandelbaum, Rachel; Schäfer, Björn Malte; Sifón, Cristóbal; Brown, Michael L.; Rassat, Anais

    2015-11-01

    The shapes of galaxies are not randomly oriented on the sky. During the galaxy formation and evolution process, environment has a strong influence, as tidal gravitational fields in the large-scale structure tend to align nearby galaxies. Additionally, events such as galaxy mergers affect the relative alignments of both the shapes and angular momenta of galaxies throughout their history. These "intrinsic galaxy alignments" are known to exist, but are still poorly understood. This review will offer a pedagogical introduction to the current theories that describe intrinsic galaxy alignments, including the apparent difference in intrinsic alignment between early- and late-type galaxies and the latest efforts to model them analytically. It will then describe the ongoing efforts to simulate intrinsic alignments using both N-body and hydrodynamic simulations. Due to the relative youth of this field, there is still much to be done to understand intrinsic galaxy alignments and this review summarises the current state of the field, providing a solid basis for future work.

  20. Spacecraft environmental interactions: A joint Air Force and NASA research and technology program

    NASA Technical Reports Server (NTRS)

    Pike, C. P.; Purvis, C. K.; Hudson, W. R.

    1985-01-01

    A joint Air Force/NASA comprehensive research and technology program on spacecraft environmental interactions to develop technology to control interactions between large spacecraft systems and the charged-particle environment of space is described. This technology will support NASA/Department of Defense operations of the shuttle/IUS, shuttle/Centaur, and the force application and surveillance and detection missions, planning for transatmospheric vehicles and the NASA space station, and the AFSC military space system technology model. The program consists of combined contractual and in-house efforts aimed at understanding spacecraft environmental interaction phenomena and relating results of ground-based tests to space conditions. A concerted effort is being made to identify project-related environmental interactions of concern. The basic properties of materials are being investigated to develop or modify the materials as needed. A ground simulation investigation is evaluating basic plasma interaction phenomena to provide inputs to the analytical modeling investigation. Systems performance is being evaluated by both ground-based tests and analysis.

  1. On the evaluation of derivatives of Gaussian integrals

    NASA Technical Reports Server (NTRS)

    Helgaker, Trygve; Taylor, Peter R.

    1992-01-01

    We show that by a suitable change of variables, the derivatives of molecular integrals over Gaussian-type functions required for analytic energy derivatives can be evaluated with significantly less computational effort than current formulations. The reduction in effort increases with the order of differentiation.
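The quantity being differentiated can be illustrated in one dimension: the overlap of two unnormalized s-type Gaussians exp(−a(x−A)²) and exp(−b(x−B)²) is S = √(π/(a+b)) exp(−μ(A−B)²) with μ = ab/(a+b), so ∂S/∂A = −2μ(A−B)S. The sketch below numerically checks that textbook derivative; it illustrates what an "analytic integral derivative" is, not the paper's change-of-variables scheme, and the exponents and centers are arbitrary.

```python
import math

def overlap(a, A, b, B):
    """1D overlap integral of two unnormalized s-type Gaussians."""
    mu = a * b / (a + b)
    return math.sqrt(math.pi / (a + b)) * math.exp(-mu * (A - B) ** 2)

def d_overlap_dA(a, A, b, B):
    """Analytic derivative of the overlap with respect to center A."""
    mu = a * b / (a + b)
    return -2.0 * mu * (A - B) * overlap(a, A, b, B)

# Arbitrary exponents and centers for the check
a, A, b, B = 0.8, 0.3, 1.5, -0.4
h = 1e-6
numeric = (overlap(a, A + h, b, B) - overlap(a, A - h, b, B)) / (2 * h)
print(d_overlap_dA(a, A, b, B), numeric)
```

The analytic and central-difference values agree to many digits; the paper's contribution is reorganizing such derivatives, across many integrals and higher angular momenta, so they cost less to evaluate.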

  2. Variation in hematologic and serum biochemical values of belugas (Delphinapterus leucas) under managed care.

    PubMed

    Norman, Stephanie A; Beckett, Laurel A; Miller, Woutrina A; St Leger, Judy; Hobbs, Roderick C

    2013-06-01

    Blood analytes are critical for evaluating the general health of cetacean populations, so it is important to understand the intrinsic variability of hematology and serum chemistry values. Previous studies have reported data for follow-up periods of several years in managed and wild populations, but studies over long periods of time (> 20 yr) have not been reported. The study objective was to identify the influences of partitioning characteristics on hematology and serum chemistry analytes of apparently healthy managed beluga (Delphinapterus leucas). Blood values from 31 managed belugas, at three facilities, collected over 22 yr, were assessed for seasonal variation and aging trends, and evaluated for biologic variation among and within individuals. Linear mixed effects models assessed the relationship between the analytes and sex, age, season, facility location, ambient air temperature, and photoperiod. Sex differences in analytes and associations with increasing age were observed. Seasonal variation was observed for hemoglobin, hematocrit, mean corpuscular volume, monocytes, alkaline phosphatase, total bilirubin, cholesterol, and triglycerides. Facilities were associated with larger effects on analyte values compared to other covariates, whereas age, sex, and ambient temperature had smaller effects compared to facility and season. Present findings provide important baseline information for future health monitoring efforts. Interpretation of blood analytes and animal health in managed and wild populations over time is aided by having available typical levels for the species and reference intervals for the degree to which individual animals vary from the species average and from their own baseline levels during long-term monitoring.

  3. Equivalent circuit modeling of a piezo-patch energy harvester on a thin plate with AC-DC conversion

    NASA Astrophysics Data System (ADS)

    Bayik, B.; Aghakhani, A.; Basdogan, I.; Erturk, A.

    2016-05-01

    As an alternative to beam-like structures, piezoelectric patch-based energy harvesters attached to thin plates can be readily integrated into plate-like structures in automotive, marine, and aerospace applications, in order to directly exploit structural vibration modes of the host system without the mass loading and volumetric occupancy of cantilever attachments. In this paper, a multi-mode equivalent circuit model of a piezo-patch energy harvester integrated to a thin plate is developed and coupled with a standard AC-DC conversion circuit. Equivalent circuit parameters are obtained in two different ways: (1) from the modal analysis solution of a distributed-parameter analytical model and (2) from the finite-element numerical model of the harvester by accounting for two-way coupling. After the analytical modeling effort, a multi-mode equivalent circuit representation of the harvester is obtained via the electronic circuit simulation software SPICE. Using the SPICE software, the electromechanical response of the piezoelectric energy harvester connected to linear and nonlinear circuit elements is computed. Simulation results are validated for the standard AC-AC and AC-DC configurations. For the AC input-AC output problem, voltage frequency response functions are calculated for various resistive loads, and they show excellent agreement with the modal analysis-based analytical closed-form solution and with the finite-element model. For the standard ideal AC input-DC output case, a full-wave rectifier and a smoothing capacitor are added to the harvester circuit for conversion of the AC voltage to a stable DC voltage, which is also validated against an existing solution by treating the single-mode plate dynamics as a single-degree-of-freedom system.
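The equivalent-circuit idea can be sketched with a single-mode lumped model: a mass-spring-damper coupled through an electromechanical coefficient θ to a piezoelectric capacitance C_p and resistive load R, solved directly in the frequency domain. All parameter values below are hypothetical, and the sign conventions follow the common lumped harvester model rather than the paper's multi-mode SPICE netlist.

```python
import cmath
import math

# Hypothetical single-mode harvester parameters
m, c, k = 0.01, 0.05, 4000.0      # mass (kg), damping (N·s/m), stiffness (N/m)
theta, Cp = 1e-3, 50e-9           # coupling (N/V), piezo capacitance (F)
F0 = 0.1                          # harmonic forcing amplitude (N)

def voltage_amplitude(omega, R):
    """|V| across a resistive load R for the coupled lumped model
       m x'' + c x' + k x - theta v = F0 cos(wt)
       theta x' + Cp v' + v / R = 0
    solved by complex phasor algebra."""
    Ye = 1j * omega * Cp + 1.0 / R             # electrical admittance
    Zm = k - m * omega ** 2 + 1j * omega * c   # mechanical dynamic stiffness
    X = F0 / (Zm + 1j * omega * theta ** 2 / Ye)  # displacement phasor
    V = -1j * omega * theta * X / Ye              # voltage phasor
    return abs(V)

w_n = math.sqrt(k / m)   # short-circuit natural frequency
# Near short circuit (tiny R) the voltage collapses; a sizeable load
# near resonance extracts a large AC voltage.
print(voltage_amplitude(w_n, 1e4), voltage_amplitude(w_n, 1e-3))
```

A multi-mode equivalent circuit, as in the paper, simply stacks one such mechanical branch per structural mode across the same electrical port, which is what makes the SPICE representation convenient for adding rectifiers and smoothing capacitors.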

  4. Tidally averaged circulation in Puget Sound sub-basins: Comparison of historical data, analytical model, and numerical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khangaonkar, Tarang; Yang, Zhaoqing; Kim, Tae Yun

    2011-07-20

    Through extensive field data collection and analysis efforts conducted since the 1950s, researchers have established an understanding of the characteristic features of circulation in Puget Sound. The pattern ranges from the classic fjordal behavior in some basins, with shallow brackish outflow and compensating inflow immediately below, to the typical two-layer flow observed in many partially mixed estuaries with saline inflow at depth. An attempt at reproducing this behavior by fitting an analytical formulation to past data is presented, followed by the application of a three-dimensional circulation and transport numerical model. The analytical treatment helped identify key physical processes and parameters, but quickly reconfirmed that the response is complex and would require site-specific parameterization to include effects of sills and interconnected basins. The numerical model of Puget Sound, developed using an unstructured-grid finite-volume method, allowed resolution of the sub-basin geometric features, including the presence of major islands, and site-specific strong advective vertical mixing created by bathymetry and multiple sills. The model was calibrated using available recent short-term oceanographic time series data sets from different parts of the Puget Sound basin. The results are compared against (1) recent velocity and salinity data collected in Puget Sound in 2006 and (2) a composite data set from previously analyzed historical records, mostly from the 1970s. The results highlight the ability of the model to reproduce velocity and salinity profile characteristics, their variations among Puget Sound sub-basins, and tidally averaged circulation. Sensitivity of residual circulation to variations in freshwater inflow and the resulting salinity gradient in fjordal sub-basins of Puget Sound is examined.

  5. Frequency Response Function Based Damage Identification for Aerospace Structures

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph Acton

    Structural health monitoring technologies continue to be pursued for aerospace structures in the interests of increased safety and, when combined with health prognosis, efficiency in life-cycle management. The current dissertation develops and validates damage identification technology as a critical component for structural health monitoring of aerospace structures and, in particular, composite unmanned aerial vehicles. The primary innovation is a statistical least-squares damage identification algorithm based on concepts of parameter estimation and model update. The algorithm uses frequency response function based residual force vectors derived from distributed vibration measurements to update a structural finite element model through statistically weighted least-squares minimization, producing location and quantification of the damage, estimation uncertainty, and an updated model. Advantages compared to other approaches include robust applicability to systems that are heavily damped, large, and noisy, with a relatively low number of distributed measurement points compared to the number of analytical degrees-of-freedom of an associated analytical structural model (e.g., modal finite element model). Motivation, research objectives, and a dissertation summary are discussed in Chapter 1, followed by a literature review in Chapter 2. Chapter 3 gives background theory and the damage identification algorithm derivation, followed by a study of fundamental algorithm behavior on a two degree-of-freedom mass-spring system with generalized damping. Chapter 4 investigates the impact of noise, then successfully proves the algorithm against competing methods using an analytical eight degree-of-freedom mass-spring system with non-proportional structural damping.
Chapter 5 extends use of the algorithm to finite element models, including solutions for numerical issues, approaches for modeling damping approximately in reduced coordinates, and analytical validation using a composite sandwich plate model. Chapter 6 presents the final extension to experimental systems, including methods for initial baseline correlation and data reduction, and validates the algorithm on an experimental composite plate with impact damage. The final chapter deviates from development and validation of the primary algorithm to discuss development of an experimental scaled-wing test bed as part of a collaborative effort for developing structural health monitoring and prognosis technology. The dissertation concludes with an overview of technical conclusions and recommendations for future work.
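
The statistically weighted least-squares update at the heart of such model-update algorithms is, in essence, a Gauss-Newton iteration on a weighted residual, with the weighting matrix also providing the route to estimation uncertainty. A generic sketch with hypothetical interfaces (not the dissertation's code):

```python
import numpy as np

def weighted_lsq_update(residual, jacobian, p, W, n_iter=20):
    """Gauss-Newton minimisation of r(p)' W r(p), the statistically
    weighted least-squares step used in model-update damage identification.
    residual(p) -> residual vector r; jacobian(p) -> dr/dp matrix J.
    The parameter covariance estimate is (J' W J)^-1 at the solution."""
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        A = J.T @ W @ J
        p = p - np.linalg.solve(A, J.T @ W @ r)
    cov = np.linalg.inv(J.T @ W @ J)   # estimation uncertainty
    return p, cov
```

In the damage-identification setting, `p` would hold element stiffness (damage) parameters and `residual` the FRF-based residual force vector; here both are left abstract.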

  6. Thermal fatigue durability for advanced propulsion materials

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.

    1989-01-01

    A review is presented of thermal and thermomechanical fatigue (TMF) crack initiation life prediction and cyclic constitutive modeling efforts sponsored recently by the NASA Lewis Research Center in support of advanced aeronautical propulsion research. A brief description is provided of the more significant material durability models that were created to describe TMF fatigue resistance of both isotropic and anisotropic superalloys, with and without oxidation resistant coatings. The two most significant crack initiation models are the cyclic damage accumulation model and the total strain version of strain-range partitioning. Unified viscoplastic cyclic constitutive models are also described. A troika of industry, university, and government research organizations contributed to the generation of these analytic models. Based upon current capabilities and established requirements, an attempt is made to project which TMF research activities most likely will impact future generation propulsion systems.

  7. Step voltage analysis for the catenoid lightning protection system

    NASA Technical Reports Server (NTRS)

    Chai, J. C.; Briet, R.; Barker, D. L.; Eley, H. E.

    1991-01-01

    The main objective of the proposed overhead Catenoid Lightning Protection System (CLPS) is personnel safety. To protect personnel in lightning situations, the potential difference developed across a distance equal to a person's pace (the step voltage) must not exceed a separately established safe voltage, beyond which electrocution (ventricular fibrillation) becomes possible. Therefore, the first stage of the analytical effort is to calculate the open-circuit step voltage. An impedance model is developed for this purpose. It takes into consideration the earth's complex impedance behavior and the transient nature of the lightning phenomenon. In the low-frequency limit, this impedance model is shown to reduce to results similar to those predicted by the conventional resistor model in a DC analysis.

  8. Technical Evaluation Report of the Aerospace Medical Panel Working Group WG-08 on Evaluation of Methods to Assess Workload.

    DTIC Science & Technology

    1980-11-01

    [Only fragments of this DTIC record survive: pieces of a table rating workload-assessment methods (occlusion; single- and multiple-measure primary task; math modeling) and two quotations, including "... modeling methodology; and (4) validation of the analytic/predictive methodology in a system design, development, and test effort." and, from Chapter 9, "A central..."]

  9. Dynamic Analyses Including Joints Of Truss Structures

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith

    1991-01-01

    Method for mathematically modeling joints to assess influences of joints on dynamic response of truss structures developed in study. Only structures with low-frequency oscillations considered; only Coulomb friction and viscous damping included in analysis. Focus of effort to obtain finite-element mathematical models of joints exhibiting load-vs.-deflection behavior similar to measured load-vs.-deflection behavior of real joints. Experiments performed to determine stiffness and damping nonlinearities typical of joint hardware. Algorithm for computing coefficients of analytical joint models based on test data developed to enable study of linear and nonlinear effects of joints on global structural response. Besides intended application to large space structures, applications in nonaerospace community include ground-based antennas and earthquake-resistant steel-framed buildings.

  10. The Ophidia Stack: Toward Large Scale, Big Data Analytics Experiments for Climate Change

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Williams, D. N.; D'Anca, A.; Nassisi, P.; Aloisio, G.

    2015-12-01

    The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in multiple domains (e.g. climate change). It provides a "datacube-oriented" framework responsible for atomically processing and manipulating scientific datasets, by providing a common way to run distributive tasks on large sets of data fragments (chunks). Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes. The project relies on a strong background in high performance database management and On-Line Analytical Processing (OLAP) systems to manage large scientific datasets. The Ophidia analytics platform provides several data operators to manipulate datacubes (about 50), and array-based primitives (more than 100) to perform data analysis on large scientific data arrays. To address interoperability, Ophidia provides multiple server interfaces (e.g. OGC-WPS). From a client standpoint, a Python interface enables the exploitation of the framework in Python-based ecosystems/applications (e.g. IPython) and the straightforward adoption of a strong set of related libraries (e.g. SciPy, NumPy). The talk will highlight a key feature of the Ophidia framework stack: the "Analytics Workflow Management System" (AWfMS). The Ophidia AWfMS coordinates, orchestrates, optimises and monitors the execution of multiple scientific data analytics and visualization tasks, thus supporting "complex analytics experiments". Some real use cases related to the CMIP5 experiment will be discussed. In particular, with regard to the "Climate models intercomparison data analysis" case study proposed in the EU H2020 INDIGO-DataCloud project, workflows related to (i) anomalies, (ii) trend, and (iii) climate change signal analysis will be presented.
Such workflows will be distributed across multiple sites, according to the distribution of the datasets, and will include intercomparison, ensemble, and outlier analysis. The two-level workflow solution envisioned in INDIGO (coarse grain for distributed task orchestration, and fine grain at the level of a single data analytics cluster instance) will be presented and discussed.

  11. Wildland Fire Prevention: Today, Intuition--Tomorrow, Management

    Treesearch

    Albert J. Simard; Linda R. Donoghue

    1987-01-01

    Describes, from a historical perspective, methods used to characterize fire prevention problems and evaluate prevention programs and discusses past research efforts to bolster these analytical and management efforts. Highlights research on the sociological perspectives of the wildfire problem and on quantitative fire occurrence prediction and program evaluation systems...

  12. Structural Dynamics Modeling of HIRENASD in Support of the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Wieseman, Carol; Chwalowski, Pawel; Heeg, Jennifer; Boucke, Alexander; Castro, Jack

    2013-01-01

    An Aeroelastic Prediction Workshop (AePW) was held in April 2012 using three aeroelasticity case study wind tunnel tests for assessing the capabilities of various codes in making aeroelasticity predictions. One of these case studies was known as the HIRENASD model, which was tested in the European Transonic Wind Tunnel (ETW). This paper summarizes the development of a standardized, enhanced analytical HIRENASD structural model for use in the AePW effort. The modifications to the HIRENASD finite element model were validated by comparing modal frequencies, evaluating modal assurance criteria, comparing leading-edge, trailing-edge, and twist deflections of the wing with experiment, and by performing steady and unsteady CFD analyses for one of the test conditions on the same grid with identical processing of results.

  13. Analytical performance of 17 general chemistry analytes across countries and across manufacturers in the INPUtS project of EQA organizers in Italy, the Netherlands, Portugal, United Kingdom and Spain.

    PubMed

    Weykamp, Cas; Secchiero, Sandra; Plebani, Mario; Thelen, Marc; Cobbaert, Christa; Thomas, Annette; Jassam, Nuthar; Barth, Julian H; Perich, Carmen; Ricós, Carmen; Faria, Ana Paula

    2017-02-01

    Optimum patient care in relation to laboratory medicine is achieved when results of laboratory tests are equivalent, irrespective of the analytical platform used or the country where the laboratory is located. Standardization and harmonization minimize differences and the success of efforts to achieve this can be monitored with international category 1 external quality assessment (EQA) programs. An EQA project with commutable samples, targeted with reference measurement procedures (RMPs), was organized by EQA institutes in Italy, the Netherlands, Portugal, UK, and Spain. Results of 17 general chemistry analytes were evaluated across countries and across manufacturers according to performance specifications derived from biological variation (BV). For K, uric acid, glucose, cholesterol and high-density lipoprotein (HDL) cholesterol, the minimum performance specification was met in all countries and by all manufacturers. For Na, Cl, and Ca, the minimum performance specifications were met by none of the countries and manufacturers. For enzymes, the situation was complicated, as standardization of results of enzymes toward RMPs was still not achieved in 20% of the laboratories and questionable in the remaining 80%. The overall performance of the measurement of 17 general chemistry analytes in European medical laboratories met the minimum performance specifications. In this general picture, there were no significant differences per country and no significant differences per manufacturer. There were major differences between the analytes. There were six analytes for which the minimum quality specifications were not met and manufacturers should improve their performance for these analytes. Standardization of results of enzymes requires ongoing efforts.

  14. Analytical solutions for efficient interpretation of single-well push-pull tracer tests

    NASA Astrophysics Data System (ADS)

    Huang, Junqi; Christ, John A.; Goltz, Mark N.

    2010-08-01

    Single-well push-pull tracer tests have been used to characterize the extent, fate, and transport of subsurface contamination. Analytical solutions provide one alternative for interpreting test results. In this work, an exact analytical solution to two-dimensional equations describing the governing processes acting on a dissolved compound during a modified push-pull test (advection, longitudinal and transverse dispersion, first-order decay, and rate-limited sorption/partitioning in steady, divergent, and convergent flow fields) is developed. The coupling of this solution with inverse modeling to estimate aquifer parameters provides an efficient methodology for subsurface characterization. Synthetic data for single-well push-pull tests are employed to demonstrate the utility of the solution for determining (1) estimates of aquifer longitudinal and transverse dispersivities, (2) sorption distribution coefficients and rate constants, and (3) non-aqueous phase liquid (NAPL) saturations. Using the solution to estimate NAPL saturations from partitioning and non-partitioning tracers overcomes a limitation of previous efforts by including rate-limited mass transfer. This solution provides a new tool for use by practitioners when interpreting single-well push-pull test results.

  15. Comparison and validation of point spread models for imaging in natural waters.

    PubMed

    Hou, Weilin; Gray, Deric J; Weidemann, Alan D; Arnone, Robert A

    2008-06-23

    It is known that scattering by particulates within natural waters is the main cause of the blur in underwater images. Underwater images can be better restored or enhanced with knowledge of the point spread function (PSF) of the water. This will extend the performance range as well as the information retrieval from underwater electro-optical systems, which is critical in many civilian and military applications, including target and especially mine detection, search and rescue, and diver visibility. A better understanding of the physical process involved also helps to predict system performance and simulate it accurately on demand. The presented effort first reviews several PSF models, including the introduction of a semi-analytical PSF given optical properties of the medium: scattering albedo, mean scattering angle, and optical range. The models under comparison include the empirical model of Duntley, a modified PSF model by Dolin et al., as well as the numerical integration of analytical forms from Wells, as a benchmark of theoretical results. For experimental results, in addition to that of Duntley, we validate the above models with measured point spread functions by applying field-measured scattering properties with Monte Carlo simulations. Results from these comparisons suggest that the three parameters listed above are both sufficient and necessary to model PSFs. The simplified approach introduced also provides adequate accuracy and flexibility for imaging applications, as shown by examples of restored underwater images.

  16. How do challenges increase customer loyalty to online games?

    PubMed

    Teng, Ching-I

    2013-12-01

    Despite the design of various challenge levels in online games, exactly how these challenges increase customer loyalty to online games has seldom been examined. This study investigates how such challenges increase customer loyalty to online games. The study sample comprises 2,861 online gamers. Structural equation modeling is performed. Analytical results indicate that the relationship between challenge and loyalty intensifies when customers perceive that overcoming challenges takes a long time. Results of this study contribute to efforts to determine how challenges and challenge-related perceptions impact customer loyalty to online games.

  17. Multiloop Manual Control of Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Mcnally, B. D.

    1984-01-01

    Human interaction with a simple, multiloop dynamic system in which the human's activity was systematically varied by changing the levels of automation was studied. The control loop structure resulting from the task definition parallels that of any multiloop manual control system and is considered a stereotype. Simple models of the human in the task were developed, and a technique for describing the manner in which the human subjectively quantifies his opinion of task difficulty was extended. A man-in-the-loop simulation which provides data to support and direct the analytical effort is presented.

  18. Does Couple and Relationship Education Work for Individuals in Stepfamilies? A Meta-Analytic Study

    ERIC Educational Resources Information Center

    Lucier-Greer, Mallory; Adler-Baeder, Francesca

    2012-01-01

    Recent meta-analytic efforts have documented how couple and relationship education (CRE) programs promote healthy relationship and family functioning. The current meta-analysis contributes to this body of literature by examining stepfamily couples, an at-risk, subpopulation of participants, and assessing the effectiveness of CRE programs for…

  19. Development of a robust space power system decision model

    NASA Astrophysics Data System (ADS)

    Chew, Gilbert; Pelaccio, Dennis G.; Jacobs, Mark; Stancati, Michael; Cataldo, Robert

    2001-02-01

    NASA continues to evaluate power systems to support human exploration of the Moon and Mars. The system(s) would address all power needs of surface bases and on-board power for space transfer vehicles. Prior studies have examined both solar and nuclear-based alternatives with respect to individual issues such as sizing or cost. What has not been addressed is a comprehensive look at the risks and benefits of the options that could serve as the analytical framework to support a system choice that best serves the needs of the exploration program. This paper describes the SAIC-developed Space Power System Decision Model, which uses a formal Two-Step Analytical Hierarchy Process (TAHP) methodology in the decision-making process to clearly distinguish candidate power systems in terms of benefits, safety, and risk. TAHP is a decision-making process based on the Analytical Hierarchy Process, which employs a hierarchic approach of structuring decision factors by weights and relatively ranks system design options on a consistent basis. This decision process also includes a level of data gathering and organization that produces a consistent, well-documented assessment, from which the capability of each power system option to meet top-level goals can be prioritized. The model defined in this effort focuses on the comparative assessment of candidate power system options for Mars surface applications. This paper describes the principles of this approach, the assessment criteria and weighting procedures, and the tools to capture and assess the expert knowledge associated with space power system evaluation.
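
The Analytical Hierarchy Process underlying TAHP reduces, for each pairwise-comparison matrix, to a principal-eigenvector computation plus a consistency check. A minimal sketch with illustrative criteria weights (not the model's actual data):

```python
import numpy as np

def ahp_priorities(A):
    """Priority weights from a pairwise-comparison matrix A (Saaty's AHP):
    the principal eigenvector, normalised to sum to 1, plus the consistency
    index CI = (lambda_max - n) / (n - 1); CI near 0 means consistent
    judgments."""
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)           # principal eigenvalue
    w = np.abs(vecs[:, i].real)
    w = w / w.sum()                    # normalise to a weight vector
    n = A.shape[0]
    ci = (vals[i].real - n) / (n - 1)
    return w, ci
```

For a perfectly consistent matrix (A[i][j] = w[i]/w[j]) the method recovers the weights exactly and CI is zero; real expert judgments yield a small positive CI.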

  20. Monitoring Affect States during Effortful Problem Solving Activities

    ERIC Educational Resources Information Center

    D'Mello, Sidney K.; Lehman, Blair; Person, Natalie

    2010-01-01

    We explored the affective states that students experienced during effortful problem solving activities. We conducted a study where 41 students solved difficult analytical reasoning problems from the Law School Admission Test. Students viewed videos of their faces and screen captures and judged their emotions from a set of 14 states (basic…

  1. Thermal and Fluid Modeling of the CRYogenic Orbital TEstbed (CRYOTE) Ground Test Article (GTA)

    NASA Technical Reports Server (NTRS)

    Piryk, David; Schallhorn, Paul; Walls, Laurie; Stopnitzky, Benny; Rhys, Noah; Wollen, Mark

    2012-01-01

    The purpose of this study was to anchor thermal and fluid system models to data acquired from a ground test article (GTA) for the CRYogenic Orbital TEstbed (CRYOTE). The analysis was broken into four primary tasks: model development, pre-test predictions, testing support at Marshall Space Flight Center (MSFC), and post-test correlations. Information from MSFC facilitated the task of refining and correlating the initial models. The primary goal of the modeling/testing/correlating efforts was to characterize heat loads throughout the ground test article. Significant factors impacting the heat loads included radiative environments, multi-layer insulation (MLI) performance, tank fill levels, tank pressures, and even contact conductance coefficients. This paper demonstrates how analytical thermal/fluid networks were established, and it includes supporting rationale for specific thermal responses seen during testing.

  2. Interpretation of F106B and CV580 in-flight lightning data and form factor determination

    NASA Technical Reports Server (NTRS)

    Rudolph, T.; Horembala, J.; Eriksen, F. J.; Weigel, H. S.; Elliott, J. R.; Parker, S. L.; Perala, R. A.

    1989-01-01

    Two topics of in-flight aircraft/lightning interaction are addressed. The first is the analysis of measured data from the NASA F106B Thunderstorm Research Aircraft and the CV580 research program run by the FAA and Wright-Patterson Air Force Base. The CV580 data was investigated in a mostly qualitative sense, while the F106B data was subjected to both statistical and quantitative analysis using linear triggered lightning finite difference models. The second main topic is the analysis of field mill data and the calibration of the field mill systems. The calibration of the F106B field mill system was investigated using an improved finite difference model of the aircraft having a spatial resolution of one-quarter meter. The calibration was applied to measured field mill data acquired during the 1985 thunderstorm season. The experimental determination of form factors useful for field mill calibration was also investigated both experimentally and analytically. The experimental effort involved the use of conducting scale models and an electrolytic tank. An analytic technique was developed to aid in the understanding of the experimental results.

  3. Fast and Analytical EAP Approximation from a 4th-Order Tensor.

    PubMed

    Ghosh, Aurobrata; Deriche, Rachid

    2012-01-01

    Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.

  4. Asymptotic theory of time-varying social networks with heterogeneous activity and tie allocation.

    PubMed

    Ubaldi, Enrico; Perra, Nicola; Karsai, Márton; Vezzani, Alessandro; Burioni, Raffaella; Vespignani, Alessandro

    2016-10-24

    The dynamics of social networks is driven by the interplay between diverse mechanisms that still challenge our theoretical and modelling efforts. Amongst them, two are known to play a central role in shaping the networks' evolution, namely the heterogeneous propensity of individuals to i) be socially active and ii) establish new social relationships with their alters. Here, we empirically characterise these two mechanisms in seven real networks describing temporal human interactions in three different settings: scientific collaborations, Twitter mentions, and mobile phone calls. We find that the individuals' social activity and their strategy in choosing ties where to allocate their social interactions can be quantitatively described and encoded in a simple stochastic network modelling framework. The Master Equation of the model can be solved in the asymptotic limit. The analytical solutions provide an explicit description of both the system dynamics and the dynamical scaling laws characterising crucial aspects of the evolution of the networks. The analytical predictions match the empirical observations with accuracy, thus validating the theoretical approach. Our results provide a rigorous dynamical system framework that can be extended to include other processes shaping social dynamics and to generate data driven predictions for the asymptotic behaviour of social networks.
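
The class of models described, heterogeneous activity plus a memory rule biasing agents toward existing ties, can be sketched in a few lines. The reinforcement probability p(n) = (1 + n/c)^(-beta) used below is the commonly cited form for such models and should be read as an assumption, not this paper's exact specification:

```python
import random

def evolve_network(n_agents, activities, steps, beta=1.0, c=1.0, seed=0):
    """Toy activity-driven network with memory: at each step an agent
    activates with its own probability; an active agent contacts a NEW
    node with probability p(n) = (1 + n/c)**(-beta), where n is its
    current degree, otherwise it re-contacts one of its old ties."""
    rng = random.Random(seed)
    ties = {i: set() for i in range(n_agents)}
    for _ in range(steps):
        for i in range(n_agents):
            if rng.random() < activities[i]:
                n = len(ties[i])
                if not ties[i] or rng.random() < (1 + n / c) ** (-beta):
                    j = rng.randrange(n_agents)      # attempt a new alter
                    if j != i:
                        ties[i].add(j)
                        ties[j].add(i)
                else:
                    rng.choice(sorted(ties[i]))      # reinforce an old tie
    return ties
```

Larger `beta` (stronger memory) slows the growth of each agent's ego network, which is the mechanism behind the dynamical scaling laws the paper derives analytically.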

  5. Fast and Analytical EAP Approximation from a 4th-Order Tensor

    PubMed Central

    Ghosh, Aurobrata; Deriche, Rachid

    2012-01-01

    Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data. PMID:23365552

  6. Asymptotic theory of time-varying social networks with heterogeneous activity and tie allocation

    NASA Astrophysics Data System (ADS)

    Ubaldi, Enrico; Perra, Nicola; Karsai, Márton; Vezzani, Alessandro; Burioni, Raffaella; Vespignani, Alessandro

    2016-10-01

    The dynamics of social networks is driven by the interplay between diverse mechanisms that still challenge our theoretical and modelling efforts. Amongst them, two are known to play a central role in shaping the networks' evolution, namely the heterogeneous propensity of individuals to i) be socially active and ii) establish new social relationships with their alters. Here, we empirically characterise these two mechanisms in seven real networks describing temporal human interactions in three different settings: scientific collaborations, Twitter mentions, and mobile phone calls. We find that the individuals’ social activity and their strategy in choosing ties where to allocate their social interactions can be quantitatively described and encoded in a simple stochastic network modelling framework. The Master Equation of the model can be solved in the asymptotic limit. The analytical solutions provide an explicit description of both the system dynamics and the dynamical scaling laws characterising crucial aspects of the evolution of the networks. The analytical predictions match the empirical observations with accuracy, thus validating the theoretical approach. Our results provide a rigorous dynamical system framework that can be extended to include other processes shaping social dynamics and to generate data driven predictions for the asymptotic behaviour of social networks.

  7. Modeling nuclear processes by Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rashid, Nahrul Khair Alang Md, E-mail: nahrul@iium.edu.my

    2015-04-29

    Modelling and simulation are essential parts of the study of dynamic systems behaviours. In nuclear engineering, modelling and simulation are important to assess the expected results of an experiment before the actual experiment is conducted or in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox software that is widely used in control engineering, as a modelling platform for the study of nuclear processes including nuclear reactor behaviours. Starting from the describing equations, Simulink models for heat transfer, radionuclide decay process, delayed neutrons effect, reactor point kinetic equations with delayed neutron groups, and the effect of temperature feedback are used as examples.
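
As a minimal analogue of wiring up such a model in Simulink, the radionuclide decay example reduces to integrators fed by rate terms. A forward-Euler sketch with illustrative decay constants (not taken from the paper):

```python
def decay_chain(n0, lam1, lam2, t_end, dt=1e-3):
    """Two-member decay chain N1 -> N2 -> (stable daughter), integrated
    with forward Euler -- the same integrator-plus-rate-term structure one
    would wire up as Simulink blocks. lam1, lam2 are decay constants."""
    n1, n2 = n0, 0.0
    t = 0.0
    while t < t_end:
        dn1 = -lam1 * n1               # parent decays
        dn2 = lam1 * n1 - lam2 * n2    # daughter grows, then decays
        n1 += dn1 * dt
        n2 += dn2 * dt
        t += dt
    return n1, n2
```

With `lam2 = 0` the daughter simply accumulates, so `n1 + n2` stays equal to the initial inventory, a quick sanity check on the integration.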

  8. The European computer model for optronic system performance prediction (ECOMOS)

    NASA Astrophysics Data System (ADS)

    Repasi, Endre; Bijl, Piet; Labarre, Luc; Wittenstein, Wolfgang; Bürsing, Helge

    2017-05-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the visible or thermal infrared (IR). The project involves close co-operation of the defence and security industry and public research institutes from France, Germany, Italy, The Netherlands, and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is presented. The overall software structure and the underlying models are described, the status of the project is given, and a short outlook is offered on validation tests and the future potential of simulation for sensor assessment.

  9. A Hybrid Analytical/Numerical Model for the Characterization of Preferential Flow Path with Non-Darcy Flow

    PubMed Central

    Wang, Sen; Feng, Qihong; Han, Xiaodong

    2013-01-01

    Due to long-term fluid-solid interactions in waterflooding, tremendous variation of oil reservoir formation parameters leads to the widespread evolution of preferential flow paths, preventing further enhancement of recovery efficiency because of unstable fingering and premature breakthrough. To improve oil recovery, the characterization of preferential flow paths is essential. In previously documented efforts, fluid flow within preferential paths is assumed to obey Darcy's equation. However, the occurrence of non-Darcy flow behavior has been increasingly suggested. To examine this conjecture, the Forchheimer number, with the inertial coefficient estimated from different empirical formulas, is applied as the criterion. Considering a 10% non-Darcy effect, the fluid flow in a preferential path may indeed experience non-Darcy behavior. With the objective of characterizing a preferential path with non-Darcy flow, a hybrid analytical/numerical model has been developed to investigate the pressure transient response; it dynamically couples a numerical model describing the non-Darcy effect of a preferential flow path with an analytical reservoir model. The characteristics of the pressure transient behavior and the sensitivities of the corresponding parameters are also discussed. In addition, an interpretation approach for pressure transient testing is proposed, in which the Gravitational Search Algorithm is employed as a non-linear regression technique to match measured pressure with this hybrid model. Examples of applications from different oilfields are presented to illustrate the method. This cost-effective approach provides more accurate characterization of a preferential flow path with non-Darcy flow, which will lay a solid foundation for the design and operation of conformance control treatments, as well as several other Enhanced Oil Recovery projects. PMID:24386224
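The 10% criterion can be made concrete. One common definition in the non-Darcy literature takes the Forchheimer number as Fo = kβρv/μ and the non-Darcy effect (the inertial term's share of the pressure gradient) as E = Fo/(1 + Fo); the sketch below assumes that definition (it is an illustration, not code from the paper) and shows that a 10% effect corresponds to Fo = 1/9:

```python
def forchheimer_number(k, beta, rho, v, mu):
    """Dimensionless Forchheimer number Fo = k * beta * rho * v / mu (SI units assumed)."""
    return k * beta * rho * v / mu

def non_darcy_effect(fo):
    """Fraction of the total pressure gradient contributed by the inertial (non-Darcy) term."""
    return fo / (1.0 + fo)

# Solving E = 0.1 for Fo gives the critical Forchheimer number of 1/9
fo_critical = 0.1 / (1.0 - 0.1)
```

Flows whose estimated Fo exceeds fo_critical would, under this criterion, need the non-Darcy treatment the hybrid model provides.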

  10. Modeling quasi-static poroelastic propagation using an asymptotic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, D.W.

    2007-11-01

    Since the formulation of poroelasticity (Biot 1941) and its reformulation (Rice & Cleary 1976), there have been many efforts to solve the coupled system of equations. Perhaps because of the complexity of the governing equations, most of the work has been directed towards finding numerical solutions. For example, Lewis and co-workers published early papers (Lewis & Schrefler 1978; Lewis et al. 1991) concerned with finite-element methods for computing consolidation and subsidence and with examining the importance of coupling. Other early work dealt with flow in a deformable fractured medium (Narasimhan & Witherspoon 1976; Noorishad et al. 1984). This effort eventually evolved into a general numerical approach for modeling fluid flow and deformation (Rutqvist et al. 2002). As a result of this and other work, numerous coupled, computer-based algorithms have emerged, typically falling into one of three categories: one-way coupling, loose coupling, and full coupling (Minkoff et al. 2003). In one-way coupling, the fluid flow is modeled using a conventional numerical simulator and the resulting change in fluid pressures simply drives the deformation. In loosely coupled modeling, distinct geomechanical and fluid flow simulators are run for a sequence of time steps, and at the conclusion of each step information is passed between the simulators. In full coupling, the fluid flow and geomechanics equations are solved simultaneously at each time step (Lewis & Sukirman 1993; Lewis & Ghafouri 1997; Gutierrez & Lewis 2002). One disadvantage of a purely numerical approach to solving the governing equations of poroelasticity is that it is not clear how the various parameters interact and influence the solution. Analytic solutions have an advantage in that respect: the relationship between the medium and fluid properties is clear from the form of the solution. Unfortunately, analytic solutions are only available for highly idealized conditions, such as a uniform (Rudnicki 1986) or one-dimensional (Simon et al. 1984; Gajo & Mongiovi 1995; Wang & Kumpel 2003) medium. In this paper I derive an asymptotic, semi-analytic solution for coupled deformation and flow. The approach is similar to trajectory- or ray-based methods used to model elastic and electromagnetic wave propagation (Aki & Richards 1980; Kline & Kay 1979; Kravtsov & Orlov 1990; Keller & Lewis 1995) and, more recently, diffusive propagation (Virieux et al. 1994; Vasco et al. 2000; Shapiro et al. 2002; Vasco 2007). The asymptotic solution is valid in the presence of smoothly-varying, heterogeneous flow properties. The situation I am modeling is that of a formation with heterogeneous flow properties and uniform mechanical properties. The boundaries of the layer may vary arbitrarily and can define discontinuities in both flow and mechanical properties. Thus, using the techniques presented here, it is possible to model a stack of irregular layers with differing mechanical properties. Within each layer the hydraulic conductivity and porosity can vary smoothly but with an arbitrarily large magnitude. The advantages of this approach are that it produces explicit, semi-analytic expressions for the arrival time and amplitude of the Biot slow and fast waves, expressions which are valid in a medium with heterogeneous properties. As shown here, the semi-analytic expressions provide insight into the nature of pressure and deformation signals recorded at an observation point. Finally, the technique requires considerably fewer computer resources than does a fully numerical treatment.

  11. Psychosocial work environment and mental health--a meta-analytic review.

    PubMed

    Stansfeld, Stephen; Candy, Bridget

    2006-12-01

    To clarify the associations between psychosocial work stressors and mental ill health, a meta-analysis of psychosocial work stressors and common mental disorders was undertaken using longitudinal studies identified through a systematic literature review. The review used a standardized search strategy and strict inclusion and quality criteria across seven databases covering 1994-2005. From 24,939 citations covering social determinants of health, 50 relevant papers were identified; 38 fulfilled the inclusion criteria, and 11 were suitable for meta-analysis. The Comprehensive Meta-analysis Programme was used to analyse decision authority, decision latitude, psychological demands, and work social support (components of the job-strain and iso-strain models), the combination of effort and reward that makes up the effort-reward imbalance model, and job insecurity. Cochran's Q statistic assessed the heterogeneity of the results, and the I2 statistic determined any inconsistency between studies. Job strain, low decision latitude, low social support, high psychological demands, effort-reward imbalance, and high job insecurity predicted common mental disorders, despite heterogeneity for psychological demands and social support among men. The strongest effects were found for job strain and effort-reward imbalance. This meta-analysis provides robust, consistent evidence that (combinations of) high demands and low decision latitude, and (combinations of) high efforts and low rewards, are prospective risk factors for common mental disorders, and suggests that the psychosocial work environment is important for mental health. The associations are not merely explained by response bias. The impact of work stressors on common mental disorders differs for women and men.
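The two heterogeneity statistics named in the abstract are simple to compute from study-level effect sizes. The following sketch (illustrative only; the effect sizes and variances are hypothetical, not taken from the review) implements Cochran's Q under fixed-effect weights w_i = 1/v_i and Higgins' I² = max(0, (Q − df)/Q) × 100:

```python
def cochran_q_and_i2(effects, variances):
    """Cochran's Q and the I^2 inconsistency statistic (%) for study-level effects."""
    w = [1.0 / v for v in variances]               # fixed-effect (inverse-variance) weights
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical log-odds-ratio effects from three equally precise studies
q, i2 = cochran_q_and_i2([0.1, 0.2, 0.3], [0.01, 0.01, 0.01])
```

An I² near zero (as here) indicates that the observed spread is compatible with sampling error alone; values approaching 100% indicate substantial between-study inconsistency.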

  12. Gear noise, vibration, and diagnostic studies at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Zakrajsek, James J.; Oswald, Fred B.; Townsend, Dennis P.; Coy, John J.

    1990-01-01

    The NASA Lewis Research Center and the U.S. Army Aviation Systems Command are involved in a joint research program to advance the technology of rotorcraft transmissions. This program consists of analytical as well as experimental efforts to achieve the overall goals of reducing weight, noise, and vibration, while increasing life and reliability. Recent analytical activities are highlighted in the areas of gear noise, vibration, and diagnostics performed in-house and through NASA and U.S. Army sponsored grants and contracts. These activities include studies of gear tooth profiles to reduce transmission error and vibration as well as gear housing and rotordynamic modeling to reduce structural vibration transmission and noise radiation, and basic research into current gear failure diagnostic methodologies. Results of these activities are presented along with an overview of near term research plans in the gear noise, vibration, and diagnostics area.

  13. Analytical solutions for coagulation and condensation kinetics of composite particles

    NASA Astrophysics Data System (ADS)

    Piskunov, Vladimir N.

    2013-04-01

    The formation of composite particles consisting of a mixture of different materials is essential to many practical problems: the analysis of the consequences of accidental releases into the atmosphere; the simulation of precipitation formation in clouds; and the description of multi-phase processes in chemical reactors and industrial facilities. Computer codes developed for numerical simulation of these processes require optimization of computational methods and verification of numerical programs. The kinetic equations of composite particle formation are given in this work in a concise form (impurity integrated). Coagulation, condensation, and external sources associated with nucleation are taken into account. Analytical solutions were obtained in a number of model cases, and the general laws for the fractional redistribution of impurities were defined. The results can be applied to develop numerical algorithms that considerably reduce the simulation effort, as well as to verify numerical programs for calculating the formation kinetics of composite particles in problems of practical importance.
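For the simplest such model case, the constant-kernel Smoluchowski coagulation equation, the total particle number obeys dN/dt = −(K/2)N² and has the closed form N(t) = N0/(1 + K·N0·t/2), which is exactly the kind of analytical solution useful for verifying numerical programs. A minimal verification sketch (illustrative only, with hypothetical parameter values):

```python
def total_number_analytic(n0, kernel, t):
    """Constant-kernel Smoluchowski: dN/dt = -(K/2) N^2  =>  N(t) = N0 / (1 + K*N0*t/2)."""
    return n0 / (1.0 + 0.5 * kernel * n0 * t)

def total_number_euler(n0, kernel, t, steps=100000):
    """Naive explicit integration of the same ODE, used only to check against the analytic form."""
    dt = t / steps
    n = n0
    for _ in range(steps):
        n += -0.5 * kernel * n * n * dt
    return n

# Hypothetical case: K = 2, N0 = 1; after t = 1 the analytic answer is 1/(1 + 1) = 0.5
n_exact = total_number_analytic(1.0, 2.0, 1.0)
n_num = total_number_euler(1.0, 2.0, 1.0)
```

Agreement between the two curves is the basic verification step before the scheme is trusted on kernels that have no closed-form solution.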

  14. Ion sampling and transport in Inductively Coupled Plasma Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Farnsworth, Paul B.; Spencer, Ross L.

    2017-08-01

    Quantitative accuracy and high sensitivity in inductively coupled plasma mass spectrometry (ICP-MS) depend on consistent and efficient extraction and transport of analyte ions from an inductively coupled plasma to a mass analyzer, where they are sorted and detected. In this review we examine the fundamental physical processes that control ion sampling and transport in ICP-MS and compare the results of theory and computerized models with experimental efforts to characterize the flow of ions through plasma mass spectrometers' vacuum interfaces. We trace the flow of ions from their generation in the plasma, into the sampling cone, through the supersonic expansion in the first vacuum stage, through the skimmer, and into the ion optics that deliver the ions to the mass analyzer. At each stage we consider idealized behavior and departures from ideal behavior that affect the performance of ICP-MS as an analytical tool.

  15. On-line application of near-infrared spectroscopy for monitoring water levels in parts per million in a manufacturing-scale distillation process.

    PubMed

    Lambertus, Gordon; Shi, Zhenqi; Forbes, Robert; Kramer, Timothy T; Doherty, Steven; Hermiller, James; Scully, Norma; Wong, Sze Wing; LaPack, Mark

    2014-01-01

    An on-line analytical method based on transmission near-infrared spectroscopy (NIRS) for the quantitative determination of water concentrations (in parts per million) was developed and applied to the manufacture of a pharmaceutical intermediate. Calibration models for water analysis, built at the development site and applied at the manufacturing site, were successfully demonstrated during six manufacturing runs at a 250-gallon scale. The water measurements will be used as a forward-processing control point following distillation of a toluene product solution prior to use in a Grignard reaction. The most significant impact of using this NIRS-based process analytical technology (PAT) to replace off-line measurements is the significant reduction in the risk of operator exposure through the elimination of sampling of a severely lachrymatory and mutagenic compound. The work described in this report illustrates the development effort from proof-of-concept phase to manufacturing implementation.

  16. Some studies related to a new Hexagonal Compound Parabolic Concentrator (HCPC) as a secondary in tandem with a solar tower

    NASA Astrophysics Data System (ADS)

    Suresh, Deivarajan

    Secondary concentrators operate in the focal plane of a point-focusing system such as a paraboloidal dish or a tower and, when properly designed, are capable of enhancing the overall concentration ratio of the optical system by a factor of at least two to five. The viability of several shapes has been demonstrated both analytically and experimentally in recent years, including Compound Parabolic Concentrators (CPCs) of circular cross section and 'trumpets' as secondaries. The current research effort is centered on a Hexagonal CPC (HCPC). Major areas addressed include an overview of the state of development of secondary concentrators, background information related to the design of the HCPC, the results of an analytical study on the thermal behavior of this HCPC under concentrated-flux conditions, and computer modeling for assessing possible thermal interactions between the secondary and a high-temperature receiver.

  17. Two-dimensional dynamic stall as simulated in a varying freestream

    NASA Technical Reports Server (NTRS)

    Pierce, G. A.; Kunz, D. L.; Malone, J. B.

    1978-01-01

    A low-speed wind tunnel equipped with an axial gust generator to simulate the aerodynamic environment of a helicopter rotor was used to study the dynamic stall of a pitching blade, in an effort to ascertain to what extent harmonic velocity perturbations in the freestream affect dynamic stall. The aerodynamic moment on a two-dimensional pitching blade model was measured in both constant and pulsating airstreams. An operational analog computer was used to perform on-line data reduction, and plots of moment versus angle of attack and of the work done by the moment were obtained. The data taken in the varying freestream were then compared with constant-freestream data and with the results of two analytical methods. These comparisons show that the velocity perturbations have a significant effect on the pitching moment which cannot be consistently predicted by the analytical methods, but had no drastic effect on blade stability.

  18. Learning the Landscape: Implementation Challenges of Primary Care Innovators around Cancer Survivorship Care

    PubMed Central

    O’Malley, Denalee; Hudson, Shawna V.; Nekhlyudov, Larissa; Howard, Jenna; Rubinstein, Ellen; Lee, Heather S.; Overholser, Linda S.; Shaw, Amy; Givens, Sarah; Burton, Jay S.; Grunfeld, Eva; Parry, Carly; Crabtree, Benjamin F.

    2016-01-01

    PURPOSE This study describes the experiences of early implementers of primary care-focused cancer survivorship delivery models. METHODS Snowball sampling was used to identify innovators. Twelve participants (five cancer survivorship primary care innovators and seven content experts) attended a working conference focused on cancer survivorship population strategies and primary care transformation. Data included meeting discussion transcripts/field notes, transcribed in-depth innovator interviews, and innovators’ summaries of care models. We used a multi-step immersion/crystallization analytic approach, guided by a primary care organizational change model. RESULTS Innovative practice models included: (1) a consultative model in a primary care setting; (2) a primary care physician (PCP)-led, blended consultative/panel-based model in an oncology setting; (3) an oncology nurse navigator in a primary care practice; and (4) two sub-specialty models where PCPs in a general medical practice dedicated part of their patient panel to cancer survivors. Implementation challenges included: (1) lack of key stakeholder buy-in; (2) practice resources allocated to competing (non-survivorship) change efforts; and (3) competition with higher-priority initiatives incentivized by payers. CONCLUSIONS Cancer survivorship delivery models are potentially feasible in primary care; however, significant barriers to widespread implementation exist. Implementation efforts would benefit from increasing awareness of the potential value-add of primary care-focused strategies to address survivors’ needs. PMID:27277895

  19. CRYogenic Orbital TEstbed Ground Test Article Thermal Analysis

    NASA Technical Reports Server (NTRS)

    Piryk, David; Schallhorn, Paul; Walls, Laurie; Stopnitzky, Benny; Rhys, Noah; Wollen, Mark

    2012-01-01

    The purpose of this study was to anchor thermal and fluid system models to CRYOTE ground test data. The CRYOTE ground test article was jointly developed by Innovative Engineering Solutions, United Launch Alliance, and NASA KSC. The test article was constructed from a titanium alloy tank, Sapphire 77 composite skin (similar to G10), an external secondary payload adapter ring, a thermal vent system, multi-layer insulation, and various data acquisition instrumentation. In an effort to understand heat loads throughout this system, the GTA (filled with liquid nitrogen for safety purposes) was subjected to a series of tests in a vacuum chamber at Marshall Space Flight Center. By anchoring analytical models against test data, higher-fidelity thermal environment predictions can be made for future flight articles, which would eventually demonstrate critical cryogenic fluid management technologies such as system chilldown, transfer, pressure control, and long-term storage. Significant factors that influenced heat loads included radiative environments, multi-layer insulation performance, tank fill levels and pressures, and even contact conductance coefficients. This report demonstrates how the analytical thermal/fluid networks were established and includes supporting rationale for specific thermal responses.

  20. Cumulative biological impacts framework for solar energy projects in the California Desert

    USGS Publications Warehouse

    Davis, Frank W.; Kreitler, Jason R.; Soong, Oliver; Stoms, David M.; Dashiell, Stephanie; Hannah, Lee; Wilkinson, Whitney; Dingman, John

    2013-01-01

    This project developed analytical approaches, tools and geospatial data to support conservation planning for renewable energy development in the California deserts. Research focused on geographical analysis to avoid, minimize and mitigate the cumulative biological effects of utility-scale solar energy development. A hierarchical logic model was created to map the compatibility of new solar energy projects with current biological conservation values. The research indicated that the extent of compatible areas is much greater than the estimated land area required to achieve 2040 greenhouse gas reduction goals. Species distribution models were produced for 65 animal and plant species that were of potential conservation significance to the Desert Renewable Energy Conservation Plan process. These models mapped historical and projected future habitat suitability using 270 meter resolution climate grids. The results were integrated into analytical frameworks to locate potential sites for offsetting project impacts and evaluating the cumulative effects of multiple solar energy projects. Examples applying these frameworks in the Western Mojave Desert ecoregion show the potential of these publicly-available tools to assist regional planning efforts. Results also highlight the necessity to explicitly consider projected land use change and climate change when prioritizing areas for conservation and mitigation offsets. Project data, software and model results are all available online.

  1. Estimates of effects of residual acceleration on USML-1 experiments

    NASA Technical Reports Server (NTRS)

    Naumann, Robert J.

    1995-01-01

    The purpose of this study effort was to develop analytical models describing the effects of residual accelerations on the experiments carried on the first U.S. Microgravity Lab mission (USML-1) and to test the accuracy of these models by comparing the pre-flight predicted effects with the post-flight measured effects. After surveying the experiments to be performed on USML-1, it became evident that the anticipated residual accelerations during the mission were well below the threshold for most of the primary experiments and all of the secondary (Glovebox) experiments. The only set of experiments that could provide quantifiable effects, and thus a definitive test of the analytical models, was the three melt-growth experiments using the Bridgman-Stockbarger-type Crystal Growth Furnace (CGF). This class of experiments is by far the most sensitive to the low-level quasi-steady accelerations that are unavoidable on spacecraft operating in low Earth orbit. Because of this, they have been the drivers for the acceleration requirements imposed on the Space Station, and it is therefore appropriate that the models on which these requirements are based be tested experimentally. Also, since solidification proceeds directionally over a long period of time, the solidified ingot provides a more or less continuous record of the effects of acceleration disturbances.

  2. TOPEX Microwave Radiometer - Thermal design verification test and analytical model validation

    NASA Technical Reports Server (NTRS)

    Lin, Edward I.

    1992-01-01

    The testing of the TOPEX Microwave Radiometer (TMR) is described in terms of hardware development based on the modeling and thermal vacuum testing conducted. The TMR and the vacuum-test facility are described, and the thermal verification test included a hot steady-state segment, a cold steady-state segment, and a cold survival-mode segment totalling 65 hours. A graphic description of the test history is given in terms of temperature tracking, and two multinode TMR test-chamber models are compared with the test results. Large discrepancies between the test data and the model predictions are attributed to contact conductance, effective emittance from the multilayer insulation, and heat leaks related to deviations from the flight configuration. The TMR thermal testing/modeling effort is shown to provide technical corrections for the procedure outlined, and the need for validating predictive models is underscored.

  3. Curriculum Analytics: Application of Social Network Analysis for Improving Strategic Curriculum Decision-Making in a Research-Intensive University

    ERIC Educational Resources Information Center

    Dawson, Shane; Hubball, Harry

    2014-01-01

    This paper provides insight into the use of curriculum analytics to enhance learning-centred curricula in diverse higher education contexts. Engagement in evidence-based practice to evaluate and monitor curricula is vital to the success and sustainability of efforts to reform undergraduate and graduate programs. Emerging technology-enabled inquiry…

  4. Behind the Numbers: Why Web Analytics Matter to Your Institution

    ERIC Educational Resources Information Center

    Thayer, Shelby

    2011-01-01

    Web analytics measure, collect, analyze, and report Internet data that help website managers improve the effectiveness of the site and its marketing efforts by allowing them to better understand how users interact with the site. Applying this data can help drive the right people to the website and keep them there. According to Joshua Dodson, Web…

  5. Towards Adaptive Educational Assessments: Predicting Student Performance using Temporal Stability and Data Analytics in Learning Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thakur, Gautam; Olama, Mohammed M; McNair, Wade

    Data-driven assessments and adaptive feedback are becoming a cornerstone of research in educational data analytics and involve developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both students and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present our efforts in using data analytics to enable educationists to design novel data-driven assessment and feedback mechanisms. To achieve this objective, we investigate the temporal stability of students' grades and perform predictive analytics on academic data collected from 2009 through 2013 in one of the most commonly used learning management systems, Moodle. First, we identified the data features useful for assessment and for predicting student outcomes, such as students' scores on homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total Grade Point Average (GPA) in the term they enrolled in the course. Second, time series models in both the frequency and time domains are applied to characterize the progression as well as overall projections of the grades. In particular, the models analyzed the stability and fluctuation of grades among students across the collegiate years (from freshman to senior) and across disciplines. Third, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course in which they are currently enrolled. These models compute the likelihood of any given student failing (or passing) the current course. The time series analysis indicates that assessments and continuous feedback are more critical for freshmen and sophomores (even with easy courses) than for seniors, and that those assessments may be provided using the predictive models. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy. Our results show that there are strong ties associated with the first few weeks of coursework and that they have an impact on the design and distribution of individual modules.
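The third step can be sketched in miniature. The fragment below (an illustration only: a single hypothetical normalized predictor and toy labels, whereas the paper's actual models use many Moodle features) trains a logistic regression by gradient descent to estimate the probability that a student fails:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=3000):
    """Minimal 1-feature logistic regression: p(fail) = sigmoid(w*x + b), fit by gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x / n   # gradient of the mean log-loss w.r.t. w
            gb += (p - y) / n       # ... and w.r.t. b
        w -= lr * gw
        b -= lr * gb
    return w, b

def predict_fail(w, b, x):
    """Estimated probability of failing given the (hypothetical) early-term quiz average x."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Toy data: normalized early-term quiz averages; label 1 = failed the course
xs = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
ys = [1, 1, 1, 1, 0, 0, 0, 0]
w, b = train_logistic(xs, ys)
```

After training, low early-term averages map to high failure probabilities, which is the signal an early-warning system would act on.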

  6. Incorporating Course-Based Undergraduate Research Experiences into Analytical Chemistry Laboratory Curricula

    ERIC Educational Resources Information Center

    Kerr, Melissa A.; Yan, Fei

    2016-01-01

    A continuous effort within an undergraduate university setting is to improve students' learning outcomes and thus improve students' attitudes about a particular field of study. This is undoubtedly relevant within a chemistry laboratory. This paper reports the results of an effort to introduce a problem-based learning strategy into the analytical…

  7. INVESTIGATION OF CE/LIF AS A TOOL IN THE CHARACTERIZATION OF SEWAGE EFFLUENT FOR FLUORESCENT ACIDICS: DETERMINATION OF SALICYLIC ACID

    EPA Science Inventory



    The investigation of emerging contaminant issues is a proactive effort in environmental analysis. As a part of this effort, sewage effluent is of current analytical interest because of the presence of pharmaceuticals and their metabolites and personal care products The en...

  8. [Economic Growth and Development].

    ERIC Educational Resources Information Center

    Clausen, A. W.

    Recent efforts of the World Bank to improve global economic problems are described, issues which will influence the role of the World Bank in the decade to come are discussed, and the Bank's future role is examined. Recent World Bank efforts to help developing nations include a lending program, project investments, analytical and advisory work,…

  9. Using Participatory System Dynamics Modeling to Examine the Local HIV Test and Treatment Care Continuum in Order to Reduce Community Viral Load.

    PubMed

    Weeks, Margaret R; Li, Jianghong; Lounsbury, David; Green, Helena Danielle; Abbott, Maryann; Berman, Marcie; Rohena, Lucy; Gonzalez, Rosely; Lang, Shawn; Mosher, Heather

    2017-12-01

    Achieving community-level goals to eliminate the HIV epidemic requires coordinated efforts through community consortia with a common purpose to examine and critique their own HIV testing and treatment (T&T) care system and build effective tools to guide their efforts to improve it. Participatory system dynamics (SD) modeling offers conceptual, methodological, and analytical tools to engage diverse stakeholders in systems conceptualization and visual mapping of dynamics that undermine community-level health outcomes and identify those that can be leveraged for systems improvement. We recruited and engaged a 25-member multi-stakeholder Task Force, whose members provide or utilize HIV-related services, to participate in SD modeling to examine and address problems of their local HIV T&T service system. Findings from the iterative model building sessions indicated Task Force members' increasingly complex understanding of the local HIV care system and demonstrated their improved capacity to visualize and critique multiple models of the HIV T&T service system and identify areas of potential leverage. Findings also showed members' enhanced communication and consensus in seeking deeper systems understanding and options for solutions. We discuss implications of using these visual SD models for subsequent simulation modeling of the T&T system and for other community applications to improve system effectiveness. © Society for Community Research and Action 2017.

  10. Accelerated tumor invasion under non-isotropic cell dispersal in glioblastomas

    NASA Astrophysics Data System (ADS)

    Fort, Joaquim; Solé, Ricard V.

    2013-05-01

    Glioblastomas are highly diffuse, malignant tumors that have so far evaded clinical treatment. The strongly invasive behavior of cells in these tumors makes them very resistant to treatment, and for this reason both experimental and theoretical efforts have been directed toward understanding the spatiotemporal pattern of tumor spreading. Although usual models assume a standard diffusion behavior, recent experiments with cell cultures indicate that cells tend to move in directions close to that of glioblastoma invasion, thus indicating that a biased random walk model may be much more appropriate. Here we show analytically that, for realistic parameter values, the speeds predicted by biased dispersal are consistent with experimentally measured data. We also find that models beyond reaction-diffusion-advection equations are necessary to capture this substantial effect of biased dispersal on glioblastoma spread.
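The effect of biased dispersal can be illustrated with a one-dimensional biased random walk: walkers stepping toward the invasion direction with probability p > 1/2 acquire a net drift of (2p − 1) per step on top of their diffusive spread, which is what accelerates the invasion front relative to standard diffusion. A sketch under these assumptions (hypothetical parameters, not the paper's model):

```python
import random

def mean_front_position(bias, walkers=2000, steps=200, seed=42):
    """Mean displacement of walkers that step +1 with probability `bias`, else -1."""
    rng = random.Random(seed)
    total = 0
    for _ in range(walkers):
        x = 0
        for _ in range(steps):
            x += 1 if rng.random() < bias else -1
        total += x
    return total / walkers

# Hypothetical bias toward the invasion direction (0.6) vs. unbiased diffusion (0.5);
# the expected drift of the biased ensemble is (2*0.6 - 1) * 200 = 40 steps
drift_biased = mean_front_position(0.6)
drift_unbiased = mean_front_position(0.5)
```

The unbiased ensemble stays centred near the origin while the biased one advances steadily, mirroring why a biased-dispersal model predicts faster tumor spread than standard diffusion.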

  11. Using patient self-reports to study heterogeneity of treatment effects in major depressive disorder

    PubMed Central

    Kessler, R.C.; van Loo, H.M.; Wardenaar, K.J.; Bossarte, R.M.; Brenner, L.A.; Ebert, D.D; de Jonge, P.; Nierenberg, A.A.; Rosellini, A.J.; Sampson, N.A.; Schoevers, R.A.; Wilcox, M.A.; Zaslavsky, A.M.

    2016-01-01

    Aims: Clinicians need guidance to address the heterogeneity of treatment responses of patients with major depressive disorder (MDD). While prediction schemes based on symptom clustering and biomarkers have so far not yielded results of sufficient strength to inform clinical decision-making, prediction schemes based on big data predictive analytic models might be more practically useful. Methods: We review evidence suggesting that prediction equations based on symptoms and other easily assessed clinical features found in previous research to predict MDD treatment outcomes might provide a foundation for developing predictive analytic clinical decision support models that could help clinicians select optimal (personalized) MDD treatments. These methods could also be useful in targeting patient subsamples for more expensive biomarker assessments. Results: Approximately two dozen baseline variables obtained from medical records or patient reports have been found repeatedly in MDD treatment trials to predict overall treatment outcomes (i.e., intervention versus control) or differential treatment outcomes (i.e., intervention A versus intervention B). Similar evidence has been found in observational studies of MDD persistence-severity. However, no treatment studies have yet attempted to develop treatment outcome equations using the full set of these predictors. Promising preliminary empirical results coupled with recent developments in statistical methodology suggest that models could be developed to provide useful clinical decision support in personalized treatment selection. These tools could also provide a strong foundation to increase statistical power in focused studies of biomarkers and MDD heterogeneity of treatment response in subsequent controlled trials.
Conclusions: Coordinated efforts are needed to develop a protocol for systematically collecting information about established predictors of heterogeneity of MDD treatment response in large observational treatment studies, applying and refining these models in subsequent pragmatic trials, carrying out pooled secondary analyses to extract the maximum amount of information from these coordinated studies, and using this information to focus future discovery efforts in the segment of the patient population in which continued uncertainty about treatment response exists. PMID:26810628

  12. Using patient self-reports to study heterogeneity of treatment effects in major depressive disorder.

    PubMed

    Kessler, R C; van Loo, H M; Wardenaar, K J; Bossarte, R M; Brenner, L A; Ebert, D D; de Jonge, P; Nierenberg, A A; Rosellini, A J; Sampson, N A; Schoevers, R A; Wilcox, M A; Zaslavsky, A M

    2017-02-01

    Clinicians need guidance to address the heterogeneity of treatment responses of patients with major depressive disorder (MDD). While prediction schemes based on symptom clustering and biomarkers have so far not yielded results of sufficient strength to inform clinical decision-making, prediction schemes based on big data predictive analytic models might be more practically useful. We review evidence suggesting that prediction equations based on symptoms and other easily-assessed clinical features found in previous research to predict MDD treatment outcomes might provide a foundation for developing predictive analytic clinical decision support models that could help clinicians select optimal (personalised) MDD treatments. These methods could also be useful in targeting patient subsamples for more expensive biomarker assessments. Approximately two dozen baseline variables obtained from medical records or patient reports have been found repeatedly in MDD treatment trials to predict overall treatment outcomes (i.e., intervention v. control) or differential treatment outcomes (i.e., intervention A v. intervention B). Similar evidence has been found in observational studies of MDD persistence-severity. However, no treatment studies have yet attempted to develop treatment outcome equations using the full set of these predictors. Promising preliminary empirical results coupled with recent developments in statistical methodology suggest that models could be developed to provide useful clinical decision support in personalised treatment selection. These tools could also provide a strong foundation to increase statistical power in focused studies of biomarkers and MDD heterogeneity of treatment response in subsequent controlled trials. 
Coordinated efforts are needed to develop a protocol for systematically collecting information about established predictors of heterogeneity of MDD treatment response in large observational treatment studies, applying and refining these models in subsequent pragmatic trials, carrying out pooled secondary analyses to extract the maximum amount of information from these coordinated studies, and using this information to focus future discovery efforts in the segment of the patient population in which continued uncertainty about treatment response exists.
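As a sketch of the kind of predictive analytic decision-support model these two records envision, one can fit a logistic regression of treatment response on a handful of baseline predictors. Everything below is synthetic: the cohort, the predictor count, and the effect sizes are invented for illustration, not taken from any MDD trial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: five hypothetical baseline predictors (e.g. symptom
# severity, prior episodes, comorbid anxiety); weights are invented.
n, d = 2000, 5
X = rng.normal(size=(n, d))
true_w = np.array([-1.0, 0.6, -0.4, 0.3, 0.0])
p = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p).astype(float)  # 1 = treatment response

def fit_logistic(X, y, lr=0.1, epochs=300):
    """Plain gradient-descent logistic regression (no regularization)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (pred - y) / len(y)
    return w

w = fit_logistic(X, y)
# Sigmoid > 0.5 exactly when the linear score is positive.
acc = float(np.mean(((X @ w) > 0) == (y == 1.0)))
```

In practice the authors argue for fitting such equations on the roughly two dozen replicated predictors, then using the fitted scores to triage patients for more expensive biomarker workups.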

  13. ESIP Earth Sciences Data Analytics (ESDA) Cluster - Work in Progress

    NASA Technical Reports Server (NTRS)

    Kempler, Steven

    2015-01-01

    The purpose of this poster is to: promote a common understanding of the usefulness of, and activities that pertain to, data analytics and, more broadly, the data scientist; facilitate collaborations to better understand the cross usage of heterogeneous datasets and to provide accommodating data analytics expertise, now and as the needs evolve into the future; and identify gaps that, once filled, will further collaborative activities. Objectives: provide a forum for academic discussions that gives ESIP members a better understanding of the various aspects of Earth science data analytics; bring in guest speakers to describe external efforts and further teach us about the broader use of data analytics; perform activities that (1) compile use cases generated from specific community needs to cross-analyze heterogeneous data, (2) compile sources of analytics tools, in particular to satisfy the needs of the above data users, (3) examine gaps between needs and sources, (4) examine gaps between needs and community expertise, and (5) document the specific data analytics expertise needed to perform Earth science data analytics; and seek graduate data analytics (data science) student internship opportunities.

  14. Reducing the Analytical Bottleneck for Domain Scientists: Lessons from a Climate Data Visualization Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Poco, Jorge; Bertini, Enrico

    2016-01-01

    The gap between the large-scale data production rate and the rate of generation of data-driven scientific insights has led to an analytical bottleneck in scientific domains like climate and biology. This is primarily due to the lack of innovative analytical tools that can help scientists efficiently analyze and explore alternative hypotheses about the data, and communicate their findings effectively to a broad audience. In this paper, by reflecting on a set of successful collaborative research efforts between a group of climate scientists and a team of visualization researchers, we examine how interactive visualization can help reduce the analytical bottleneck for domain scientists.

  15. Thermal performance modeling of NASA's scientific balloons

    NASA Astrophysics Data System (ADS)

    Franco, H.; Cathey, H.

    The flight performance of a scientific balloon is highly dependent on the interaction between the balloon and its environment. The balloon is a thermal vehicle. Modeling a scientific balloon's thermal performance has proven to be a difficult analytical task. Most previous thermal models have attempted these analyses by using either a bulk thermal model approach or simplified representations of the balloon. These approaches to date have provided reasonable, but not very accurate, results. Improvements have been made in recent years using thermal analysis tools developed for the thermal modeling of spacecraft and other sophisticated heat transfer problems. These tools, which now allow for accurate modeling of highly transmissive materials, have been applied to the thermal analysis of NASA's scientific balloons. A research effort has been started that utilizes the "Thermal Desktop" extension to AutoCAD. This paper will discuss the development of thermal models for both conventional and Ultra Long Duration super-pressure balloons. This research effort has focused on incremental analysis stages of development to assess the accuracy of the tool and the model resolution required to produce usable data. The first-stage balloon thermal analyses started with simple spherical balloon models with a limited number of nodes and expanded the number of nodes to determine the required model resolution. These models were then modified to include additional details such as load tapes. The second-stage analyses looked at natural-shaped zero-pressure balloons. Load tapes were then added to these shapes, again with the goal of determining the required modeling accuracy by varying the number of gores. The third stage, following the same steps as the zero-pressure balloon efforts, was directed at modeling super-pressure pumpkin-shaped balloons. 
The results were then used to develop analysis guidelines and an approach for modeling balloons for both simple first order estimates and detailed full models. The development of the radiative environment and program input files, the development of the modeling techniques for balloons, and the development of appropriate data output handling techniques for both the raw data and data plots will be discussed. A general guideline to match predicted balloon performance with known flight data will also be presented. One long-term goal of this effort is to develop simplified approaches and techniques to include results in performance codes being developed.
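For contrast with the detailed Thermal Desktop models described above, the older bulk-model approach the abstract mentions can be reduced to a one-node radiative energy balance. The absorptivity, emissivity, and flux values below are illustrative guesses, not NASA's figures, and convection and film conduction are ignored entirely.

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(radius, alpha_solar=0.2, emissivity=0.8,
                     solar=1361.0, earth_ir=240.0):
    """One-node radiative balance for a spherical balloon at float:
    absorbed solar (projected area) plus absorbed Earth IR (roughly half
    the surface) equals emitted IR (full surface). Illustrative only."""
    surface = 4.0 * math.pi * radius ** 2
    projected = math.pi * radius ** 2
    absorbed = (alpha_solar * solar * projected
                + emissivity * earth_ir * surface / 2.0)
    return (absorbed / (emissivity * SIGMA * surface)) ** 0.25
```

Note that the radius cancels out of a purely radiative balance, which is one reason such bulk models give "reasonable, but not very accurate" results compared with node-resolved models of a transmissive film.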

  16. Nonlinear Pressurization and Modal Analysis Procedure for Dynamic Modeling of Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Smalley, Kurt B.; Tinker, Michael L.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    An introduction and set of guidelines for finite element dynamic modeling of nonrigidized inflatable structures is provided. A two-step approach is presented, involving 1) nonlinear static pressurization of the structure and updating of the stiffness matrix and 2) linear normal modes analysis using the updated stiffness. Advantages of this approach are that it provides physical realism in modeling of pressure stiffening, and it maintains the analytical convenience of a standard linear eigensolution once the stiffness has been modified. Demonstration of the approach is accomplished through the creation and test verification of an inflated cylinder model using a large commercial finite element code. Good frequency and mode shape comparisons are obtained with test data and previous modeling efforts, verifying the accuracy of the technique. Problems encountered in the application of the approach, as well as their solutions, are discussed in detail.
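The two-step procedure can be mimicked on a toy system: update the stiffness matrix for pressure, then run a standard eigensolution. The 3-DOF spring chain and the proportional stiffening term below are stand-ins for illustration, not the paper's finite element model or its actual geometric stiffness.

```python
import numpy as np

def stiffness(k):
    """Tridiagonal stiffness of a fixed-fixed 3-DOF spring chain."""
    return np.array([[2*k, -k, 0.0], [-k, 2*k, -k], [0.0, -k, 2*k]])

M = np.eye(3)           # lumped unit masses
K0 = stiffness(100.0)   # unpressurized stiffness

def pressurized_K(K, pressure, alpha=0.5):
    """Step 1 stand-in: add a geometric-stiffening term that grows with
    internal pressure (a crude proxy for the nonlinear static solution)."""
    return K + alpha * pressure * np.eye(K.shape[0])

def natural_freqs(K, M):
    """Step 2: linear normal modes with the (updated) stiffness, in Hz."""
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    lam = np.linalg.eigvalsh(Linv @ K @ Linv.T)
    return np.sqrt(lam) / (2.0 * np.pi)

f_unpressurized = natural_freqs(K0, M)
f_pressurized = natural_freqs(pressurized_K(K0, 50.0), M)
```

The point of the split is visible even here: all of the nonlinearity lives in the stiffness update, after which every mode comes from one cheap symmetric eigensolution, and pressurization raises every natural frequency.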

  17. Learjet Model 55 Wing Analysis with Landing Loads

    NASA Technical Reports Server (NTRS)

    Boroughs, R. R.

    1985-01-01

    The NASTRAN analysis was used to determine the impact of new landing loads on the Learjet Model 55 wing. These new landing loads were the result of a performance improvement effort to increase the landing weight of the aircraft from 17,000 lbs. to 18,000 lbs. and to extend the life of the tires and brakes by incorporating larger tires and heavy-duty brakes. Landing loads for the original 17,000 lb. airplane landing configuration were applied to the full-airplane NASTRAN model. The analytical results were correlated with the strain gage data from the original landing load static tests. The landing loads for the 18,000 lb. airplane were then applied to the full-airplane NASTRAN model, and a comparison was made with the original Model 55 data. The results of this comparison enabled Learjet to determine the difference in stress distribution in the wing due to these two different sets of landing loads.

  18. Machine learning of network metrics in ATLAS Distributed Data Management

    NASA Astrophysics Data System (ADS)

    Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration

    2017-10-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high-luminosity physics, automation of everyday data management tasks has become necessary. Previously, many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.
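A minimal stand-in for the forecasting step is a least-squares autoregression on lagged values of a metric series. The synthetic "hourly throughput" below and the lag-24 model are illustrative assumptions; the ATLAS Analytics Platform and its actual feature pipeline are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hourly link throughput: daily cycle plus noise (a stand-in
# for historical network metrics pulled from an analytics platform).
t = np.arange(24 * 30)
series = 100.0 + 20.0 * np.sin(2 * np.pi * t / 24) + rng.normal(0.0, 2.0, t.size)

def ar_fit(series, lags=24):
    """Least-squares autoregression: predict the next value from the
    previous `lags` values plus an intercept."""
    X = np.column_stack([series[lags - j:len(series) - j]
                         for j in range(1, lags + 1)])
    X = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X, series[lags:], rcond=None)
    return coef

def ar_predict(history, coef, lags=24):
    recent = history[-lags:][::-1]  # most recent value first, matching fit
    return coef[0] + recent @ coef[1:]

coef = ar_fit(series[:24 * 25])          # train on the first 25 days
errors = [abs(ar_predict(series[:k], coef) - series[k])
          for k in range(24 * 25, len(series))]
mae = float(np.mean(errors))             # one-step error over the last 5 days
```

A scheduler could consume such one-step forecasts directly; the paper's framework swaps this toy regression for models trained and scored automatically.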

  19. Life cycles of transient planetary waves

    NASA Technical Reports Server (NTRS)

    Nathan, Terrence

    1993-01-01

    In recent years there has been an increasing effort devoted to understanding the physical and dynamical processes that govern the global-scale circulation of the atmosphere. This effort has been motivated, in part, by: (1) a wealth of new satellite data; (2) an urgent need to assess the potential impact of chlorofluorocarbons on our climate; (3) an inadequate understanding of the interactions between the troposphere and stratosphere and the role that such interactions play in short- and long-term climate variability; and (4) the realization that addressing changes in our global climate requires understanding the interactions among various components of the earth system. The research currently being carried out represents an effort to address some of these issues through studies that combine radiation, ozone, seasonal thermal forcing, and dynamics. Satellite and ground-based data that are already available are being used to construct basic states for our analytical and numerical models. Significant accomplishments from 1991-1992 are presented and include the following: (1) ozone-dynamics interaction; (2) periodic local forcing and low-frequency variability; and (3) steady forcing and low-frequency variability.

  20. Total Quality Management (TQM), an Overview

    DTIC Science & Technology

    1991-09-01

    This report provides an overview of Total Quality Management (TQM). It discusses the reasons TQM is a current growth industry, what it is, and how one implements it. It describes the basic analytical tools, statistical process control, some advanced analytical tools, tools used by process improvement teams to enhance their own operations, and action plans for making improvements. The final sections discuss assessing quality efforts and measuring quality.

  1. Learning Analytics in Small-Scale Teacher-Led Innovations: Ethical and Data Privacy Issues

    ERIC Educational Resources Information Center

    Rodríguez-Triana, María Jesús; Martínez-Monés, Alejandra; Villagrá-Sobrino, Sara

    2016-01-01

    As a further step towards maturity, the field of learning analytics (LA) is working on the definition of frameworks that structure the legal and ethical issues that scholars and practitioners must take into account when planning and applying LA solutions to their learning contexts. However, current efforts in this direction tend to be focused on…

  2. Analytical Tools for Behavioral Influences Operations

    DTIC Science & Technology

    2003-12-01

    NASIC's Investment in Analytical Capabilities; Study Limitations. This project is envisioned as a foundation for future work by NASIC analysts, who will use the tools identified in this study. Though this study took all three categories into account, most (90%) of the focus of the SRA team's effort was on identifying and analyzing capabilities.

  3. Stochastic evolutionary dynamics in minimum-effort coordination games

    NASA Astrophysics Data System (ADS)

    Li, Kun; Cong, Rui; Wang, Long

    2016-08-01

    The minimum-effort coordination game has recently drawn more attention because human behavior in this social dilemma is often inconsistent with the predictions of classical game theory. Here, we combine evolutionary game theory and coalescence theory to investigate this game in finite populations. Both analytic results and individual-based simulations show that effort costs play a key role in the evolution of contribution levels, in good agreement with experimental observations. Besides well-mixed populations, set-structured populations have also been taken into consideration. There we find that a large number of sets and a moderate migration rate greatly promote effort levels, especially for high effort costs.
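The role of effort costs is visible even in a deterministic best-response sketch of the minimum-effort game, using effort levels 1-7 as in the classic experiments. This is a far simpler dynamic than the paper's stochastic evolutionary model, and the starting efforts are arbitrary.

```python
LEVELS = range(1, 8)  # effort levels 1..7, as in classic experiments

def best_response(others_min, cost):
    """Effort maximizing min(e, others_min) - cost * e for one player."""
    return max(LEVELS, key=lambda e: min(e, others_min) - cost * e)

def converge(efforts, cost, rounds=50):
    """Sequential myopic best-response updates until (assumed) settled."""
    efforts = list(efforts)
    for _ in range(rounds):
        for i in range(len(efforts)):
            others_min = min(efforts[:i] + efforts[i + 1:])
            efforts[i] = best_response(others_min, cost)
    return efforts

low_cost = converge([3, 5, 7], cost=0.5)   # settles at a common effort
high_cost = converge([3, 5, 7], cost=1.5)  # cost > 1: minimum effort wins
```

With cost below 1, matching the group minimum is optimal and any common effort is an equilibrium; with cost above 1, extra effort never pays and the dynamics collapse to the lowest level, mirroring the cost effect the paper studies.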

  4. Sustainability and optimal control of an exploited prey predator system through provision of alternative food to predator.

    PubMed

    Kar, T K; Ghosh, Bapan

    2012-08-01

    In the present paper, we develop a simple two species prey-predator model in which the predator is partially coupled with alternative prey. The aim is to study the consequences of providing additional food to the predator as well as the effects of harvesting efforts applied to both the species. It is observed that the provision of alternative food to predator is not always beneficial to the system. A complete picture of the long run dynamics of the system is discussed based on the effort pair as control parameters. Optimal augmentations of prey and predator biomass at final time have been investigated by optimal control theory. Also the short and large time effects of the application of optimal control have been discussed. Finally, some numerical illustrations are given to verify our analytical results with the help of different sets of parameters. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
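A minimal numerical sketch of such a harvested prey-predator system, with a constant alternative-food term for the predator, is given below. The functional forms, parameter values, and harvesting efforts E1 and E2 are illustrative assumptions, not the paper's model.

```python
def simulate(x0, y0, r=1.0, K=10.0, a=0.3, c=0.2, d=0.5, A=0.1,
             E1=0.1, E2=0.05, dt=0.01, steps=50000):
    """Forward-Euler run of an illustrative harvested prey-predator model.
    Prey: logistic growth minus predation and harvesting effort E1.
    Predator: conversion of captured prey plus a constant alternative-food
    bonus A, minus natural death d and harvesting effort E2."""
    x, y = x0, y0
    for _ in range(steps):
        dx = r * x * (1 - x / K) - a * x * y - E1 * x
        dy = y * (c * a * x + A - d) - E2 * y
        x, y = max(x + dt * dx, 0.0), max(y + dt * dy, 0.0)
    return x, y

prey, predator = simulate(2.0, 1.0)  # settles near the interior equilibrium
```

For these parameters the interior equilibrium is x* = (d + E2 - A)/(c*a) = 7.5 and y* = (r(1 - x*/K) - E1)/a = 0.5, so the effort pair (E1, E2) and the alternative food A directly move the long-run stock levels, which is the kind of control trade-off the paper analyzes.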

  5. Agreement for NASA/OAST - USAF/AFSC space interdependency on spacecraft environment interaction

    NASA Technical Reports Server (NTRS)

    Pike, C. P.; Stevens, N. J.

    1980-01-01

    A joint AF/NASA comprehensive program on spacecraft environment interactions consists of combined contractual and in-house efforts aimed at understanding spacecraft environment interaction phenomena and relating ground test results to space conditions. Activities include: (1) a concerted effort to identify project-related environmental interactions; (2) a materials investigation to measure the basic properties of materials and develop or modify materials as needed; and (3) a ground simulation investigation to evaluate basic plasma interaction phenomena and provide inputs to the analytical modeling investigation. Systems performance is evaluated by both ground tests and analysis. There is an environmental impact investigation to determine the effect of future large spacecraft on the charged particle environment. Space flight investigations are planned to verify the results. The products of this program are test standards and design guidelines which summarize the technology, specify test criteria, and provide techniques to minimize or eliminate system interactions with the charged particle environment.

  6. UCLA IGPP Space Plasma Simulation Group

    NASA Technical Reports Server (NTRS)

    1998-01-01

    During the past 10 years the UCLA IGPP Space Plasma Simulation Group has pursued its theoretical effort to develop a Mission Oriented Theory (MOT) for the International Solar Terrestrial Physics (ISTP) program. This effort has been based on a combination of approaches: analytical theory, large scale kinetic (LSK) calculations, global magnetohydrodynamic (MHD) simulations and self-consistent plasma kinetic (SCK) simulations. These models have been used to formulate a global interpretation of local measurements made by the ISTP spacecraft. The regions of applications of the MOT cover most of the magnetosphere: the solar wind, the low- and high-latitude magnetospheric boundary, the near-Earth and distant magnetotail, and the auroral region. Most recent investigations include: plasma processes in the electron foreshock, response of the magnetospheric cusp, particle entry in the magnetosphere, sources of observed distribution functions in the magnetotail, transport of oxygen ions, self-consistent evolution of the magnetotail, substorm studies, effects of explosive reconnection, and auroral acceleration simulations.

  7. Analysis and Synthesis of Tonal Aircraft Noise Sources

    NASA Technical Reports Server (NTRS)

    Allen, Matthew P.; Rizzi, Stephen A.; Burdisso, Ricardo; Okcu, Selen

    2012-01-01

    Fixed and rotary wing aircraft operations can have a significant impact on communities in proximity to airports. Simulation of predicted aircraft flyover noise, paired with listening tests, is useful to noise reduction efforts since it allows direct annoyance evaluation of aircraft or operations currently in the design phase. This paper describes efforts to improve the realism of synthesized source noise by including short term fluctuations, specifically for inlet-radiated tones resulting from the fan stage of turbomachinery. It details analysis performed on an existing set of recorded turbofan data to isolate inlet-radiated tonal fan noise, then extract and model short term tonal fluctuations using the analytic signal. Methodologies for synthesizing time-variant tonal and broadband turbofan noise sources using measured fluctuations are also described. Finally, subjective listening test results are discussed which indicate that time-variant synthesized source noise is perceived to be very similar to recordings.
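The analytic-signal analysis the abstract describes can be sketched in a few lines: an FFT-based Hilbert transform recovers the amplitude envelope and instantaneous frequency of a fluctuating tone. The 1 kHz "fan tone" with a 3 Hz amplitude flutter below is synthetic, not the paper's turbofan data.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (the Hilbert-transform construction):
    zero the negative frequencies and double the positive ones."""
    N = len(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

fs = 8000
t = np.arange(fs) / fs
# Synthetic "fan tone": 1 kHz carrier with a slow 3 Hz amplitude flutter.
x = (1.0 + 0.2 * np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 1000 * t)

z = analytic_signal(x)
envelope = np.abs(z)                           # recovers the slow flutter
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # stays near 1000 Hz
```

Extracted envelope and frequency tracks like these are exactly the short-term fluctuations that can be measured from recordings and then reimposed on synthesized tones to improve realism.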

  8. Experimental Characterization and Micromechanical Modeling of Woven Carbon/Copper Composites

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Pauly, Christopher C.; Pindera, Marek-Jerzy

    1997-01-01

    The results of an extensive experimental characterization and a preliminary analytical modeling effort for the elastoplastic mechanical behavior of 8-harness satin weave carbon/copper (C/Cu) composites are presented. Previous experimental and modeling investigations of woven composites are discussed, as is the evolution of, and motivation for, the continuing research on C/Cu composites. Experimental results of monotonic and cyclic tension, compression, and Iosipescu shear tests, and combined tension-compression tests, are presented. With regard to the test results, emphasis is placed on the effect of strain gauge size and placement, the effect of alloying the copper matrix to improve fiber-matrix bonding, yield surface characterization, and failure mechanisms. The analytical methodology used in this investigation consists of an extension of the three-dimensional generalized method of cells (GMC-3D) micromechanics model, developed by Aboudi (1994), to include inhomogeneity and plasticity effects on the subcell level. The extension of the model allows prediction of the elastoplastic mechanical response of woven composites, as represented by a true repeating unit cell for the woven composite. The model is used to examine the effects of refining the representative geometry of the composite, altering the composite overall fiber volume fraction, changing the size and placement of the strain gauge with respect to the composite's reinforcement weave, and including porosity within the infiltrated fiber yarns on the in-plane elastoplastic tensile, compressive, and shear response of 8-harness satin C/Cu. The model predictions are also compared with the appropriate monotonic experimental results.
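For orientation, the simplest homogenization estimates bracket the effective stiffness that a model like GMC-3D computes in far more detail. The carbon and copper moduli below are rough illustrative textbook values, and these bounds ignore the weave geometry entirely.

```python
def voigt_reuss(Ef, Em, vf):
    """Voigt (isostrain) upper and Reuss (isostress) lower bounds on the
    effective modulus of a two-phase composite with fiber fraction vf."""
    voigt = vf * Ef + (1 - vf) * Em
    reuss = 1.0 / (vf / Ef + (1 - vf) / Em)
    return voigt, reuss

# Rough room-temperature moduli in GPa (illustrative values only).
upper, lower = voigt_reuss(Ef=230.0, Em=120.0, vf=0.4)
```

Any credible micromechanics prediction for the woven C/Cu laminate should fall between such bounds; the value of a unit-cell model is in resolving where within them the real weave, porosity, and interface effects place the response.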

  9. Nature and Nurturing: Parenting in the Context of Child Temperament

    PubMed Central

    Kiff, Cara J.; Lengua, Liliana J.; Zalewski, Maureen

    2011-01-01

    Accounting for both bidirectional and interactive effects between parenting and child temperament can fine-tune theoretical models of the role of parenting and temperament in children's development of adjustment problems. Evidence for bidirectional and interactive effects between parenting and children's characteristics of frustration, fear, self-regulation, and impulsivity was reviewed, and an overall model of children's individual differences in response to parenting is proposed. In general, children high in frustration, impulsivity and low in effortful control are more vulnerable to the adverse effects of negative parenting, while in turn, many negative parenting behaviors predict increases in these characteristics. Frustration, fearfulness, and effortful control also appear to elicit parenting behaviors that can predict increases in these characteristics. Irritability renders children more susceptible to negative parenting behaviors. Fearfulness operates in a very complex manner, sometimes increasing children's responses to parenting behaviors and sometimes mitigating them and apparently operating differently across gender. Important directions for future research include the use of study designs and analytic approaches that account for the direction of effects and for developmental changes in parenting and temperament over time. PMID:21461681

  10. A change roadmap towards research paradigm in low-resource countries: retinoblastoma model in Egypt.

    PubMed

    Alfaar, Ahmad Samir; Nour, Radwa; Bakry, Mohamed Sabry; Kamal, Mohamed; Hassanain, Omneya; Labib, Rania M; Rashed, Wafaa M; Elzomor, Hossam; Alieldin, Adel; Taha, Hala; Zaghloul, Mohamed Saad; Ezzat, Sameera; AboElnaga, Sherif

    2017-02-01

    Research on childhood diseases represents a great global challenge. This challenge is greatest in childhood cancer disciplines and in the developing world. In this paper, we describe our institution's experience in starting a structured childhood cancer research program in one of the developing countries in a short time, based on philanthropic efforts. We used retinoblastoma as an example of what was conducted in this program. Starting in 2008, this program included improving clinical practice and its related supporting services, besides developing new research services that both complement the clinical activities and pave the way towards creating a research foundation in the country. Results included developing hospital standard treatment protocols, developing national clinical trials, joining international consortia for childhood cancer clinical trials, developing data collection tools and real-time analytics, establishing a biobanking facility, and developing a highly qualified team for conducting clinical, epidemiologic, and translational research studies. Moreover, this effort resulted in improving both clinical practice and patients' awareness nationally. This model can be used by other startup facilities that aim to find answers for their national health problems in low-resource settings.

  11. Of mental models, assumptions and heuristics: The case of acids and acid strength

    NASA Astrophysics Data System (ADS)

    McClary, Lakeisha Michelle

    This study explored what cognitive resources (i.e., units of knowledge necessary to learn) first-semester organic chemistry students used to make decisions about acid strength and how those resources guided the prediction, explanation, and justification of trends in acid strength. We were specifically interested in identifying and characterizing the mental models, assumptions, and heuristics that students relied upon to make their decisions, in most cases under time constraints. The views about acids and acid strength were investigated for twenty undergraduate students. Data sources for this study included written responses and individual interviews. The data were analyzed using a qualitative methodology to answer five research questions. Data analysis regarding these research questions was based on existing theoretical frameworks: problem representation (Chi, Feltovich & Glaser, 1981), mental models (Johnson-Laird, 1983), intuitive assumptions (Talanquer, 2006), and heuristics (Evans, 2008). These frameworks were combined to develop the framework from which our data were analyzed. Results indicated that first-semester organic chemistry students' use of cognitive resources was complex and dependent on their understanding of the behavior of acids. Expressed mental models were generated using prior knowledge and assumptions about acids and acid strength; these models were then employed to make decisions. Explicit and implicit features of the compounds in each task mediated participants' attention, which triggered the use of a very limited number of heuristics, or shortcut reasoning strategies. Many students, however, were able to apply more effortful analytic reasoning, though correct trends were predicted infrequently. Most students continued to use their mental models, assumptions, and heuristics to explain a given trend in acid strength and to justify their predicted trends, but the tasks influenced a few students to shift from one model to another. 
An emergent finding from this project was that the problem representation greatly influenced students' ability to make correct predictions in acid strength.

  12. Activating analytic thinking enhances the value given to individualizing moral foundations.

    PubMed

    Yilmaz, Onurcan; Saribay, S Adil

    2017-08-01

    Two central debates within Moral Foundations Theory concern (1) which moral foundations are core and (2) how conflict between ideological camps stemming from valuing different moral foundations can be resolved. Previous studies have attempted to answer the first question by imposing cognitive load on participants to direct them toward intuitive and automatic thought. However, this method has limitations and has produced mixed findings. In the present research, in two experiments, instead of directing participants toward intuitive thought, we tested the effects of activating high-effort, analytic thought on participants' moral foundations. In both experiments, analytic thought activation caused participants to value individualizing foundations greater than the control condition. This effect was not qualified by participants' political orientation. No effect was observed on binding foundations. The results are consistent with the idea that upholding individualizing foundations requires mental effort and may provide the basis for reconciliation between different ideological camps. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Heat transfer with hockey-stick steam generator. [LMFBR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, E; Gabler, M J

    1977-11-01

The hockey-stick modular design concept is a good answer to future needs for reliable, economic LMFBR steam generators. The concept was successfully demonstrated in the 30 MWt MSG test unit; scaled-up versions are currently in fabrication for CRBRP usage, and further scaling has been accomplished for PLBR applications. Design and performance characteristics are presented for the three generations of hockey-stick steam generators. The key features of the design are presented based on extensive analytical effort backed up by extensive ancillary test data. The bases for and the actual performance evaluations are presented with emphasis on the CRBRP design. The design effort on these units has resulted in the development of analytical techniques that are directly applicable to steam generators for any LMFBR application. In conclusion, the hockey-stick steam generator concept has been proven to perform both thermally and hydraulically as predicted. The heat transfer characteristics are well defined, and proven analytical techniques are available, as are personnel experienced in their use.

  14. NASA advanced turboprop research and concept validation program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlow, J.B. Jr.; Sievers, G.K.

    1988-01-01

    NASA has determined by experimental and analytical effort that use of advanced turboprop propulsion instead of the conventional turbofans in the older narrow-body airline fleet could reduce fuel consumption for this type of aircraft by up to 50 percent. In cooperation with industry, NASA has defined and implemented an Advanced Turboprop (ATP) program to develop and validate the technology required for these new high-speed, multibladed, thin, swept propeller concepts. This paper presents an overview of the analysis, model-scale test, and large-scale flight test elements of the program together with preliminary test results, as available.

  15. Biologically inspired technologies using artificial muscles

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph

    2005-01-01

One of the newest fields of biomimetics is electroactive polymers (EAP), also known as artificial muscles. To take advantage of these materials, efforts are made worldwide to establish a strong infrastructure addressing the need for comprehensive analytical modeling of their response mechanism and to develop effective processing and characterization techniques. The field is still emerging and robust materials are not yet readily available; however, in recent years significant progress has been made and commercial products have already started to appear. This paper covers the current state-of-the-art and the challenges in making artificial muscles, along with their potential biomimetic applications.

  16. Analytic gravitational waveforms for generic precessing compact binaries

    NASA Astrophysics Data System (ADS)

    Chatziioannou, Katerina; Klein, Antoine; Cornish, Neil; Yunes, Nicolas

    2017-01-01

    Gravitational waves from compact binaries are subject to amplitude and phase modulations arising from interactions between the angular momenta of the system. Failure to account for such spin-precession effects in gravitational wave data analysis could hinder detection and completely ruin parameter estimation. In this talk I will describe the construction of closed-form, frequency-domain waveforms for fully-precessing, quasi-circular binary inspirals. The resulting waveforms can model spinning binaries of arbitrary spin magnitudes, spin orientations, and masses during the inspiral phase. I will also describe ongoing efforts to extend these inspiral waveforms to the merger and ringdown phases.

  17. Learning the landscape: implementation challenges of primary care innovators around cancer survivorship care.

    PubMed

    O'Malley, Denalee; Hudson, Shawna V; Nekhlyudov, Larissa; Howard, Jenna; Rubinstein, Ellen; Lee, Heather S; Overholser, Linda S; Shaw, Amy; Givens, Sarah; Burton, Jay S; Grunfeld, Eva; Parry, Carly; Crabtree, Benjamin F

    2017-02-01

    This study describes the experiences of early implementers of primary care-focused cancer survivorship delivery models. Snowball sampling was used to identify innovators. Twelve participants (five cancer survivorship primary care innovators and seven content experts) attended a working conference focused on cancer survivorship population strategies and primary care transformation. Data included meeting discussion transcripts/field notes, transcribed in-depth innovator interviews, and innovators' summaries of care models. We used a multistep immersion/crystallization analytic approach, guided by a primary care organizational change model. Innovative practice models included: (1) a consultative model in a primary care setting; (2) a primary care physician (PCP)-led, blended consultative/panel-based model in an oncology setting; (3) an oncology nurse navigator in a primary care practice; and (4) two subspecialty models where PCPs in a general medical practice dedicated part of their patient panel to cancer survivors. Implementation challenges included (1) lack of key stakeholder buy-in; (2) practice resources allocated to competing (non-survivorship) change efforts; and (3) competition with higher priority initiatives incentivized by payers. Cancer survivorship delivery models are potentially feasible in primary care; however, significant barriers to widespread implementation exist. Implementation efforts would benefit from increasing the awareness and potential value-add of primary care-focused strategies to address survivors' needs. Current models of primary care-based cancer survivorship care may not be sustainable. Innovative strategies to provide quality care to this growing population of survivors need to be developed and integrated into primary care settings.

  18. Performance modeling for large database systems

    NASA Astrophysics Data System (ADS)

    Schaar, Stephen; Hum, Frank; Romano, Joe

    1997-02-01

    One of the unique approaches Science Applications International Corporation took to meet performance requirements was to start the modeling effort during the proposal phase of the Interstate Identification Index/Federal Bureau of Investigations (III/FBI) project. The III/FBI Performance Model uses analytical modeling techniques to represent the III/FBI system. Inputs to the model include workloads for each transaction type, record size for each record type, number of records for each file, hardware envelope characteristics, engineering margins and estimates for software instructions, memory, and I/O for each transaction type. The model uses queuing theory to calculate the average transaction queue length. The model calculates a response time and the resources needed for each transaction type. Outputs of the model include the total resources needed for the system, a hardware configuration, and projected inherent and operational availability. The III/FBI Performance Model is used to evaluate what-if scenarios and allows a rapid response to engineering change proposals and technical enhancements.
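The queuing-theory calculation described in this record can be illustrated with a textbook M/M/1 model. This is a minimal sketch with hypothetical transaction rates, not the actual III/FBI performance model, which tracked per-transaction resources and availability as well:

```python
# Illustrative M/M/1 queuing metrics (hypothetical rates, not III/FBI data).
def mm1_metrics(arrival_rate, service_rate):
    """Return (utilization, mean queue length, mean response time) for an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = arrival_rate / service_rate          # server utilization
    lq = rho ** 2 / (1.0 - rho)                # mean number waiting in queue
    w = 1.0 / (service_rate - arrival_rate)    # mean response time (wait + service)
    return rho, lq, w

# Example: 8 transactions/s arriving at a server that completes 10 transactions/s.
rho, lq, w = mm1_metrics(8.0, 10.0)
```

With these assumed rates the server runs at 80% utilization, and response time degrades sharply as the arrival rate approaches the service rate, which is exactly the kind of what-if sensitivity such a model exposes.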

  19. Estimating estuarine salt intrusion using an analytical and a full hydrodynamic simulation - a comparison for the Ma Estuary

    NASA Astrophysics Data System (ADS)

    Nguyen, Duc Anh; Cat Vu, Minh; Willems, Patrick; Monbaliu, Jaak

    2017-04-01

Salt intrusion is the most acute problem for irrigation water quality in coastal regions during dry seasons. The use of numerical hydrodynamic models is widespread and has become the prevailing approach to simulate the salinity distribution in an estuary. Despite its power to estimate both spatial and temporal salinity variations along the estuary, this approach also has its drawbacks. The high computational cost, and the need for detailed hydrological, bathymetric and tidal datasets, put some limits on its usability in particular case studies. In poor data environments, analytical salt intrusion models are more widely used as they require less data and further reduce the computational effort. There are, however, few studies in which a more comprehensive comparison is made between the performance of a numerical hydrodynamic and an analytical model. In this research the multi-channel Ma Estuary in Vietnam is considered as a case study. Both the analytical and the hydrodynamic simulation approaches have been applied and were found capable of mimicking the longitudinal salt distribution along the estuary. The data to construct the MIKE11 model include observations provided by a network of fixed hydrological stations and cross-section measurements along the estuary. The analytical model is developed in parallel but based on information obtained from the hydrological network only (typical for a poor data environment). Note that the two convergence length parameters of this simplified model are usually extracted from topography data, including cross-sectional area and width along the estuary. Furthermore, freshwater discharge data are needed, but these are gauged further upstream, outside of the tidal region, and are unable to reflect the individual flows entering the multi-channel estuary. In order to tackle the poor data environment limitations, a new approach was needed to calibrate the two estuary geometry parameters of the parsimonious salt intrusion model.
Compared to the values based on a field survey of the estuary, the calibrated cross-sectional convergence length values agree closely. By assuming a linear relation between the inverses of the individual flows entering the estuary and the inverses of the sum of flows gauged further upstream, the individual flows can be assessed. Evaluation of the two modeling approaches at high water slack shows that they produce similar results. They explain the salinity distribution along the Ma Estuary reasonably well, with Nash-Sutcliffe efficiency values at gauging stations along the estuary of 0.50 or higher. These performances demonstrate the predictive power of the simplified salt intrusion model and of the proposed parameter/input estimation approach, even with the poorer data.
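The convergence length parameters mentioned above describe how an estuary's cross-sectional area tapers exponentially upstream, A(x) = A0·exp(-x/a). A minimal sketch of extracting such a parameter from survey data by log-linear least squares (synthetic numbers, not the Ma Estuary measurements, and not the paper's discharge-based calibration):

```python
import math

def fit_convergence_length(x, area):
    """Fit A(x) = A0 * exp(-x / a) by least squares on log(A); return (A0, a)."""
    n = len(x)
    log_a = [math.log(v) for v in area]
    mx = sum(x) / n
    my = sum(log_a) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, log_a))
    slope = sxy / sxx                      # slope of log(A) vs x equals -1/a
    a0 = math.exp(my - slope * mx)
    return a0, -1.0 / slope

# Synthetic estuary: mouth area 5000 m^2, convergence length 30 km.
xs = [0.0, 10e3, 20e3, 30e3, 40e3]
areas = [5000.0 * math.exp(-xi / 30e3) for xi in xs]
a0_fit, a_fit = fit_convergence_length(xs, areas)
```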

  20. A comparative study on different methods of automatic mesh generation of human femurs.

    PubMed

    Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A

    1998-01-01

    The aim of this study was to evaluate comparatively five methods for automating mesh generation (AMG) when used to mesh a human femur. The five AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the element onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh which builds cubic 8-node elements directly from CT images; and hexa mesh that automatically generated hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful to assess the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur") and the finite element analyses predictions were compared to experimental measurements. All methods were evaluated in terms of human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and it allows a tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.

  1. Desensitized Optimal Filtering and Sensor Fusion Toolkit

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.

    2015-01-01

Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as a Monte Carlo analysis capability, is included to enable statistical performance evaluations.
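The filtering machinery at the heart of such a toolkit can be illustrated with the scalar Kalman measurement update. This is a textbook sketch, not AMA's desensitized implementation, and the numbers are illustrative:

```python
def kalman_update_1d(x_prior, p_prior, z, r):
    """Scalar Kalman measurement update.

    x_prior, p_prior: prior state estimate and its variance
    z, r: measurement and measurement-noise variance
    Returns (posterior estimate, posterior variance).
    """
    k = p_prior / (p_prior + r)          # Kalman gain weighs prior vs measurement
    x_post = x_prior + k * (z - x_prior)  # blend toward the measurement
    p_post = (1.0 - k) * p_prior          # uncertainty shrinks after the update
    return x_post, p_post

# Prior estimate 10.0 (variance 4.0); measurement 12.0 (noise variance 1.0).
x_post, p_post = kalman_update_1d(10.0, 4.0, 12.0, 1.0)
```

The desensitization mentioned in the record modifies the gain computation to penalize sensitivity to uncertain model parameters; the basic update structure shown here is unchanged.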

  2. Learning dynamics in social dilemmas

    PubMed Central

    Macy, Michael W.; Flache, Andreas

    2002-01-01

The Nash equilibrium, the main solution concept in analytical game theory, cannot make precise predictions about the outcome of repeated mixed-motive games. Nor can it tell us much about the dynamics by which a population of players moves from one equilibrium to another. These limitations, along with concerns about the cognitive demands of forward-looking rationality, have motivated efforts to explore backward-looking alternatives to analytical game theory. Most of the effort has been invested in evolutionary models of population dynamics. We shift attention to a learning-theoretic alternative. Computational experiments with adaptive agents identify a fundamental solution concept for social dilemmas, stochastic collusion, based on a random walk from a self-limiting noncooperative equilibrium into a self-reinforcing cooperative equilibrium. However, we show that this solution is viable only within a narrow range of aspiration levels. Below the lower threshold, agents are pulled into a deficient equilibrium that is a stronger attractor than mutual cooperation. Above the upper threshold, agents are dissatisfied with mutual cooperation. Aspirations that adapt with experience (producing habituation to stimuli) do not gravitate into the window of viability; rather, they are the worst of both worlds. Habituation destabilizes cooperation and stabilizes defection. Results from the two-person problem suggest that applications to multiplex and embedded relationships will yield unexpected insights into the global dynamics of cooperation in social dilemmas. PMID:12011402
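The aspiration-based learning rule underlying this kind of model can be sketched with a simplified Bush-Mosteller update. This is an assumed minimal form for illustration; the paper's agents additionally normalize stimuli and, in some conditions, adapt the aspiration level itself:

```python
def bush_mosteller_update(prob, payoff, aspiration, rate=0.5):
    """Aspiration-based reinforcement update of the probability of repeating an action.

    A payoff above the aspiration level reinforces the action (probability moves
    toward 1); a payoff below it inhibits the action (probability moves toward 0).
    """
    stimulus = payoff - aspiration
    if stimulus >= 0:
        return prob + rate * stimulus * (1.0 - prob)
    return prob + rate * stimulus * prob

p = 0.5
p_up = bush_mosteller_update(p, payoff=3.0, aspiration=2.0)    # satisfying outcome
p_down = bush_mosteller_update(p, payoff=1.0, aspiration=2.0)  # dissatisfying outcome
```

The "window of viability" result corresponds to where the aspiration sits relative to the game's payoffs: if it lies below mutual defection's payoff, defection self-reinforces; if above mutual cooperation's payoff, even cooperation dissatisfies.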

  3. Qualitative mathematical models to support ecosystem-based management of Australia's Northern Prawn Fishery.

    PubMed

    Dambacher, Jeffrey M; Rothlisberg, Peter C; Loneragan, Neil R

    2015-01-01

A major decline in the catch of the banana prawn [shrimp], Penaeus (Fenneropenaeus) merguiensis, occurred over a six-year period in the Weipa region of the northeastern Gulf of Carpentaria, Australia. Three main hypotheses have been developed to explain this decline: (1) prawn recruitment collapsed due to overfishing; (2) recruitment collapsed due to a change in the prawn's environment; and (3) adult banana prawns were still present, but fishers could no longer effectively find or catch them. Qualitative mathematical models were used to link population biology, environmental factors, and fishery dynamics to evaluate the alternative hypotheses. This modeling approach provides the means to rapidly integrate knowledge across disciplines and consider alternative hypotheses about how the structure and function of an ecosystem affects its dynamics. Alternative models were constructed to address the different hypotheses and also to encompass a diversity of opinion about the underlying dynamics of the system. Key findings from these analyses are that: instability in the system can arise when discarded fishery bycatch supports relatively high predation pressure; system stability can be enhanced by management of fishing effort or stock catchability; catch per unit effort is not necessarily a reliable indicator of stock abundance; a change in early-season rainfall should affect all stages in the banana prawn's life cycle; and a reduced catch in the Weipa region can create and reinforce a shift in fishing effort away from Weipa. Results from the models informed an approach to test the hypotheses (i.e., an experimental fishing program), and promoted understanding of the system among researchers, management agencies, and industry. The analytical tools developed in this work to address stages of a prawn life cycle and fishery dynamics are generally applicable to any exploited natural resource.
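Qualitative modeling of this kind judges the stability of a community matrix from the signs and feedback structure of its interactions. For a two-variable subsystem the Routh-Hurwitz conditions reduce to a one-liner; the sketch below is a toy predator-prey example, not the paper's full signed-digraph analysis of the fishery:

```python
def is_stable_2x2(a11, a12, a21, a22):
    """Routh-Hurwitz stability test for a 2x2 community (Jacobian) matrix:
    locally stable iff trace < 0 and determinant > 0."""
    trace = a11 + a22
    det = a11 * a22 - a12 * a21
    return trace < 0 and det > 0

# Self-limited prey harmed by a predator that benefits from it: stabilizing.
stable = is_stable_2x2(-1.0, -1.0, 1.0, -0.5)
# Mutual positive feedback with weak self-limitation: destabilizing.
unstable = is_stable_2x2(-0.1, 1.0, 1.0, -0.1)
```

In loop-analysis terms, the bycatch finding in the record corresponds to a positive feedback loop (discards subsidize predators) weakening exactly these stability conditions.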

  4. Evaluation Framework for NASA's Educational Outreach Programs

    NASA Technical Reports Server (NTRS)

    Berg, Rick; Booker, Angela; Linde, Charlotte; Preston, Connie

    1999-01-01

    The objective of the proposed work is to develop an evaluation framework for NASA's educational outreach efforts. We focus on public (rather than technical or scientific) dissemination efforts, specifically on Internet-based outreach sites for children.The outcome of this work is to propose both methods and criteria for evaluation, which would enable NASA to do a more analytic evaluation of its outreach efforts. The proposed framework is based on IRL's ethnographic and video-based observational methods, which allow us to analyze how these sites are actually used.

  5. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
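The per-ray work in such a tracer is repeated application of the vector form of Snell's law at each ocular interface. A minimal sketch (the refractive index here is a commonly quoted approximate value for the cornea; surface intersection and mesh geometry are omitted):

```python
import math

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (pointing toward
    the incoming ray); eta = n1 / n2 is the ratio of refractive indices.
    Returns the refracted unit direction, or None on total internal reflection."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))       # cosine of incidence angle
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)          # Snell: sin^2 of transmitted angle
    if sin2_t > 1.0:
        return None                                     # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni for di, ni in zip(d, n))

# Normal incidence, air (n ~ 1.0) into cornea (n ~ 1.376): direction is unchanged.
out = refract((0.0, 0.0, 1.0), (0.0, 0.0, -1.0), 1.0 / 1.376)
```

On a GPU this function would run per ray over millions of rays in parallel, which is what makes minute-scale retinal image synthesis feasible.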

  6. Learning Analytics at Low Cost: At-Risk Student Prediction with Clicker Data and Systematic Proactive Interventions

    ERIC Educational Resources Information Center

    Choi, Samuel P. M.; Lam, S. S.; Li, Kam Cheong; Wong, Billy T. M.

    2018-01-01

    While learning analytics (LA) practices have been shown to be practical and effective, most of them require a huge amount of data and effort. This paper reports a case study which demonstrates the feasibility of practising LA at a low cost for instructors to identify at-risk students in an undergraduate business quantitative methods course.…

  7. PROVIDING PLANT DATA ANALYTICS THROUGH A SEAMLESS DIGITAL ENVIRONMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bly, Aaron; Oxstrand, Johanna

As technology continues to evolve and become more integrated into a worker’s daily routine in the nuclear power industry, the need for easy access to data becomes a priority. Not only does the need for data increase, but the amount of data collected increases. In most cases the data is collected and stored in various software applications, many of which are legacy systems, which do not offer any option to access the data except through the application’s user interface. Furthermore, the data gets grouped in “silos” according to work function and not necessarily by subject. Hence, in order to access all the information needed for a particular task or analysis, one may have to access multiple applications to gather all the data needed. The industry and the research community have identified the need for a digital architecture and, more importantly, the need for a Seamless Digital Environment (SDE). An SDE provides a means to access multiple applications, gather the data points needed, conduct the analysis requested, and present the result to the user with minimal or no effort by the user. In addition, the nuclear utilities have identified the need for research focused on data analytics. The effort should develop and evaluate use cases for data mining and analytics for employing information from plant sensors and databases for use in developing improved business analytics. Idaho National Laboratory is leading such an effort, which is conducted in close collaboration with vendors, nuclear utilities, the Institute of Nuclear Power Operations, and the Electric Power Research Institute. The goal of the study is to research potential approaches to building an analytics solution for equipment reliability, on a small scale, focusing on either a single piece of equipment or a single system. The analytics solution will likely consist of a data integration layer, a predictive and machine learning layer, and a user interface layer that will display the output of the analysis in a straightforward, easy to consume manner. This paper will describe the study and the initial results.

  8. Combined monitoring, decision and control model for the human operator in a command and control desk

    NASA Technical Reports Server (NTRS)

    Muralidharan, R.; Baron, S.

    1978-01-01

A report is given on the ongoing efforts to model the human operator in the context of the task during the enroute/return phases in the ground-based control of multiple flights of remotely piloted vehicles (RPVs). The approach employed here uses models that have their analytical bases in control theory and in statistical estimation and decision theory. In particular, it draws heavily on the models and concepts of the optimal control model (OCM) of the human operator. The OCM is being extended into a combined monitoring, decision, and control model (DEMON) of the human operator by infusing decision-theoretic notions that make it suitable for application to problems in which human control actions are infrequent and in which monitoring and decision-making are the operator's main activities. Some results obtained with a specialized version of DEMON for the RPV control problem are included.

  9. A model for gas and nutrient exchange in the chorionic vasculature system of the mouse placenta

    NASA Astrophysics Data System (ADS)

    Mirbod, Parisa; Sled, John

    2015-11-01

The aim of this study is to develop an analytical model for the oxygen and nutrient transport from the umbilical cord to the small villous capillaries. Nutrient and carbon dioxide removal from the fetal cotyledons in the mouse placental system has also been considered. This model describes the mass transfer between the fetal and the maternal red blood cells in the chorionic arterial vasculature system. The model reveals the detailed fetal vasculature and its geometry and the precise mechanisms of mass transfer through the placenta. The dimensions of the villous capillaries, the total length of the villous trees, the total villi surface area, and the total resistance to mass transport in the fetal villous trees have also been defined. This is the first effort to explain, from a fluid dynamics point of view, why there are at least 7 lobules in the mouse placenta.

  10. Rotary Motors Actuated by Traveling Ultrasonic Flexural Waves

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph; Bao, Xiaoqi; Grandia, Willem

    1999-01-01

Efficient miniature actuators that are compact and consume low power are needed to drive space and planetary mechanisms in future NASA missions. Ultrasonic rotary motors (USMs) have the potential to meet this NASA need, and they are being developed as actuators for miniature telerobotic applications. These motors have emerged in commercial products, but they need to be adapted for operation in the harsh space environments that include cryogenic temperatures and vacuum, and they also require effective analytical tools for the design of efficient motors. A finite element analytical model was developed to examine the excitation of a flexural plate wave traveling in a piezoelectrically actuated rotary motor. The model uses 3D finite element and equivalent circuit models that are applied to predict the excitation frequency and modal response of the stator. This model incorporates the details of the stator including the teeth, piezoelectric ceramic, geometry, bonding layer, etc. The theoretical predictions were corroborated experimentally for the stator. In parallel, efforts have been made to determine the thermal and vacuum performance of these motors. Experiments have shown that the motor can sustain at least 230 temperature cycles from 0 C to -90 C at 7 Torr pressure without significant performance change. Also, in an earlier study the motor lasted over 334 hours at -150 C and vacuum. To explore telerobotic applications for USMs, a robotic arm was constructed with such motors.

  11. Mathematical Modeling of Loop Heat Pipes with Multiple Capillary Pumps and Multiple Condensers. Part 1: Steady State Simulations

    NASA Technical Reports Server (NTRS)

    Hoang, Triem T.; OConnell, Tamara; Ku, Jentung

    2004-01-01

Loop Heat Pipes (LHPs) have proven themselves as reliable and robust heat transport devices for spacecraft thermal control systems. So far, the LHPs in earth-orbit satellites perform very well, as expected. Conventional LHPs usually consist of a single capillary pump for heat acquisition and a single condenser for heat rejection. Multiple pump/multiple condenser LHPs have been shown to function very well in ground testing. Nevertheless, the test results of a dual pump/condenser LHP also revealed that the dual LHP behaved in a complicated manner due to the interaction between the pumps and condensers. Thus more research is clearly needed before they are ready for 0-g deployment. One research area that perhaps compels immediate attention is the analytical modeling of LHPs, particularly of transient phenomena. Modeling a single pump/single condenser LHP is difficult enough. Only a handful of computer codes are available for both steady state and transient simulations of conventional LHPs. No previous effort was made to develop an analytical model (or even a complete theory) to predict the operational behavior of multiple pump/multiple condenser LHP systems. The current research project offers a basic theory of multiple pump/multiple condenser LHP operation. From it, a computer code was developed to predict the LHP saturation temperature in accordance with the system operating and environmental conditions.

  12. Advanced methods of structural and trajectory analysis for transport aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1995-01-01

    This report summarizes the efforts in two areas: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of trajectory optimization. The majority of the effort was spent in the structural weight area. A draft of 'Analytical Fuselage and Wing Weight Estimation of Transport Aircraft', resulting from this research, is included as an appendix.

  13. Recent Stirling engine loss-understanding results

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Thieme, Lanny G.; Dudenhoefer, James E.

    1990-01-01

    For several years, NASA and other U.S. government agencies have been funding experimental and analytical efforts to improve the understanding of Stirling thermodynamic losses. NASA's objective is to improve Stirling engine design capability to support the development of new engines for space power. An overview of these efforts was last given at the 1988 IECEC. Recent results of this research are reviewed.

  14. Dense Regions in Supersonic Isothermal Turbulence

    NASA Astrophysics Data System (ADS)

    Robertson, Brant; Goldreich, Peter

    2018-02-01

    The properties of supersonic isothermal turbulence influence a variety of astrophysical phenomena, including the structure and evolution of star-forming clouds. This work presents a simple model for the structure of dense regions in turbulence in which the density distribution behind isothermal shocks originates from rough hydrostatic balance between the pressure gradient behind the shock and its deceleration from ram pressure applied by the background fluid. Using simulations of supersonic isothermal turbulence and idealized waves moving through a background medium, we show that the structural properties of dense, shocked regions broadly agree with our analytical model. Our work provides a new conceptual picture for describing the dense regions, which complements theoretical efforts to understand the bulk statistical properties of turbulence and attempts to model the more complex features of star-forming clouds like magnetic fields, self-gravity, or radiative properties.
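The balance invoked in this record can be written down in a schematic one-dimensional form (a sketch of the argument, not the paper's full derivation): for isothermal gas the pressure is $P = \rho c_s^2$, and rough hydrostatic balance between the pressure gradient behind the shock and a deceleration $a$ supplied by ram pressure from the background fluid gives

```latex
c_s^2 \frac{d\rho}{dx} = -\rho\, a
\quad\Longrightarrow\quad
\rho(x) = \rho_{\mathrm{post}}\, e^{-a x / c_s^2},
```

so the dense post-shock layer has a characteristic scale height $h \sim c_s^2 / a$ set by the deceleration rather than by self-gravity.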

  15. Confirmatory factor analysis of posttraumatic stress symptoms in sexually harassed women.

    PubMed

    Palmieri, Patrick A; Fitzgerald, Louise F

    2005-12-01

    Posttraumatic stress disorder (PTSD) factor analytic research to date has not provided a clear consensus on the structure of posttraumatic stress symptoms. Seven hypothesized factor structures were evaluated using confirmatory factor analysis of the Posttraumatic Stress Disorder Checklist, a paper-and-pencil measure of posttraumatic stress symptom severity, in a sample of 1,218 women who experienced a broad range of workplace sexual harassment. The model specifying correlated re-experiencing, effortful avoidance, emotional numbing, and hyperarousal factors provided the best fit to the data. Virtually no support was obtained for the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV; American Psychiatric Association, 1994) three-factor model of re-experiencing, avoidance, and hyperarousal factors. Different patterns of correlations with external variables were found for the avoidance and emotional numbing factors, providing further validation of the supported model.

  16. Accuracy of Binary Black Hole Waveform Models for Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team

    2016-03-01

    Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based on post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on two questions: (i) How well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) How accurately do they model binaries with parameters outside their range of calibration. These results guide the choice of templates for future GW searches, and motivate future modeling efforts.

  17. Parametric Cost Models for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney

    2010-01-01

    Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single-variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.

  18. The European computer model for optronic system performance prediction (ECOMOS)

    NASA Astrophysics Data System (ADS)

    Keßler, Stefan; Bijl, Piet; Labarre, Luc; Repasi, Endre; Wittenstein, Wolfgang; Bürsing, Helge

    2017-10-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is presented. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given, as well as a short discussion of validation tests and an outlook on the future potential of simulation for sensor assessment.

  19. Low-frequency analogue Hawking radiation: The Korteweg-de Vries model

    NASA Astrophysics Data System (ADS)

    Coutant, Antonin; Weinfurtner, Silke

    2018-01-01

    We derive analytic expressions for the low-frequency properties of the analogue Hawking radiation in a general weak-dispersive medium. A thermal low-frequency part of the spectrum is expected even when dispersive effects become significant. We consider the two most common classes of weak-dispersive media and investigate all possible anomalous scattering processes due to inhomogeneous background flows. We first argue that under minimal assumptions, the scattering processes in near-critical flows are well described by a linearized Korteweg-de Vries equation. Within our theoretical model grey-body factors are neglected, that is, the mode comoving with the flow decouples from the other ones. We also exhibit a flow example with an exact expression for the effective temperature. We see that this temperature coincides with the Hawking one only when the dispersive length scale is much smaller than the flow gradient scale. We apply the same method in inhomogeneous flows without an analogue horizon. In this case, the spectrum coefficients decrease with decreasing frequencies. Our findings are in agreement with previous numerical works, generalizing their findings to arbitrary flow profiles. Our analytical expressions provide estimates to guide ongoing experimental efforts.

  20. Quality indicators in laboratory medicine: a fundamental tool for quality and patient safety.

    PubMed

    Plebani, Mario; Sciacovelli, Laura; Marinova, Mariela; Marcuccitti, Jessica; Chiozza, Maria Laura

    2013-09-01

    The identification of reliable quality indicators (QIs) is a crucial step in enabling users to quantify the quality of laboratory services. The current lack of attention to extra-laboratory factors is in stark contrast with the body of evidence pointing to the multitude of errors that continue to occur in the pre- and post-analytical phases. Different QIs and terminologies are currently used and, therefore, there is the need to harmonize proposed QIs. A model of quality indicators (MQI) has been consensually developed by a group of clinical laboratories according to a project launched by a working group of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC). The model includes 57 QIs related to key processes (35 for the pre-, 7 for the intra- and 15 for the post-analytical phase) and 3 related to support processes. The developed MQI and the data collected provide evidence of the feasibility of the project to harmonize currently available QIs, but further efforts should be made to involve more clinical laboratories and to collect a more consistent amount of data. Copyright © 2012 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  1. A Decision Analytic Approach to Exposure-Based Chemical Prioritization

    PubMed Central

    Mitchell, Jade; Pabon, Nicolas; Collier, Zachary A.; Egeghy, Peter P.; Cohen-Hubal, Elaine; Linkov, Igor; Vallero, Daniel A.

    2013-01-01

    The manufacture of novel synthetic chemicals has increased in volume and variety, but often the environmental and health risks are not fully understood in terms of toxicity and, in particular, exposure. While efforts to assess risks have generally been effective when sufficient data are available, the hazard and exposure data necessary to assess risks adequately are unavailable for the vast majority of chemicals in commerce. The US Environmental Protection Agency has initiated the ExpoCast Program to develop tools for rapid chemical evaluation based on potential for exposure. In this context, a model is presented in which chemicals are evaluated based on inherent chemical properties and behaviorally-based usage characteristics over the chemical’s life cycle. These criteria are assessed and integrated within a decision analytic framework, facilitating rapid assessment and prioritization for future targeted testing and systems modeling. A case study outlines the prioritization process using 51 chemicals. The results show a preliminary relative ranking of chemicals based on exposure potential. The strength of this approach is the ability to integrate relevant statistical and mechanistic data with expert judgment, allowing for an initial tier assessment that can further inform targeted testing and risk management strategies. PMID:23940664
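    The decision-analytic integration step described above can be illustrated with a toy weighted-sum ranking: normalize each criterion across chemicals, apply expert-judgment weights, and sort by composite score. This is a minimal sketch of the general technique, not the ExpoCast methodology; the criterion names, weights, and values are hypothetical.

    ```python
    # Toy multi-criteria prioritization sketch (hypothetical data, not ExpoCast's).
    # Each criterion is min-max normalized to [0, 1] so that heterogeneous units
    # are comparable, then combined by expert-judgment weights.

    def prioritize(chemicals, weights):
        """chemicals: {name: {criterion: value}}; higher value = more exposure potential."""
        criteria = list(weights)
        lo = {c: min(v[c] for v in chemicals.values()) for c in criteria}
        hi = {c: max(v[c] for v in chemicals.values()) for c in criteria}

        def norm(c, x):
            return 0.0 if hi[c] == lo[c] else (x - lo[c]) / (hi[c] - lo[c])

        scores = {
            name: sum(weights[c] * norm(c, vals[c]) for c in criteria)
            for name, vals in chemicals.items()
        }
        # rank from highest to lowest composite exposure-potential score
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    chems = {
        "chemA": {"production_volume": 9.0, "persistence": 0.2, "consumer_use": 0.8},
        "chemB": {"production_volume": 2.0, "persistence": 0.9, "consumer_use": 0.1},
        "chemC": {"production_volume": 5.0, "persistence": 0.5, "consumer_use": 0.9},
    }
    weights = {"production_volume": 0.4, "persistence": 0.3, "consumer_use": 0.3}
    ranking = prioritize(chems, weights)
    ```

    The output is a relative ordering only; as in the abstract, such a first-tier ranking is meant to direct targeted testing, not to replace a risk assessment.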

  2. Optical bandgap of semiconductor nanostructures: Methods for experimental data analysis

    NASA Astrophysics Data System (ADS)

    Raciti, R.; Bahariqushchi, R.; Summonte, C.; Aydinli, A.; Terrasi, A.; Mirabella, S.

    2017-06-01

    Determination of the optical bandgap (Eg) in semiconductor nanostructures is a key issue in understanding the extent of quantum confinement effects (QCE) on electronic properties, and it usually involves some analytical approximation in experimental data reduction and in modeling of the light absorption processes. Here, we compare some of the analytical procedures frequently used to evaluate the optical bandgap from reflectance (R) and transmittance (T) spectra. Ge quantum wells and quantum dots embedded in SiO2 were produced by plasma enhanced chemical vapor deposition, and light absorption was characterized by UV-Vis/NIR spectrophotometry. R&T elaboration to extract the absorption spectra was conducted by two approximate methods (the single pass analysis, SPA, and the double pass analysis, DPA), followed by Eg evaluation through linear fit of Tauc or Cody plots. Direct fitting of R&T spectra through a Tauc-Lorentz oscillator model is used as comparison. Methods and data are also discussed in terms of the light absorption process in the presence of QCE. The reported data show that, despite the approximation, the DPA approach joined with the Tauc plot gives reliable results, with clear advantages in terms of computational effort and understanding of QCE.
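    The Tauc-plot step mentioned above is simple enough to sketch numerically: for a direct-allowed transition, (αE)² is linear in photon energy E above the gap, and the x-intercept of a linear fit over the absorption edge estimates Eg. The code below is an illustration of that generic procedure on synthetic data (the absorption model and gap value are assumptions for the example), not the authors' analysis code.

    ```python
    # Minimal Tauc-plot sketch: fit (alpha*E)^2 vs E over the absorption edge
    # and take the x-intercept as the bandgap estimate (direct-allowed case).

    def linear_fit(xs, ys):
        """Ordinary least-squares slope and intercept."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        slope = num / den
        return slope, my - slope * mx

    def tauc_gap(energies, alphas):
        """Return Eg as the x-intercept of the (alpha*E)^2 vs E linear fit."""
        ys = [(a * e) ** 2 for e, a in zip(energies, alphas)]
        slope, intercept = linear_fit(energies, ys)
        return -intercept / slope

    # synthetic direct-gap absorber with Eg = 1.5 eV: alpha ~ sqrt(E - Eg) / E
    E_g = 1.5
    energies = [1.6 + 0.05 * i for i in range(9)]
    alphas = [(e - E_g) ** 0.5 / e for e in energies]
    Eg_est = tauc_gap(energies, alphas)  # recovers ~1.5 eV on this synthetic data
    ```

    In practice the fit window must be restricted to the linear region of the plot; choosing that window is itself one of the approximations the paper compares.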

  3. Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control. Part 2; Validation Results

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Ottenstein, Laura; Douglas, Donya; Hoang, Triem

    2010-01-01

    Under NASA's New Millennium Program Space Technology 8 (ST 8) Project, Goddard Space Flight Center has conducted a Thermal Loop experiment to advance the maturity of the Thermal Loop technology from proof of concept to prototype demonstration in a relevant environment, i.e., from a technology readiness level (TRL) of 3 to a level of 6. The Thermal Loop is an advanced thermal control system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers designed for future small-system applications requiring low mass, low power, and compactness. The MLHP retains all features of state-of-the-art loop heat pipes (LHPs) and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. An MLHP breadboard was built and tested in the laboratory and thermal vacuum environments for the TRL 4 and TRL 5 validations, respectively, and an MLHP proto-flight unit was built and tested in a thermal vacuum chamber for the TRL 6 validation. In addition, an analytical model was developed to simulate the steady state and transient behaviors of the MLHP during various validation tests. The MLHP demonstrated excellent performance during experimental tests and the analytical model predictions agreed very well with experimental data. All success criteria at various TRLs were met. Hence, the Thermal Loop technology has reached a TRL of 6. This paper presents the validation results, both experimental and analytical, of such a technology development effort.

  4. Can cloud point-based enrichment, preservation, and detection methods help to bridge gaps in aquatic nanometrology?

    PubMed

    Duester, Lars; Fabricius, Anne-Lena; Jakobtorweihen, Sven; Philippe, Allan; Weigl, Florian; Wimmer, Andreas; Schuster, Michael; Nazar, Muhammad Faizan

    2016-11-01

    Coacervate-based techniques are intensively used in environmental analytical chemistry to enrich and extract different kinds of analytes. Most methods focus on the total content or the speciation of inorganic and organic substances. Size fractionation is less commonly addressed. Within coacervate-based techniques, cloud point extraction (CPE) is characterized by a phase separation of non-ionic surfactants dispersed in an aqueous solution when the respective cloud point temperature is exceeded. In this context, the feature article raises the following question: May CPE in future studies serve as a key tool (i) to enrich and extract nanoparticles (NPs) from complex environmental matrices prior to analyses and (ii) to preserve the colloidal status of unstable environmental samples? With respect to engineered NPs, a significant gap between environmental concentrations and size- and element-specific analytical capabilities is still visible. CPE may support efforts to overcome this "concentration gap" via analyte enrichment. In addition, most environmental colloidal systems are known to be unstable, dynamic, and sensitive to changes of the environmental conditions during sampling and sample preparation. This delivers a so far unsolved "sample preparation dilemma" in the analytical process. The authors are of the opinion that CPE-based methods have the potential to preserve the colloidal status of these unstable samples. Focusing on NPs, this feature article aims to support the discussion on the creation of a convention called the "CPE extractable fraction" by connecting current knowledge on CPE mechanisms and on available applications, via the uncertainties visible and modeling approaches available, with potential future benefits from CPE protocols.

  5. The Fermi-Pasta-Ulam Problem and Its Underlying Integrable Dynamics: An Approach Through Lyapunov Exponents

    NASA Astrophysics Data System (ADS)

    Benettin, G.; Pasquali, S.; Ponno, A.

    2018-05-01

    FPU models, in dimension one, are perturbations either of the linear model or of the Toda model; perturbations of the linear model include the usual β-model, perturbations of Toda include the usual α+β model. In this paper we explore and compare two families, or hierarchies, of FPU models, closer and closer to either the linear or the Toda model, by computing numerically, for each model, the maximal Lyapunov exponent χ. More precisely, we consider statistically typical trajectories and study the asymptotics of χ for large N (the number of particles) and small ε (the specific energy E/N), and find, for all models, asymptotic power laws χ ≃ Cε^a, with C and a depending on the model. The asymptotic regime sets in, in general, rather slowly, and producing accurate results requires a great computational effort. We also revisit and extend the analytic computation of χ introduced by Casetti, Livi and Pettini, originally formulated for the β-model. With great evidence the theory extends successfully to all models of the linear hierarchy, but not to models close to Toda.
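    Extracting the exponent a and prefactor C of a power law χ ≃ Cε^a from a set of measured Lyapunov exponents reduces to a linear fit in log-log coordinates. The sketch below illustrates that standard step on synthetic values (the data and the chosen exponent are assumptions for the example, not results from the paper).

    ```python
    # Log-log linear regression to recover (C, a) from chi ~ C * eps**a.
    import math

    def fit_power_law(eps, chi):
        """Least-squares fit of log(chi) = log(C) + a*log(eps)."""
        xs = [math.log(e) for e in eps]
        ys = [math.log(c) for c in chi]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        a = num / den                 # slope = exponent of the power law
        C = math.exp(my - a * mx)     # intercept = log of the prefactor
        return C, a

    # synthetic measurements obeying chi = 0.5 * eps**2 exactly
    eps = [1e-4, 1e-3, 1e-2, 1e-1]
    chi = [0.5 * e ** 2.0 for e in eps]
    C, a = fit_power_law(eps, chi)
    ```

    With real data the slow onset of the asymptotic regime noted in the abstract means the fit must be restricted to sufficiently small ε, which is precisely where accurate χ estimates are most expensive to obtain.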

  6. CAA Annual Report Fiscal Year 1998.

    DTIC Science & Technology

    1998-12-01

    (Table of contents excerpt) Studies; Quick Reaction Analyses & Projects; Technology Research and Analysis Support: Technology Research, Methodology Research; Publications, Graphics, and Reproduction; Analytical Efforts Completed Between FY90 and FY98; Appendix A: Annual Study/Work Evaluation. Chapter 2 highlights major studies and analysis activities which occurred in FY98. Chapter 3 is the total package of analytical summaries.

  7. Radiant Energy Measurements from a Scaled Jet Engine Axisymmetric Exhaust Nozzle for a Baseline Code Validation Case

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1994-01-01

    A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.

  8. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    NASA Astrophysics Data System (ADS)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate, analytical, approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.

  9. Barriers to Achieving Economies of Scale in Analysis of EHR Data. A Cautionary Tale.

    PubMed

    Sendak, Mark P; Balu, Suresh; Schulman, Kevin A

    2017-08-09

    Signed in 2009, the Health Information Technology for Economic and Clinical Health Act infused $28 billion of federal funds to accelerate adoption of electronic health records (EHRs). Yet, EHRs have produced mixed results and have even raised concern that the current technology ecosystem stifles innovation. We describe the development process and report initial outcomes of a chronic kidney disease analytics application that identifies high-risk patients for nephrology referral. The cost to validate and integrate the analytics application into clinical workflow was $217,138. Despite the success of the program, redundant development and validation efforts will require $38.8 million to scale the application across all multihospital systems in the nation. We address the shortcomings of current technology investments and distill insights from the technology industry. To yield a return on technology investments, we propose policy changes that address the underlying issues now being imposed on the system by an ineffective technology business model.

  10. Density functional theory for molecular and periodic systems using density fitting and continuous fast multipole method: Analytical gradients.

    PubMed

    Łazarski, Roman; Burow, Asbjörn Manfred; Grajciar, Lukáš; Sierka, Marek

    2016-10-30

    A full implementation of analytical energy gradients for molecular and periodic systems is reported in the TURBOMOLE program package within the framework of Kohn-Sham density functional theory using Gaussian-type orbitals as basis functions. Its key component is a combination of density fitting (DF) approximation and continuous fast multipole method (CFMM) that allows for an efficient calculation of the Coulomb energy gradient. For the exchange-correlation part the hierarchical numerical integration scheme (Burow and Sierka, Journal of Chemical Theory and Computation 2011, 7, 3097) is extended to energy gradients. Computational efficiency and asymptotic O(N) scaling behavior of the implementation is demonstrated for various molecular and periodic model systems, with the largest unit cell of hematite containing 640 atoms and 19,072 basis functions. The overall computational effort of energy gradient is comparable to that of the Kohn-Sham matrix formation. © 2016 Wiley Periodicals, Inc.

  11. Apparent annual survival estimates of tropical songbirds better reflect life history variation when based on intensive field methods

    USGS Publications Warehouse

    Martin, Thomas E.; Riordan, Margaret M.; Repin, Rimi; Mouton, James C.; Blake, William M.

    2017-01-01

    Aim: Adult survival is central to theories explaining latitudinal gradients in life history strategies. Life history theory predicts higher adult survival in tropical than north temperate regions given lower fecundity and parental effort. Early studies were consistent with this prediction, but standard-effort netting studies in recent decades suggested that apparent survival rates in temperate and tropical regions strongly overlap. Such results do not fit with life history theory. Targeted marking and resighting of breeding adults yielded higher survival estimates in the tropics, but this approach is thought to overestimate survival because it does not sample social and age classes with lower survival. We compared the effect of field methods on tropical survival estimates and their relationships with life history traits. Location: Sabah, Malaysian Borneo. Time period: 2008–2016. Major taxon: Passeriformes. Methods: We used standard-effort netting and resighted individuals of all social and age classes of 18 tropical songbird species over 8 years. We compared apparent survival estimates between these two field methods with differing analytical approaches. Results: Estimated detection and apparent survival probabilities from standard-effort netting were similar to those from other tropical studies that used standard-effort netting. Resighting data verified that a high proportion of individuals that were never recaptured in standard-effort netting remained in the study area, and many were observed breeding. Across all analytical approaches, addition of resighting yielded substantially higher survival estimates than did standard-effort netting alone. These apparent survival estimates were higher than for temperate zone species, consistent with latitudinal differences in life histories. Moreover, apparent survival estimates from addition of resighting, but not from standard-effort netting alone, were correlated with parental effort as measured by egg temperature across species. Main conclusions: Inclusion of resighting showed that standard-effort netting alone can negatively bias apparent survival estimates and obscure life history relationships across latitudes and among tropical species.

  12. Graphite fiber textile preform/copper matrix composites

    NASA Technical Reports Server (NTRS)

    Gilatovs, G. J.; Lee, Bruce; Bass, Lowell

    1995-01-01

    Graphite fiber reinforced/copper matrix composites have sufficiently high thermal conduction to make them candidate materials for critical heat transmitting and rejection components. The term textile composites arises because the preform is braided from fiber tows, conferring three-dimensional reinforcement and near net shape. The principal issues investigated in the past two years have centered on developing methods to characterize the preform and fabricated composite and on braidability. It is necessary to have an analytic structural description for both processing and final property modeling. The structure of the true 3-D braids used is complex and has required considerable effort to model. A structural mapping has been developed as a foundation for analytic models for thermal conduction and mechanical properties. The conductivity has contributions both from the copper and the reinforcement. The latter is accomplished by graphitization of the fibers; the higher the amount of graphitization, the greater the conduction. This is accompanied by an increase in the fiber modulus, which is desirable from a stiffness point of view but decreases the braidability; the highest conductivity fibers are simply too brittle to be braided. Considerable effort has been expended on determining the optimal braidability--conductivity region. While a number of preforms have been fabricated, one other complication intervenes; graphite and copper are immiscible, resulting in a poor mechanical bond and difficulties in infiltration by molten copper. The approach taken is to utilize a proprietary fiber coating process developed by TRA, of Salt Lake City, Utah, which forms an intermediary bond. A number of preforms have been fabricated from a variety of fiber types and two sets of these have been infiltrated with OFHC copper, one with the TRA coating and one without.
Mechanical tests have been performed using a small-scale specimen method and show the coated specimens to have superior mechanical properties. Final batches of preforms, including a finned, near net shape tube, are being fabricated and will be infiltrated before summer.

  13. Stakeholder perspectives on decision-analytic modeling frameworks to assess genetic services policy.

    PubMed

    Guzauskas, Gregory F; Garrison, Louis P; Stock, Jacquie; Au, Sylvia; Doyle, Debra Lochner; Veenstra, David L

    2013-01-01

    Genetic services policymakers and insurers often make coverage decisions in the absence of complete evidence of clinical utility and under budget constraints. We evaluated genetic services stakeholder opinions on the potential usefulness of decision-analytic modeling to inform coverage decisions, and asked them to identify genetic tests for decision-analytic modeling studies. We presented an overview of decision-analytic modeling to members of the Western States Genetic Services Collaborative Reimbursement Work Group and state Medicaid representatives and conducted directed content analysis and an anonymous survey to gauge their attitudes toward decision-analytic modeling. Participants also identified and prioritized genetic services for prospective decision-analytic evaluation. Participants expressed dissatisfaction with current processes for evaluating insurance coverage of genetic services. Some participants expressed uncertainty about their comprehension of decision-analytic modeling techniques. All stakeholders reported openness to using decision-analytic modeling for genetic services assessments. Participants were most interested in application of decision-analytic concepts to multiple-disorder testing platforms, such as next-generation sequencing and chromosomal microarray. Decision-analytic modeling approaches may provide a useful decision tool to genetic services stakeholders and Medicaid decision-makers.

  14. A brief compendium of correlations and analytical formulae for the thermal field generated by a heat source embedded in porous and purely-conductive media

    NASA Astrophysics Data System (ADS)

    Conti, P.; Testi, D.; Grassi, W.

    2017-11-01

    This work reviews and compares suitable models for the thermal analysis of forced convection over a heat source in a porous medium. The set of available models refers to an infinite medium in which a fluid moves over three different heat source geometries: the moving infinite line source, the moving finite line source, and the moving infinite cylindrical source. In this perspective, the present work presents a plain and handy compendium of the above-mentioned models for forced external convection in porous media; besides, we propose a dimensionless analysis to figure out the reciprocal deviation among available models, helping the selection of the most suitable one in the specific case of interest. Under specific conditions, the advection term becomes ineffective in terms of heat transfer performances, allowing the use of purely-conductive models. For that reason, available analytical and numerical solutions for purely-conductive media are also reviewed and compared, again, by dimensionless criteria. Therefore, one can choose the simplest solution, with significant benefits in terms of computational effort and interpretation of the results. The main outcomes presented in the paper are: the conditions under which the system can be considered subject to a Darcy flow, the minimal distance beyond which the finite dimension of the heat source does not affect the thermal field, and the critical fluid velocity needed to have a significant contribution of the advection term in the overall heat transfer process.
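    As an illustration of the purely-conductive end of this model family, the classical (static) infinite line source solution ΔT(r,t) = q/(4πλ)·E1(r²/(4at)) can be evaluated directly, where E1 is the exponential integral. The sketch below computes E1 from its convergent series; the parameter values are illustrative assumptions, not taken from the paper.

    ```python
    # Infinite line source temperature rise via the exponential integral E1,
    # evaluated by the series E1(x) = -gamma - ln(x) + sum_{n>=1} (-1)**(n+1) x**n / (n*n!).
    import math

    def expint_E1(x, terms=60):
        """Exponential integral E1(x) for x > 0 (series form, good for small x)."""
        gamma = 0.5772156649015329  # Euler-Mascheroni constant
        s = -gamma - math.log(x)
        term = 1.0
        for n in range(1, terms + 1):
            term *= -x / n            # term is now (-1)**n * x**n / n!
            s -= term / n             # adds (-1)**(n+1) * x**n / (n * n!)
        return s

    def line_source_dT(q, lam, a, r, t):
        """Temperature rise [K] at radius r [m] and time t [s] around a line
        load q [W/m] in a medium of conductivity lam [W/m/K] and
        thermal diffusivity a [m^2/s]."""
        return q / (4 * math.pi * lam) * expint_E1(r * r / (4 * a * t))

    # illustrative values: 50 W/m line load in soil-like ground, after one day
    dT = line_source_dT(q=50.0, lam=2.0, a=1e-6, r=0.06, t=3600 * 24)
    ```

    Against this kind of closed-form conductive baseline, the dimensionless criteria in the paper indicate when the extra complexity of the moving-source (advective) models is actually warranted.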

  15. Real-time monitoring of a microbial electrolysis cell using an electrical equivalent circuit model.

    PubMed

    Hussain, S A; Perrier, M; Tartakovsky, B

    2018-04-01

    Efforts in developing microbial electrolysis cells (MECs) resulted in several novel approaches for wastewater treatment and bioelectrosynthesis. Practical implementation of these approaches necessitates the development of an adequate system for real-time (on-line) monitoring and diagnostics of MEC performance. This study describes a simple MEC equivalent electrical circuit (EEC) model and a parameter estimation procedure, which enable such real-time monitoring. The proposed approach involves MEC voltage and current measurements during its operation with periodic power supply connection/disconnection (on/off operation) followed by parameter estimation using either numerical or analytical solution of the model. The proposed monitoring approach is demonstrated using a membraneless MEC with flow-through porous electrodes. Laboratory tests showed that changes in the influent carbon source concentration and composition significantly affect MEC total internal resistance and capacitance estimated by the model. Fast response of these EEC model parameters to changes in operating conditions enables the development of a model-based approach for real-time monitoring and fault detection.
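    To make the on/off estimation idea concrete, the sketch below fits a single-time-constant relaxation, assuming a simple Thevenin-style equivalent circuit (internal resistance R with parallel capacitance C) in which the voltage after supply disconnection decays exponentially with τ = RC. This circuit form and all numbers are assumptions for illustration; the paper's EEC and estimation procedure may differ.

    ```python
    # Hedged sketch: estimate R and C of an assumed RC equivalent circuit from
    # the voltage relaxation after a current step is switched off.
    import math

    def estimate_R_C(times, voltages, v_inf, i_step):
        """Fit ln(V - v_inf) vs t: slope gives tau = R*C, back-extrapolated
        step amplitude gives R = dV(0)/I, and then C = tau / R."""
        xs = times
        ys = [math.log(v - v_inf) for v in voltages]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        slope = num / den
        tau = -1.0 / slope
        dV0 = math.exp(my - slope * mx)   # V(0) - v_inf from the fit intercept
        R = dV0 / i_step
        return R, tau / R

    # synthetic relaxation: R = 20 ohm, C = 0.5 F, step current 0.05 A
    R_true, C_true, I = 20.0, 0.5, 0.05
    tau = R_true * C_true
    times = [float(i) for i in range(10)]
    voltages = [0.3 + R_true * I * math.exp(-t / tau) for t in times]
    R_est, C_est = estimate_R_C(times, voltages, v_inf=0.3, i_step=I)
    ```

    The appeal of this kind of fit for monitoring is that it needs only voltage and current traces already available during normal on/off operation, so R and C can be re-estimated continuously as operating conditions change.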

  16. Analytical and numerical modeling of the hearing system: Advances towards the assessment of hearing damage.

    PubMed

    De Paolis, Annalisa; Bikson, Marom; Nelson, Jeremy T; de Ru, J Alexander; Packer, Mark; Cardoso, Luis

    2017-06-01

    Hearing is an extremely complex phenomenon, involving a large number of interrelated variables that are difficult to measure in vivo. In order to investigate such process under simplified and well-controlled conditions, models of sound transmission have been developed through many decades of research. The value of modeling the hearing system is not only to explain the normal function of the hearing system and account for experimental and clinical observations, but to simulate a variety of pathological conditions that lead to hearing damage and hearing loss, as well as for development of auditory implants, effective ear protections and auditory hazard countermeasures. In this paper, we provide a review of the strategies used to model the auditory function of the external, middle, inner ear, and the micromechanics of the organ of Corti, along with some of the key results obtained from such modeling efforts. Recent analytical and numerical approaches have incorporated the nonlinear behavior of some parameters and structures into their models. Few models of the integrated hearing system exist; in particular, we describe the evolution of the Auditory Hazard Assessment Algorithm for Human (AHAAH) model, used for prediction of hearing damage due to high intensity sound pressure. Unlike the AHAAH model, 3D finite element models of the entire hearing system are not able yet to predict auditory risk and threshold shifts. It is expected that both AHAAH and FE models will evolve towards a more accurate assessment of threshold shifts and hearing loss under a variety of stimuli conditions and pathologies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  17. Deterrence and Risk Preferences in Sequential Attacker-Defender Games with Continuous Efforts.

    PubMed

    Payyappalli, Vineet M; Zhuang, Jun; Jose, Victor Richmond R

    2017-11-01

    Most attacker-defender games consider players as risk neutral, whereas in reality attackers and defenders may be risk seeking or risk averse. This article studies the impact of players' risk preferences on their equilibrium behavior and its effect on the notion of deterrence. In particular, we study the effects of risk preferences in a single-period, sequential game where a defender has a continuous range of investment levels that could be strategically chosen to potentially deter an attack. This article presents analytic results related to the effect of attacker and defender risk preferences on the optimal defense effort level and their impact on the deterrence level. Numerical illustrations and some discussion of the effect of risk preferences on deterrence and the utility of using such a model are provided, as well as sensitivity analysis of continuous attack investment levels and uncertainty in the defender's beliefs about the attacker's risk preference. A key contribution of this article is the identification of specific scenarios in which the defender using a model that takes into account risk preferences would be better off than a defender using a traditional risk-neutral model. This study provides insights that could be used by policy analysts and decisionmakers involved in investment decisions in security and safety. © 2017 Society for Risk Analysis.

  18. A model-based approach to determine the long-term effects of multiple interacting stressors on coral reefs.

    PubMed

    Blackwood, Julie C; Hastings, Alan; Mumby, Peter J

    2011-10-01

    The interaction between multiple stressors on Caribbean coral reefs, namely, fishing effort and hurricane impacts, is a key element in the future sustainability of reefs. We develop an analytic model of coral-algal interactions and explicitly consider grazing by herbivorous reef fish. Further, we consider changes in structural complexity, or rugosity, in addition to the direct impacts of hurricanes, which are implemented as stochastic jump processes. The model simulations consider various levels of fishing effort corresponding to several hurricane frequencies and impact levels dependent on geographic location. We focus on relatively short time scales so we do not explicitly include changes in ocean temperature, chemistry, or sea level rise. The general features of our approach would, however, apply to these other stressors and to the management of other systems in the face of multiple stressors. It is determined that the appropriate management policy, either local reef restoration or fisheries management, greatly depends on hurricane frequency and impact level. For sufficiently low hurricane impact and macroalgal growth rate, our results indicate that regions with lower-frequency hurricanes require stricter fishing regulations, whereas management in regions with higher-frequency hurricanes might be less concerned with enhancing grazing and instead consider whether local-scale restorative activities to increase vertical structure are cost-effective.
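    A minimal simulation in the spirit of this coral-macroalgae model, with hurricanes implemented as Poisson-timed jumps, might look as follows. The ODE form follows the well-known Mumby-type coral-algae equations, but all parameter values below are illustrative rather than the authors' calibration:

```python
import random

def simulate_reef(g, hurricane_rate=0.0, hurricane_loss=0.5,
                  years=100, dt=0.01, seed=1):
    """Euler simulation of macroalgae (M) / coral (C) cover with
    grazing pressure g (reduced by fishing effort) and hurricanes
    as Poisson-timed jumps that instantly remove a fraction of
    coral cover. Parameter values are illustrative only."""
    rng = random.Random(seed)
    a, gamma, r, d = 0.1, 0.8, 1.0, 0.44     # illustrative rates (1/yr)
    M, C = 0.1, 0.5                          # initial cover fractions
    for _ in range(int(years / dt)):
        T = max(1.0 - M - C, 1e-9)           # free turf space
        dM = a * M * C - g * M / (M + T) + gamma * M * T
        dC = r * T * C - d * C - a * M * C
        M = min(max(M + dM * dt, 0.0), 1.0)
        C = min(max(C + dC * dt, 0.0), 1.0)
        # a hurricane arrives in this step with probability rate * dt
        if rng.random() < hurricane_rate * dt:
            C *= (1.0 - hurricane_loss)
    return M, C

# well-grazed reef, no hurricanes: coral-dominated equilibrium
M_end, C_end = simulate_reef(g=0.3)
```

    With ample grazing (g = 0.3) and no hurricanes, these illustrative parameters settle near 56% coral cover with macroalgae excluded; lowering g or raising hurricane_rate pushes the system toward the algae-dominated state, which is the trade-off the management analysis explores.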

  19. The lightcraft project

    NASA Technical Reports Server (NTRS)

    Messitt, Don G.; Myrabo, Leik N.

    1991-01-01

    Rensselaer Polytechnic Institute has been developing a transatmospheric 'Lightcraft' technology which uses beamed laser energy to propel advanced shuttle craft to orbit. In the past several years, Rensselaer students have analyzed the unique combined-cycle Lightcraft engine, designed a small unmanned Lightcraft Technology Demonstrator, and conceptualized larger manned Lightcraft - to name just a few of the interrelated design projects. The 1990-91 class carried out preliminary and detailed design efforts for a one-person 'Mercury' Lightcraft, using computer-aided design and finite-element structural modeling techniques. In addition, they began construction of a 2.6 m-diameter, full-scale engineering prototype mockup. The mockup will be equipped with three robotic legs that 'kneel' for passenger entry and exit. More importantly, the articulated tripod gear is crucial for accurately pointing at, and tracking the laser relay mirrors, a maneuver that must be performed just prior to liftoff. Also accomplished were further design improvements on a 6-inch-diameter Lightcraft model (for testing in RPI's hypersonic tunnel), and new laser propulsion experiments. The resultant experimental data will be used to calibrate Computational Fluid Dynamic (CFD) codes and analytical laser propulsion models that can simulate vehicle/engine flight conditions along a transatmospheric boost trajectory. These efforts will enable the prediction of distributed aerodynamic and thruster loads over the entire full-scale spacecraft.

  20. A Web-based Tool for Transparent, Collaborative Urban Water System Planning for Monterrey, Mexico

    NASA Astrophysics Data System (ADS)

    Rheinheimer, D. E.; Medellin-Azuara, J.; Garza Díaz, L. E.; Ramírez, A. I.

    2017-12-01

    Recent rapid advances in web technologies and cloud computing show great promise for facilitating collaboration and transparency in water planning efforts. Water resources planning increasingly takes place in the context of a rapidly urbanizing world, particularly in developing countries. In such countries with democratic traditions, the degree of transparency and collaboration in water planning can mean the difference between success and failure of water planning efforts. This is exemplified in the city of Monterrey, Mexico, where an effort to build a new long-distance aqueduct to increase water supply to the city failed dramatically due to a lack of transparency and top-down planning. To help address this, we used a new, web-based water system modeling platform, called OpenAgua, to develop a prototype decision support system for water planning in Monterrey. OpenAgua is designed to promote transparency and collaboration, as well as provide strong, cloud-based water system modeling capabilities. We developed and assessed five water management options intended to increase water supply yield and/or reliability, a dominant water management concern in Latin America generally: 1) a new long-distance source (the previously rejected project), 2) a new nearby reservoir, 3) expansion/re-operation of an existing major canal, 4) desalination, and 5) industrial water reuse. Using the integrated modeling and analytic capabilities of OpenAgua, and some customization, we assessed the performance of these options for water supply yield and reliability to help identify the most promising ones. In presenting this assessment, we demonstrate the viability of using online, cloud-based modeling systems for improving transparency and collaboration in decision making and for reducing the gap between citizens, policy makers, and water managers, and we outline future directions.

  1. A framework for inference about carnivore density from unstructured spatial sampling of scat using detector dogs

    USGS Publications Warehouse

    Thompson, Craig M.; Royle, J. Andrew; Garner, James D.

    2012-01-01

    Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the reality of small sample sizes and movement on and off study sites. In response to these difficulties, there is growing interest in the use of non-invasive survey techniques, which provide the opportunity to collect larger samples with minimal increases in effort, as well as the application of analytical frameworks that are not reliant on large sample size arguments. One promising survey technique, the use of scat detecting dogs, offers a greatly enhanced probability of detection while at the same time generating new difficulties with respect to non-standard survey routes, variable search intensity, and the lack of a fixed survey point for characterizing non-detection. In order to account for these issues, we modified an existing spatially explicit, capture–recapture model for camera trap data to account for variable search intensity and the lack of fixed, georeferenced trap locations. We applied this modified model to a fisher (Martes pennanti) dataset from the Sierra National Forest, California, and compared the results (12.3 fishers/100 km2) to more traditional density estimates. We then evaluated model performance using simulations at 3 levels of population density. Simulation results indicated that estimates based on the posterior mode were relatively unbiased. We believe that this approach provides a flexible analytical framework for reconciling the inconsistencies between detector dog survey data and density estimation procedures.

  2. Development of a robust analytical framework for assessing landbird trends, dynamics and relationships with environmental covariates in the North Coast and Cascades Network

    USGS Publications Warehouse

    Ray, Chris; Saracco, James; Jenkins, Kurt J.; Huff, Mark; Happe, Patricia J.; Ransom, Jason I.

    2017-01-01

    During 2015-2016, we completed development of a new analytical framework for landbird population monitoring data from the National Park Service (NPS) North Coast and Cascades Inventory and Monitoring Network (NCCN). This new tool for analysis combines several recent advances in modeling population status and trends using point-count data and is designed to supersede the approach previously slated for analysis of trends in the NCCN and other networks, including the Sierra Nevada Network (SIEN). Advances supported by the new model-based approach include 1) the use of combined data on distance and time of detection to estimate detection probability without assuming perfect detection at zero distance, 2) seamless accommodation of variation in sampling effort and missing data, and 3) straightforward estimation of the effects of downscaled climate and other local habitat characteristics on spatial and temporal trends in landbird populations. No changes in the current field protocol are necessary to facilitate the new analyses. We applied several versions of the new model to data from each of 39 species recorded in the three mountain parks of the NCCN, estimating trends and climate relationships for each species during 2005-2014. Our methods and results are also reported in a manuscript in revision for the journal Ecosphere (hereafter, Ray et al.). Here, we summarize the methods and results outlined in depth by Ray et al., discuss benefits of the new analytical framework, and provide recommendations for its application to synthetic analyses of long-term data from the NCCN and SIEN. All code necessary for implementing the new analyses is provided within the Appendices to this report, in the form of fully annotated scripts written in the open-access programming languages R and JAGS.

  3. The New NASA Orbital Debris Engineering Model ORDEM2000

    NASA Technical Reports Server (NTRS)

    Liou, Jer-Chyi; Matney, Mark J.; Anz-Meador, Phillip D.; Kessler, Donald; Jansen, Mark; Theall, Jeffery R.

    2002-01-01

    The NASA Orbital Debris Program Office at Johnson Space Center has developed a new computer-based orbital debris engineering model, ORDEM2000, which describes the orbital debris environment in the low Earth orbit region between 200 and 2000 km altitude. The model is appropriate for those engineering solutions requiring knowledge and estimates of the orbital debris environment (debris spatial density, flux, etc.). ORDEM2000 can also be used as a benchmark for ground-based debris measurements and observations. We incorporated a large set of observational data, covering the object size range from 10 µm to 10 m, into the ORDEM2000 debris database, utilizing a maximum likelihood estimator to convert observations into debris population probability distribution functions. These functions then form the basis of the debris populations. We developed a finite element model to process the debris populations to form the debris environment. A more capable input and output structure and a user-friendly graphical user interface are also implemented in the model. ORDEM2000 has been subjected to a significant verification and validation effort. This document describes ORDEM2000, which supersedes the previous model, ORDEM96. The availability of new sensor and in situ data, as well as new analytical techniques, has enabled the construction of this new model. Section 1 describes the general requirements and scope of an engineering model. Data analyses and the theoretical formulation of the model are described in Sections 2 and 3. Section 4 describes the verification and validation effort and the sensitivity and uncertainty analyses. Finally, Section 5 describes the graphical user interface, software installation, and test cases for the user.
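    The maximum-likelihood step mentioned above can be sketched in its simplest form: fitting a single power-law size distribution, for which the MLE of the exponent is closed-form. This is only an illustration; ORDEM2000's actual estimator, which converts heterogeneous sensor observations into population probability distribution functions, is far more elaborate:

```python
import math
import random

def pareto_mle(sizes, x_min):
    """Closed-form maximum likelihood estimate of alpha for a
    power-law (Pareto) density p(x) = alpha * x_min**alpha / x**(alpha+1),
    x >= x_min -- a common first model for debris size distributions."""
    return len(sizes) / sum(math.log(x / x_min) for x in sizes)

# draw synthetic 'observations' by inverse-CDF sampling, then recover alpha
rng = random.Random(42)
true_alpha, x_min = 2.0, 0.01           # e.g. sizes in metres, 1 cm cutoff
sizes = [x_min * rng.random() ** (-1.0 / true_alpha) for _ in range(20000)]
alpha_hat = pareto_mle(sizes, x_min)
```

    With 20,000 synthetic observations the estimate lands close to the true exponent of 2; a real debris analysis would additionally weight each observation by its sensor's size-dependent detection probability.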

  4. Healthcare predictive analytics: An overview with a focus on Saudi Arabia.

    PubMed

    Alharthi, Hana

    2018-03-08

    Despite a newfound wealth of data and information, the healthcare sector is lacking in actionable knowledge. This is largely because healthcare data, though plentiful, tends to be inherently complex and fragmented. Health data analytics, with an emphasis on predictive analytics, is emerging as a transformative tool that can enable more proactive and preventative treatment options. This review considers the ways in which predictive analytics has been applied in the for-profit business sector to generate well-timed and accurate predictions of key outcomes, with a focus on features that may be applicable to healthcare-specific applications. Published medical research presenting assessments of predictive analytics technology in medical applications is reviewed, with particular emphasis on how hospitals have integrated predictive analytics into their day-to-day healthcare services to improve quality of care. This review also highlights the numerous challenges of implementing predictive analytics in healthcare settings and concludes with a discussion of current efforts to implement healthcare data analytics in Saudi Arabia, a developing country. Copyright © 2018 The Author. Published by Elsevier Ltd. All rights reserved.

  5. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities, have recently been successfully completed. A review of the program is presented, including descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior. The scaled performance of the validated system is projected to demonstrate applicability to flight demonstration and important NASA road-map missions.

  6. Determination of initial fuel state and number of reactor shutdowns in archived low-burnup uranium targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byerly, Benjamin; Tandon, Lav; Hayes-Sterbenz, Anna

    This article presents a method for destructive analysis of irradiated uranium (U) targets, with a focus on collection and measurement of long-lived (t1/2 > ~10 years) and stable fission product isotopes of ruthenium and cesium. Long-lived and stable isotopes of these elements can provide information on reactor conditions (e.g., flux, irradiation time, cooling time) in old samples (> 5–10 years) whose short-lived fission products have decayed away. The separation and analytical procedures were tested on archived U reactor targets at Los Alamos National Laboratory as part of an effort to evaluate reactor models at low burnup.

  7. Determination of initial fuel state and number of reactor shutdowns in archived low-burnup uranium targets

    DOE PAGES

    Byerly, Benjamin; Tandon, Lav; Hayes-Sterbenz, Anna; ...

    2015-10-26

    This article presents a method for destructive analysis of irradiated uranium (U) targets, with a focus on collection and measurement of long-lived (t1/2 > ~10 years) and stable fission product isotopes of ruthenium and cesium. Long-lived and stable isotopes of these elements can provide information on reactor conditions (e.g., flux, irradiation time, cooling time) in old samples (> 5–10 years) whose short-lived fission products have decayed away. The separation and analytical procedures were tested on archived U reactor targets at Los Alamos National Laboratory as part of an effort to evaluate reactor models at low burnup.

  8. Dependence of defect introduction on temperature and resistivity and some long-term annealing effects

    NASA Technical Reports Server (NTRS)

    Brucker, G. J.

    1971-01-01

    The effort reported here presents data on lithium properties in bulk-silicon samples before and after irradiation, providing the analytical information required to characterize the interactions of lithium with radiation-induced defects in silicon. A model of the damage and recovery mechanisms in irradiated lithium-containing solar cells is developed, based on measurements of the Hall coefficient and resistivity of samples irradiated by 1-MeV electrons. Experiments on bulk samples included Hall coefficient and resistivity measurements taken as a function of: (1) bombardment temperature, (2) resistivity, (3) fluence, (4) oxygen concentration, and (5) annealing time at temperatures from 300 to 373 K.

  9. Some big ideas for some big problems.

    PubMed

    Winter, D D

    2000-05-01

    Although most psychologists do not see sustainability as a psychological problem, our environmental predicament is caused largely by human behaviors, accompanied by relevant thoughts, feelings, attitudes, and values. The huge task of building sustainable cultures will require a great many psychologists from a variety of backgrounds. In an effort to stimulate the imaginations of a wide spectrum of psychologists to take on the crucial problem of sustainability, this article discusses 4 psychological approaches (neo-analytic, behavioral, social, and cognitive) and outlines some of their insights into environmentally relevant behavior. These models are useful for illuminating ways to increase environmentally responsible behaviors of clients, communities, and professional associations.

  10. The EVOTION Decision Support System: Utilizing It for Public Health Policy-Making in Hearing Loss.

    PubMed

    Katrakazas, Panagiotis; Trenkova, Lyubov; Milas, Josip; Brdaric, Dario; Koutsouris, Dimitris

    2017-01-01

    As Decision Support Systems start to play a significant role in decision making, especially in the field of public-health policy making, we present an initial attempt to formulate such a system in the context of public-health policy making for hearing-loss-related problems. Justification for the system's conceptual architecture and its key functionalities is presented. The introduction of the EVOTION DSS constitutes a key innovation and a basis for a paradigm shift in policymaking, by incorporating relevant models, big data analytics, and generic demographic data. Expected outcomes of this joint effort are discussed from a public-health point of view.

  11. Sociocultural Behavior Influence Modelling & Assessment: Current Work and Research Frontiers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernard, Michael Lewis

    A common problem associated with the effort to better assess potential behaviors of various individuals within different countries is the sheer difficulty of comprehending the dynamic nature of populations, particularly over time and considering feedback effects. This paper discusses a theory-based analytical capability designed to enable analysts to better assess the influence of events on individuals interacting within a country or region. These events can include changes in policy, man-made or natural disasters, migration, war, or other changes in environmental/economic conditions. In addition, this paper describes potential extensions of this type of research to enable more timely and accurate assessments.

  12. Large space structures controls research and development at Marshall Space Flight Center: Status and future plans

    NASA Technical Reports Server (NTRS)

    Buchanan, H. J.

    1983-01-01

    Work performed in the Large Space Structures Controls research and development program at Marshall Space Flight Center is described. Studies to develop a multilevel control approach supporting a modular, building-block buildup of space platforms are discussed. A concept has been developed and tested in a three-axis computer simulation utilizing a five-body model of a basic space platform module. Analytical efforts have continued to focus on extension of the basic theory and subsequent application. Consideration is also given to specifications for evaluating several algorithms for controlling the shape of Large Space Structures.

  13. The Ophidia framework: toward cloud-based data analytics for climate change

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; D'Anca, Alessandro; Elia, Donatello; Mancini, Marco; Mariello, Andrea; Mirto, Maria; Palazzo, Cosimo; Aloisio, Giovanni

    2015-04-01

    The Ophidia project is a research effort on big data analytics addressing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model, and a hierarchical data organization to manage large amounts of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate large datasets (data cubes) and array-based primitives to perform data analysis on large arrays of scientific data. The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), sea situational awareness (TESSA), and large-scale data analytics on CMIP5 data in NetCDF format, compliant with the Climate and Forecast (CF) convention (ExArch). Two use cases, regarding the EU FP7 EUBrazil Cloud Connect and the INTERREG OFIDIA projects, will be presented during the talk. In the former (EUBrazilCC), the Ophidia framework is being extended to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment, in order to study how climate change affects biodiversity. In the latter (OFIDIA), the data analytics framework is being exploited to provide operational support for processing chains devoted to fire danger prevention. To tackle the project challenges, data analytics workflows consisting of about 130 operators perform, among other tasks, parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. Finally, the entire Ophidia software stack has been deployed at CMCC on 24 nodes (16 cores/node) of the Athena HPC cluster. Moreover, a cloud-based release, tested with OpenNebula, is also available and running in the private cloud infrastructure of the CMCC Supercomputing Centre.

  14. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents a recently completed effort to validate a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz), where Statistical Energy Analysis (SEA) methods are not suitable. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used to develop the structural excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem and account for variations in launch trajectory and inclination.

  15. Structural design of the Sandia 34-M Vertical Axis Wind Turbine

    NASA Astrophysics Data System (ADS)

    Berg, D. E.

    Sandia National Laboratories, as the lead DOE laboratory for Vertical Axis Wind Turbine (VAWT) development, is currently designing a 34-meter diameter Darrieus-type VAWT. This turbine will be a research test bed which provides a focus for advancing technology and validating design and fabrication techniques in a size range suitable for utility use. Structural data from this machine will allow structural modeling to be refined and verified for a turbine on which the gravity effects and stochastic wind loading are significant. Performance data from it will allow aerodynamic modeling to be refined and verified. The design effort incorporates Sandia's state-of-the-art analysis tools in the design of a complete machine. The analytic tools used in this design are discussed and the conceptual design procedure is described.

  16. Optimal cooperative control synthesis of active displays

    NASA Technical Reports Server (NTRS)

    Garg, S.; Schmidt, D. K.

    1985-01-01

    A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance-enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s^2 plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed-loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.
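    The optimal-control machinery underlying such syntheses can be sketched with a discrete-time LQR for the simple K/s^2 (double-integrator) plant mentioned above. The weights, sampling period, and pure-Python matrix helpers are arbitrary assumptions; the paper's optimal control model of the human operator is considerably richer:

```python
def mm(X, Y):
    # small dense matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def tr(X):
    return [list(r) for r in zip(*X)]

def lqr_gain(A, B, Q, R, iters=500):
    """Single-input discrete LQR gain via backward Riccati iteration:
    P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA, with scalar control weight R."""
    P = [row[:] for row in Q]
    for _ in range(iters):
        BtP = mm(tr(B), P)                     # 1 x n
        s = R + mm(BtP, B)[0][0]               # scalar R + B'PB
        K = [[v / s for v in mm(BtP, A)[0]]]   # 1 x n feedback gain
        AtP = mm(tr(A), P)
        APA = mm(AtP, A)
        corr = mm(mm(AtP, B), K)
        P = [[Q[i][j] + APA[i][j] - corr[i][j]
              for j in range(len(Q))] for i in range(len(Q))]
    return K[0]

# double integrator x'' = u (a K/s^2 plant), discretized with step dt
dt = 0.05
A = [[1.0, dt], [0.0, 1.0]]
B = [[0.5 * dt * dt], [dt]]
Q = [[1.0, 0.0], [0.0, 0.1]]    # state weights (position, rate)
R = 0.01                        # control weight
k1, k2 = lqr_gain(A, B, Q, R)

# closed-loop simulation from an initial position error
x = [1.0, 0.0]
for _ in range(400):            # 20 s
    u = -(k1 * x[0] + k2 * x[1])
    x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0][0] * u,
         A[1][0] * x[0] + A[1][1] * x[1] + B[1][0] * u]
```

    The backward Riccati iteration converges to the steady-state gain, and the closed-loop simulation drives the initial position error essentially to zero within 20 s; the cooperative synthesis in the paper extends this kind of quadratic-cost design to the pilot and display jointly.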

  17. An Overview of the Launch Vehicle Blast Environments Development Efforts

    NASA Technical Reports Server (NTRS)

    Richardson, Erin; Bangham, Mike; Blackwood, James; Skinner, Troy; Hays, Michael; Jackson, Austin; Richman, Ben

    2014-01-01

    NASA has been funding an ongoing development program to characterize the explosive environments produced during a catastrophic launch vehicle accident. These studies and small-scale tests are focused on the near-field environments that threaten the crew. The results indicate that these environments are unlikely to result in immediate destruction of the crew modules. The effort began as an independent assessment by NASA safety organizations, continued under the Ares program and the NASA Engineering and Safety Center, and now proceeds as a Space Launch System (SLS) focused effort. The development effort is using test and accident data available from public or NASA sources, as well as focused scaled tests that examine the fundamental aspects of uncontained explosions of hydrogen-air and hydrogen-oxygen mixtures. The primary risk to the crew appears to be the high-energy fragments, and these are being characterized for the SLS. The development efforts will also characterize the thermal environment of the explosions to ensure that the risk is well understood and to document the overall energy balance of an explosion. The effort is multi-path in that analytical, computational, and focused testing approaches are being used to develop the knowledge needed to understand potential SLS explosions. This is an ongoing program with plans that expand the development from fundamental testing at small-scale levels to large-scale tests that can be used to validate models for commercial programs. The ultimate goal is to develop a knowledge base that can be used by vehicle designers to maximize crew survival in an explosion.

  18. Modeling interfacial fracture in Sierra.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Arthur A.; Ohashi, Yuki; Lu, Wei-Yang

    2013-09-01

    This report summarizes computational efforts to model interfacial fracture using cohesive zone models in the SIERRA/SolidMechanics (SIERRA/SM) finite element code. Cohesive surface elements were used to model crack initiation and propagation along predefined paths. Mesh convergence was observed with SIERRA/SM for numerous geometries. As the funding for this project came from the Advanced Simulation and Computing Verification and Validation (ASC V&V) focus area, considerable effort was spent performing verification and validation. Code verification was performed to compare code predictions to analytical solutions for simple three-element simulations as well as a higher-fidelity simulation of a double-cantilever beam. Parameter identification was conducted with Dakota using experimental results on asymmetric double-cantilever beam (ADCB) and end-notched-flexure (ENF) experiments conducted under Campaign-6 funding. Discretization convergence studies were also performed with respect to mesh size and time step, and an optimization study was completed for mode II delamination using the ENF geometry. Throughout this verification process, numerous SIERRA/SM bugs were found and reported, all of which have been fixed, leading to over a 10-fold increase in convergence rates. Finally, mixed-mode flexure experiments were performed for validation. One of the unexplained issues encountered was material property variability for ostensibly the same composite material. Since the variability is not fully understood, it is difficult to accurately assess uncertainty when performing predictions.

  19. On the pursuit of a nuclear development capability: The case of the Cuban nuclear program

    NASA Astrophysics Data System (ADS)

    Benjamin-Alvarado, Jonathan Calvert

    1998-09-01

    While there have been many excellent descriptive accounts of modernization schemes in developing states, energy development studies based on prevalent modernization theory have been rare. Moreover, heretofore there have been very few analyses of efforts to develop a nuclear energy capability by developing states. Rarely have these analyses employed social science research methodologies. The purpose of this study was to develop a general analytical framework, based on such a methodology, to analyze nuclear energy development and to utilize this framework for the study of the specific case of Cuba's decision to develop nuclear energy. The analytical framework developed focuses on a qualitative tracing of the process of Cuban policy objectives and implementation to develop a nuclear energy capability, and analyzes the policy in response to three models of modernization offered to explain the trajectory of policy development. These different approaches are the politically motivated modernization model, the economic and technological modernization model, and the economic and energy security model. Each model provides distinct and functionally differentiated expectations for the path of development toward this objective. Each model provides expected behaviors to external stimuli that would result in specific policy responses. In the study, Cuba's nuclear policy responses to stimuli from domestic constraints and intensities, institutional development, and external influences are analyzed. The analysis revealed that in pursuing the nuclear energy capability, Cuba primarily responded by filtering most of the stimuli through the twin objectives of economic rationality and technological advancement. Based upon the Cuban policy responses to the domestic and international stimuli, the study concluded that the economic and technological modernization model of nuclear energy development offered a more complete explanation of the trajectory of policy development than either the politically motivated or economic and energy security models. The findings of this case pose some interesting questions for the general study of energy programs in developing states. By applying the analytical framework employed in this study to a number of other cases, perhaps the understanding of energy development schemes may be expanded through future research.

  20. Relationship between psychosocial stress dimensions and salivary cortisol in military police officers 1

    PubMed Central

    Tavares, Juliana Petri; Lautert, Liana; Magnago, Tânia Solange Bosi de Souza; Consiglio, Angélica Rosat; Pai, Daiane Dal

    2017-01-01

    Abstract Objective: to analyze the relationship between psychosocial stress dimensions and salivary cortisol in military police officers. Method: cross-sectional and analytical study with 134 military police officers. The Effort-Reward Imbalance (ERI) Model scale was used to assess psychosocial stress. Salivary cortisol was collected in three samples. The following tests were used: Student's t-test, Mann-Whitney, ANOVA, Bonferroni, Kruskal-Wallis and Dunn. Pearson and Spearman correlation methods were used, as well as multiple linear regression. Results: cortisol at night showed an ascending statistical association with the psychosocial reward (p=0.004) and a descending association with the effort-impairment scores (p=0.017). Belonging to the Special Tactical Operations Group (GATE) and diastolic blood pressure explained 13.5% of the variation in cortisol levels on waking up. The sectors GATE, Special Patrol of the Elite Squad of the Military Police, and Motorcyclists explained 21.9% of the variation in cortisol levels 30 minutes after awakening. The variables GATE sector and Effort Dimension explained 27.7% of the variation in cortisol levels at night. Conclusion: salivary cortisol variation was influenced by individual, labor and psychosocial variables. PMID:28443994

  1. Relationship between psychosocial stress dimensions and salivary cortisol in military police officers.

    PubMed

    Tavares, Juliana Petri; Lautert, Liana; Magnago, Tânia Solange Bosi de Souza; Consiglio, Angélica Rosat; Pai, Daiane Dal

    2017-04-20

    To analyze the relationship between psychosocial stress dimensions and salivary cortisol in military police officers, a cross-sectional and analytical study was conducted with 134 military police officers. The Effort-Reward Imbalance (ERI) Model scale was used to assess psychosocial stress. Salivary cortisol was collected in three samples. The following tests were used: Student's t-test, Mann-Whitney, ANOVA, Bonferroni, Kruskal-Wallis and Dunn. Pearson and Spearman correlation methods were used, as well as multiple linear regression. Cortisol at night showed an ascending statistical association with the psychosocial reward (p=0.004) and a descending association with the effort-impairment scores (p=0.017). Belonging to the Special Tactical Operations Group (GATE) and diastolic blood pressure explained 13.5% of the variation in cortisol levels on waking up. The sectors GATE, Special Patrol of the Elite Squad of the Military Police, and Motorcyclists explained 21.9% of the variation in cortisol levels 30 minutes after awakening. The variables GATE sector and Effort Dimension explained 27.7% of the variation in cortisol levels at night. Salivary cortisol variation was thus influenced by individual, labor and psychosocial variables.

  2. Hydrogen-fueled scramjets: Potential for detailed combustor analysis

    NASA Technical Reports Server (NTRS)

    Beach, H. L., Jr.

    1976-01-01

    Combustion research related to hypersonic scramjet (supersonic combustion ramjet) propulsion is discussed from the analytical point of view. Because the fuel is gaseous hydrogen, mixing is single phase and the chemical kinetics are well known; therefore, the potential for analysis is good relative to hydrocarbon-fueled engines. Recent progress in applying two- and three-dimensional analytical techniques to mixing and reacting flows indicates cause for optimism, and identifies several areas for continuing effort.

  3. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
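The sequential probability ratio test mentioned for model validation follows Wald's classic scheme: accumulate a log-likelihood ratio over incoming observations and stop as soon as it crosses an acceptance threshold for either hypothesis. The sketch below applies it to a Gaussian mean-shift alternative; the specific hypotheses, error rates, and data are illustrative assumptions, not details from the patent record.

```python
import math

def sprt_gaussian_mean(samples, mu0=0.0, mu1=1.0, sigma=1.0,
                       alpha=0.01, beta=0.01):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma.

    alpha/beta are the target false-alarm and miss probabilities.
    Returns ("H0" | "H1" | "undecided", number of samples consumed).
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for one Gaussian observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)
```

In an on-line monitoring setting such as the one described, the samples fed to the test would typically be residuals between the measured process output and the SQNA model prediction.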

  4. Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges

    NASA Technical Reports Server (NTRS)

    Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam

    2014-01-01

    As part of the NASA Vehicle Systems Safety Technologies (VSST), Assuring Safe and Effective Aircraft Control Under Hazardous Conditions (Technical Challenge #3), an effort is underway within Boeing Research and Technology (BR&T) to address Advanced Modeling and Uncertainty Quantification for Flight Dynamics (VSST1-7). The scope of the effort is to develop and evaluate advanced multidisciplinary flight dynamics modeling techniques, including integrated uncertainties, to facilitate higher fidelity response characterization of current and future aircraft configurations approaching and during loss-of-control conditions. The approach incorporates multiple flight dynamics modeling methods for aerodynamics, structures, and propulsion, spanning experimental, computational, and analytical techniques, along with techniques for data integration and uncertainty characterization and quantification. This research shall introduce new and updated multidisciplinary modeling and simulation technologies designed to improve the ability to characterize airplane response in off-nominal flight conditions. The research shall also introduce new techniques for uncertainty modeling that will provide a unified database model comprised of multiple sources, as well as an uncertainty bounds database for each data source such that a full vehicle uncertainty analysis is possible even when approaching or beyond loss-of-control boundaries. Methodologies developed as part of this research shall be instrumental in predicting and mitigating loss-of-control precursors and events directly linked to causal and contributing factors, such as stall, failures, damage, or icing. 
The tasks will include utilizing the BR&T Water Tunnel to collect static and dynamic data to be compared to the GTM extended WT database, characterizing flight dynamics in off-nominal conditions, developing tools for structural load estimation under dynamic conditions, devising methods for integrating various modeling elements into a real-time simulation capability, generating techniques for uncertainty modeling that draw data from multiple modeling sources, and providing a unified database model that includes nominal plus increments for each flight condition. This paper presents status of testing in the BR&T water tunnel and analysis of the resulting data and efforts to characterize these data using alternative modeling methods. Program challenges and issues are also presented.

  5. Duct flow nonuniformities study for space shuttle main engine

    NASA Technical Reports Server (NTRS)

    Thoenes, J.

    1985-01-01

    To improve the Space Shuttle Main Engine (SSME) design and for future use in the development of next-generation rocket engines, a combined experimental/analytical study was undertaken with the goals of, first, establishing an experimental data base for the flow conditions in the SSME high pressure fuel turbopump (HPFTP) hot gas manifold (HGM) and, second, setting up a computer model of the SSME HGM flow field. Using the test data to verify the computer model, it should be possible in the future to computationally scan contemplated advanced design configurations and limit costly testing to the most promising designs. The effort of establishing and using the computer model is detailed. The comparison of computational results and experimental data clearly demonstrates that computational fluid dynamics (CFD) techniques can be used successfully to predict the gross features of three-dimensional fluid flow through configurations as intricate as the SSME turbopump hot gas manifold.

  6. Stability and Bifurcation of a Fishery Model with Crowley-Martin Functional Response

    NASA Astrophysics Data System (ADS)

    Maiti, Atasi Patra; Dubey, B.

    To understand the dynamics of a fishery system, a nonlinear mathematical model is proposed and analyzed. In an aquatic environment, two populations are considered: one prey and one predator. Both fish populations grow logistically, and the interaction between them follows a Crowley-Martin type functional response. Both populations are assumed to be harvested, with the harvesting effort treated as a dynamical variable and tax as a control variable. The existence of equilibrium points and their local stability are examined. The existence, stability and direction of Hopf-bifurcation are also analyzed with the help of the Center Manifold theorem and normal form theory. The global stability behavior of the positive equilibrium point is also discussed. In order to find the value of the optimal tax, the optimal harvesting policy is used. To verify our analytical findings, an extensive numerical simulation is carried out for this model system.
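For intuition, the Crowley-Martin functional response f(x, y) = a·x / ((1 + b·x)(1 + c·y)) lets predator interference and prey handling both saturate the consumption rate. The sketch below integrates a generic logistic prey/logistic predator pair with this coupling; the equations are a simplified stand-in (harvesting effort held constant rather than dynamic, no tax variable), and all parameter values are illustrative assumptions, not those of the paper.

```python
def crowley_martin(x, y, a=1.0, b=0.2, c=0.3):
    """Crowley-Martin functional response: consumption saturates in both
    prey density x (handling) and predator density y (interference)."""
    return a * x / ((1.0 + b * x) * (1.0 + c * y))

def simulate(x0=1.0, y0=0.5, r=1.0, K=10.0, s=0.5, L=5.0,
             e=0.6, E=0.1, q1=1.0, q2=1.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of a logistic prey (x) / logistic
    predator (y) system with Crowley-Martin coupling and constant
    harvesting effort E (catchabilities q1, q2)."""
    x, y = x0, y0
    for _ in range(steps):
        f = crowley_martin(x, y)
        dx = r * x * (1.0 - x / K) - f * y - q1 * E * x
        dy = s * y * (1.0 - y / L) + e * f * y - q2 * E * y
        x += dt * dx
        y += dt * dy
    return x, y
```

A production analysis would replace the fixed effort E with its own differential equation and use a stiff ODE integrator, but this minimal loop is enough to explore how the interference parameter c damps predator growth.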

  7. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing lifetimes in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution (with shape parameter equal to one). In this paper our effort is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian analysis approach, and to present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The model description covers the likelihood function, followed by the posterior function and the point, interval, hazard-function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
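For independent exponential competing risks with rates λ_i, the standard results are simple in closed form: the system survives to time t with probability exp(−t·Σλ_i), the crude probability that cause j is the one that fails the system is λ_j/Σλ_i, and the net failure probability with only risk j present is 1 − exp(−λ_j·t). A minimal sketch (the rate values in the test are made up for illustration):

```python
import math

def system_reliability(rates, t):
    """Series system of independent exponential risks:
    R(t) = exp(-t * sum(rates)), since the minimum of independent
    exponentials is exponential with the summed rate."""
    return math.exp(-t * sum(rates))

def crude_failure_prob(rates, j):
    """Probability that cause j is the one that fails the system,
    in the presence of all other causes: lambda_j / sum(lambda)."""
    return rates[j] / sum(rates)

def net_failure_prob(rates, j, t):
    """Net probability of failure by time t if only risk j were
    present: the exponential CDF 1 - exp(-lambda_j * t)."""
    return 1.0 - math.exp(-rates[j] * t)
```

The Bayesian treatment in the paper replaces the fixed rates with posterior distributions, but these identities are what the point estimates plug into.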

  8. Basin-scale hydrogeologic modeling

    NASA Astrophysics Data System (ADS)

    Person, Mark; Raffensperger, Jeff P.; Ge, Shemin; Garven, Grant

    1996-02-01

    Mathematical modeling of coupled groundwater flow, heat transfer, and chemical mass transport at the sedimentary basin scale has been increasingly used by Earth scientists studying a wide range of geologic processes including the formation of excess pore pressures, infiltration-driven metamorphism, heat flow anomalies, nuclear waste isolation, hydrothermal ore genesis, sediment diagenesis, basin tectonics, and petroleum generation and migration. These models have provided important insights into the rates and pathways of groundwater migration through basins, the relative importance of different driving mechanisms for fluid flow, and the nature of coupling between the hydraulic, thermal, chemical, and stress regimes. The mathematical descriptions of basin transport processes, the analytical and numerical solution methods employed, and the application of modeling to sedimentary basins around the world are the subject of this review paper. The special considerations made to represent coupled transport processes at the basin scale are emphasized. Future modeling efforts will probably utilize three-dimensional descriptions of transport processes, incorporate greater information regarding natural geological heterogeneity, further explore coupled processes, and involve greater field applications.

  9. Conceptualizing a Genomics Software Institute (GSI)

    PubMed Central

    Gilbert, Jack A.; Catlett, Charlie; Desai, Narayan; Knight, Rob; White, Owen; Robbins, Robert; Sankaran, Rajesh; Sansone, Susanna-Assunta; Field, Dawn; Meyer, Folker

    2012-01-01

    Microbial ecology has been enhanced greatly by the ongoing ‘omics revolution, bringing half the world's biomass and most of its biodiversity into analytical view for the first time; indeed, it feels almost like the invention of the microscope and the discovery of the new world at the same time. With major microbial ecology research efforts accumulating prodigious quantities of sequence, protein, and metabolite data, we are now poised to address environmental microbial research at macro scales, and to begin to characterize and understand the dimensions of microbial biodiversity on the planet. What is currently impeding progress is the need for a framework within which the research community can develop, exchange and discuss predictive ecosystem models that describe the biodiversity and functional interactions. Such a framework must encompass data and metadata transparency and interoperation; data and results validation, curation, and search; application programming interfaces for modeling and analysis tools; and human and technical processes and services necessary to ensure broad adoption. Here we discuss the need for focused community interaction to augment and deepen established community efforts, beginning with the Genomic Standards Consortium (GSC), to create a science-driven strategic plan for a Genomic Software Institute (GSI). PMID:22675605

  10. Optimal production lot size and reorder point of a two-stage supply chain while random demand is sensitive with sales teams' initiatives

    NASA Astrophysics Data System (ADS)

    Sankar Sana, Shib

    2016-01-01

    The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study production lot size/order quantity, reorder point, and sales teams' initiatives, where demand of the end customers depends simultaneously on a random variable and on sales teams' initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit quantity follows a realistic convex function of production lot size. In the chain, the cost of sales teams' initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated so that their optimum profits approach their target profits. This study helps the management of firms determine the optimal order quantity/production quantity, reorder point and sales teams' initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentations and a sensitivity analysis of the key parameters are presented to illustrate further insights of the model.

  11. Analytical simulation of nonlinear response to seismic test excitations of HDR-VKL (Heissdampfreaktor-Versuchskreislauf) piping system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srinivasan, M.G.; Kot, C.A.; Mojtahed, M.

    The paper describes the analytical modeling, calculations, and results of the posttest nonlinear simulation of high-level seismic testing of the VKL piping system at the HDR Test Facility in Germany. One of the objectives of the tests was to evaluate analytical methods for calculating the nonlinear response of realistic piping systems subjected to high-level seismic excitation that would induce significant plastic deformation. Two of the six different pipe-support configurations (ranging from a stiff system with struts and snubbers to a very flexible system with practically no seismic supports), subjected to simulated earthquakes, were tested at very high levels. The posttest nonlinear calculations cover the KWU configuration, a reasonably compliant system with only rigid struts. Responses for 800% safe-shutdown-earthquake loading were calculated using the NONPIPE code. The responses calculated with NONPIPE were found generally to have the same time trends as the measurements but contained under-, over-, and correct estimates of peak values, almost in equal proportions. The only exceptions were the peak strut forces, which were underestimated as a group. The scatter in the peak value estimates of displacements and strut forces was smaller than that for the strains. The possible reasons for the differences and the effort on further analysis are discussed.

  12. Analytical solutions to optimal underactuated spacecraft formation reconfiguration

    NASA Astrophysics Data System (ADS)

    Huang, Xu; Yan, Ye; Zhou, Yang

    2015-11-01

    Underactuated systems can generally be defined as systems with fewer control inputs than degrees of freedom to be controlled. In this paper, analytical solutions to optimal underactuated spacecraft formation reconfiguration without either the radial or the in-track control are derived. By using a linear dynamical model of underactuated spacecraft formation in circular orbits, controllability analysis is conducted for either underactuated case. Indirect optimization methods based on the minimum principle are then introduced to generate analytical solutions to optimal open-loop underactuated reconfiguration problems. Both fixed and free final condition constraints are considered for either underactuated case, and comparisons between these two final conditions indicate that the optimal control strategies with free final conditions require less control effort than those with fixed ones. Meanwhile, closed-loop adaptive sliding mode controllers for both underactuated cases are designed to guarantee optimal trajectory tracking in the presence of unmatched external perturbations, linearization errors, and system uncertainties. The adaptation laws are designed via a Lyapunov-based method to ensure the overall stability of the closed-loop system. The explicit expressions of the terminal convergent regions of each system state have also been obtained. Numerical simulations demonstrate the validity and feasibility of the proposed open-loop and closed-loop control schemes for optimal underactuated spacecraft formation reconfiguration in circular orbits.

  13. Exploratory Bifactor Analysis: The Schmid-Leiman Orthogonalization and Jennrich-Bentler Analytic Rotations

    PubMed Central

    Mansolf, Maxwell; Reise, Steven P.

    2017-01-01

    Analytic bifactor rotations (Jennrich & Bentler, 2011, 2012) have been recently developed and made generally available, but are not well understood. The Jennrich-Bentler analytic bifactor rotations (bi-quartimin and bi-geomin) are an alternative to, and arguably an improvement upon, the less technically sophisticated Schmid-Leiman orthogonalization (Schmid & Leiman, 1957). We review the technical details that underlie the Schmid-Leiman and Jennrich-Bentler bifactor rotations, using simulated data structures to illustrate important features and limitations. For the Schmid-Leiman, we review the problem of inaccurate parameter estimates caused by the linear dependencies, sometimes called “proportionality constraints,” that are required to expand a p correlated factors solution into a (p+1) (bi)factor space. We also review the complexities involved when the data depart from perfect cluster structure (e.g., item cross-loading on group factors). For the Jennrich-Bentler rotations, we describe problems in parameter estimation caused by departures from perfect cluster structure. In addition, we illustrate the related problems of: (a) solutions that are not invariant under different starting values (i.e., local minima problems); and, (b) group factors collapsing onto the general factor. Recommendations are made for substantive researchers including examining all local minima and applying multiple exploratory techniques in an effort to identify an accurate model. PMID:27612521
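The Schmid-Leiman orthogonalization reviewed above has a compact closed form: given first-order loadings Λ (items × group factors) and second-order loadings γ of the group factors on the general factor, the orthogonalized general-factor loadings are Λγ and the residualized group-factor loadings are Λ·diag(√(1 − γ²)). A NumPy sketch follows; the loading matrix in the test is a made-up perfect-cluster example, and the final check verifies the proportionality-constrained identity that the expanded orthogonal solution reproduces the correlated-factors common covariance.

```python
import numpy as np

def schmid_leiman(first_order, second_order):
    """Schmid-Leiman orthogonalization.

    first_order  : (items x k) loadings on k correlated group factors
    second_order : (k,) loadings of the group factors on the general factor
    Returns (general, group): loadings in the orthogonal (k+1)-factor space.
    """
    g2 = np.asarray(second_order, dtype=float)
    general = first_order @ g2                      # items-vector: Lambda @ gamma
    group = first_order * np.sqrt(1.0 - g2**2)      # scale column j by sqrt(1 - gamma_j^2)
    return general, group
```

Because group loadings are the first-order loadings rescaled column-wise, items loading on the same group factor keep proportional general/group loadings, which is exactly the constraint the review identifies as a source of bias when the true structure violates it.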

  14. Development of an integrated BEM approach for hot fluid structure interaction: BEST-FSI: Boundary Element Solution Technique for Fluid Structure Interaction

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.; Shi, Y.

    1992-01-01

    As part of the continuing effort at NASA LeRC to improve both the durability and reliability of hot section Earth-to-orbit engine components, significant enhancements must be made in existing finite element and finite difference methods, and advanced techniques, such as the boundary element method (BEM), must be explored. The BEM was chosen as the basic analysis tool because the critical variables (temperature, flux, displacement, and traction) can be very precisely determined with a boundary-based discretization scheme. Additionally, model preparation is considerably simplified compared to the more familiar domain-based methods. Furthermore, the hyperbolic character of high speed flow is captured through the use of an analytical fundamental solution, eliminating the dependence of the solution on the discretization pattern. The price that must be paid in order to realize these advantages is that any BEM formulation requires a considerable amount of analytical work, which is typically absent in the other numerical methods. All of the research accomplishments of a multi-year program aimed toward the development of a boundary element formulation for the study of hot fluid-structure interaction in Earth-to-orbit engine hot section components are detailed. Most of the effort was directed toward the examination of fluid flow, since BEMs for fluids are at a much less developed state. However, significant strides were made, not only in the analysis of thermoviscous fluids, but also in the solution of the fluid-structure interaction problem.

  15. Environmental and High-Strain Rate effects on composites for engine applications

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Smith, G. T.

    1982-01-01

    The Lewis Research Center is conducting a series of programs intended to investigate and develop the application of composite materials to structural components for turbojet engines. A significant part of that effort is directed to establishing resistance, defect growth, and strain rate characteristics of composite materials over the wide range of environmental and load conditions found in commercial turbojet engine operations. Both analytical and experimental efforts are involved.

  16. Study of lubricant circulation in HVAC systems. Volume 1: Description of technical effort and results; Final technical report, March 1995--April 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biancardi, F.R.; Michels, H.H.; Sienel, T.H.

    1996-10-01

    The purpose of this program was to conduct experimental and analytical efforts to determine lubricant circulation characteristics of new HFC/POE pairs and HFC/mineral oil pairs in a representative central residential HVAC system and to compare their behavior with the traditional HCFC-22/mineral oil (refrigerant/lubricant) pair. A dynamic test facility was designed and built to conduct the experimental efforts. This facility provided a unique capability to visually and physically measure oil circulation rates, on-line, in operating systems. A unique on-line ultraviolet-based measurement device was used to obtain detailed data on the rate and level of lubricant oil circulated within the operating heat pump system. The experimental and analytical data developed during the program are presented as a function of vapor velocity, refrigerant/lubricant viscosity, system features and equipment. Both visual observations and instrumentation were used to understand "worst case" oil circulation situations. This report is presented in two volumes. Volume 1 contains a complete description of the program scope, objective, test results summary, conclusions, description of the test facility and recommendations for future effort. Volume 2 contains all of the program test data essentially as taken from the laboratory dynamic test facility during the sequence of runs.

  17. Recent Discoveries and Future Challenges in Atmospheric Organic Chemistry.

    PubMed

    Glasius, Marianne; Goldstein, Allen H

    2016-03-15

    Earth's atmosphere contains a multitude of organic compounds, which differ by orders of magnitude regarding fundamental properties such as volatility, reactivity, and propensity to form cloud droplets, affecting their impact on global climate and human health. Despite recent major research efforts and advances, there are still substantial gaps in understanding of atmospheric organic chemistry, hampering efforts to understand, model, and mitigate environmental problems such as aerosol formation in both polluted urban and more pristine regions. The analytical toolbox available for chemists to study atmospheric organic components has expanded considerably during the past decade, opening new windows into speciation, time resolution and detection of reactive and semivolatile compounds at low concentrations. This has provided unprecedented opportunities, but also unveiled new scientific challenges. Specific groundbreaking examples include the role of epoxides in aerosol formation especially from isoprene, the importance of highly oxidized, reactive organics in air-surface processes (whether atmosphere-biosphere exchange or aerosols), as well as the extent of interactions of anthropogenic and biogenic emissions and the resulting impact on atmospheric organic chemistry.

  18. Toward a Predictive Understanding of Earth’s Microbiomes to Address 21st Century Challenges

    PubMed Central

    Blaser, Martin J.; Cardon, Zoe G.; Cho, Mildred K.; Dangl, Jeffrey L.; Green, Jessica L.; Knight, Rob; Maxon, Mary E.; Northen, Trent R.; Pollard, Katherine S.

    2016-01-01

    ABSTRACT Microorganisms have shaped our planet and its inhabitants for over 3.5 billion years. Humankind has had a profound influence on the biosphere, manifested as global climate and land use changes, and extensive urbanization in response to a growing population. The challenges we face to supply food, energy, and clean water while maintaining and improving the health of our population and ecosystems are significant. Given the extensive influence of microorganisms across our biosphere, we propose that a coordinated, cross-disciplinary effort is required to understand, predict, and harness microbiome function. From the parallelization of gene function testing to precision manipulation of genes, communities, and model ecosystems and development of novel analytical and simulation approaches, we outline strategies to move microbiome research into an era of causality. These efforts will improve prediction of ecosystem response and enable the development of new, responsible, microbiome-based solutions to significant challenges of our time. PMID:27178263

  19. Toward a Predictive Understanding of Earth's Microbiomes to Address 21st Century Challenges.

    PubMed

    Blaser, Martin J; Cardon, Zoe G; Cho, Mildred K; Dangl, Jeffrey L; Donohue, Timothy J; Green, Jessica L; Knight, Rob; Maxon, Mary E; Northen, Trent R; Pollard, Katherine S; Brodie, Eoin L

    2016-05-13

    Microorganisms have shaped our planet and its inhabitants for over 3.5 billion years. Humankind has had a profound influence on the biosphere, manifested as global climate and land use changes, and extensive urbanization in response to a growing population. The challenges we face to supply food, energy, and clean water while maintaining and improving the health of our population and ecosystems are significant. Given the extensive influence of microorganisms across our biosphere, we propose that a coordinated, cross-disciplinary effort is required to understand, predict, and harness microbiome function. From the parallelization of gene function testing to precision manipulation of genes, communities, and model ecosystems and development of novel analytical and simulation approaches, we outline strategies to move microbiome research into an era of causality. These efforts will improve prediction of ecosystem response and enable the development of new, responsible, microbiome-based solutions to significant challenges of our time. Copyright © 2016 Blaser et al.

  20. The Influence of Dynamic Contact Angle on Wetting Dynamics

    NASA Technical Reports Server (NTRS)

    Rame, Enrique; Garoff, Steven

    2005-01-01

    When surface tension forces dominate, and regardless of whether the situation is static or dynamic, the contact angle (the angle the interface between two immiscible fluids makes when it contacts a solid) is the key parameter that determines the shape of a fluid-fluid interface. The static contact angle is easy to measure and implement in models predicting static capillary surface shapes and such associated quantities as pressure drops. By contrast, when the interface moves relative to the solid (as in dynamic wetting processes), the dynamic contact angle is not identified unambiguously because it depends on the geometry of the system. Consequently, its determination becomes problematic and measurements in one geometry cannot be applied in another for prediction purposes. However, knowing how to measure and use the dynamic contact angle is crucial to reliably determining quantities such as microsystem throughput. In this talk we will present experimental and analytical efforts aimed at resolving modeling issues present in dynamic wetting. We will review experiments that show the inadequacy of the usual hydrodynamic model when a fluid-fluid meniscus moves over a solid surface such as the wall of a small tube or duct. We will then present analytical results that show how to parametrize these problems in a predictive manner. We will illustrate these ideas by showing how to implement the method in numerical fluid mechanical calculations.

  1. INCAS: an analytical model to describe displacement cascades

    NASA Astrophysics Data System (ADS)

    Jumel, Stéphanie; Claude Van-Duysen, Jean

    2004-07-01

    REVE (REactor for Virtual Experiments) is an international project aimed at developing tools to simulate neutron irradiation effects in Light Water Reactor materials (Fe, Ni or Zr-based alloys). One of the important steps of the project is to characterise the displacement cascades induced by neutrons. Accordingly, the Department of Material Studies of Electricité de France developed an analytical model based on the binary collision approximation. This model, called INCAS (INtegration of CAScades), was devised to be applied to pure elements; however, it can also be used on diluted alloys (reactor pressure vessel steels, etc.) or alloys composed of atoms with close atomic numbers (stainless steels, etc.). INCAS describes displacement cascades by taking into account the nuclear collisions and electronic interactions undergone by the moving atoms. In particular, it makes it possible to determine the mean number of sub-cascades induced by a PKA (depending on its energy) as well as the mean energy dissipated in each of them. The experimental validation of INCAS requires a large effort and could not be carried out in the framework of the study. However, it was verified that INCAS results are in conformity with those obtained from other approaches. As a first application, INCAS was applied to determine the sub-cascade spectrum induced in iron by the neutron spectrum corresponding to the central channel of the High Flux Irradiation Reactor of Oak Ridge National Laboratory.

  2. Review of methodological and experimental LIBS techniques for coal analysis and their application in power plants in China

    NASA Astrophysics Data System (ADS)

    Zhao, Yang; Zhang, Lei; Zhao, Shu-Xia; Li, Yu-Fang; Gong, Yao; Dong, Lei; Ma, Wei-Guang; Yin, Wang-Bao; Yao, Shun-Chun; Lu, Ji-Dong; Xiao, Lian-Tuan; Jia, Suo-Tang

    2016-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an emerging analytical spectroscopy technique. This review presents the main recent developments in China regarding the implementation of LIBS for coal analysis. The paper mainly focuses on the progress of the past few years in the fundamentals, data pretreatment, calibration models, and experimental issues of LIBS and its application to coal analysis. Many important domestic studies focusing on coal quality analysis have been conducted. For example, a proposed novel hybrid quantification model can provide more reproducible quantitative analytical results; the model achieved average absolute errors (AREs) of 0.42%, 0.05%, 0.07%, and 0.17% for carbon, hydrogen, volatiles, and ash, respectively, and 0.07 MJ/kg for the heat value. Atomic/ionic emission lines and molecular bands, such as CN and C2, have been employed to generate more accurate analysis results, achieving an ARE of 0.26% and a 0.16% limit of detection (LOD) for the prediction of unburned carbon in fly ashes. Both laboratory and on-line LIBS apparatuses have been developed for field application in coal-fired power plants. We consider that both the accuracy and the repeatability of the elemental and proximate analysis of coal have increased significantly, and further efforts will be devoted to realizing large-scale commercialization of coal quality analyzers in China.

  3. Tennessee long-range transportation plan : project evaluation system

    DOT National Transportation Integrated Search

    2005-12-01

    The Project Evaluation System (PES) Report is an analytical methodology to aid programming efforts and prioritize multimodal investments. The methodology consists of both quantitative and qualitative evaluation criteria built upon the Guiding Princip...

  4. Introduction: evaluation in analytic theory and political practice.

    PubMed

    Brown, Lawrence D; Gusmano, Michael K

    2013-12-01

    The development of professional policy analysis was driven by a desire to apply "science" to policy decisions, but the vision of apolitical policy analysis is as unattainable today as it was at the inception of the field. While there is powerful evidence that schemes to "get around" politics are futile, they never seem to lose their popularity. The contemporary enthusiasm for health technology assessment and comparative-effectiveness research extends these efforts to find technical, bureaucratic fixes to the problem of health care costs. As the benefits and costs of health care continue to grow, so too will the search for analytic evidence and insights. It is important to recognize that the goal of these efforts should not be to eliminate but rather to enrich political deliberations that govern what societies pay for and get from their health care systems.

  5. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the adapted deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
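    The hybrid idea above, an analytic first-principles model augmented by a data-driven part that absorbs unknown process characteristics, can be sketched minimally as follows. The process, the analytic part, the cubic residual basis, and the use of plain least squares in place of the patent's scaled equation error minimization technique are all illustrative assumptions:

```python
import math

# Hedged sketch of an "analytic + learned residual" model in the spirit
# of the SQNA approach. The process, the analytic part, and the residual
# basis are illustrative, not the patent's actual formulation.

def true_process(x):
    return 2.0 * x + 0.5 * math.sin(x)   # known physics + unknown effect

def analytic_model(x):
    return 2.0 * x                       # first-principles part only

# Fit the residual r(x) = y - analytic_model(x) with basis {x, x^3}
# by ordinary least squares (2x2 normal equations solved by hand).
xs = [i / 10.0 for i in range(-10, 11)]
rs = [true_process(x) - analytic_model(x) for x in xs]

s11 = sum(x**2 for x in xs); s13 = sum(x**4 for x in xs)
s33 = sum(x**6 for x in xs)
b1 = sum(x * r for x, r in zip(xs, rs))
b3 = sum(x**3 * r for x, r in zip(xs, rs))
det = s11 * s33 - s13 * s13
w1 = (b1 * s33 - b3 * s13) / det
w3 = (s11 * b3 - s13 * b1) / det

def hybrid_model(x):
    return analytic_model(x) + w1 * x + w3 * x**3

err_analytic = max(abs(true_process(x) - analytic_model(x)) for x in xs)
err_hybrid = max(abs(true_process(x) - hybrid_model(x)) for x in xs)
```

    The learned correction absorbs the unmodeled sinusoidal term, so the hybrid model's worst-case error on the training range is far below that of the analytic part alone.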

  6. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part I. Theory.

    PubMed

    Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav

    2015-03-06

    The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, mobility of the free analyte, and mobility of the complex can be measured and used in a standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be simply used as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration

    NASA Technical Reports Server (NTRS)

    Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.

    1993-01-01

    Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.

  8. (U) Physics Validation of the RMI-Based Ejecta Source Model Implementation in FLAG: L2 Milestone #6035 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, I. L.

    The Los Alamos Physics and Engineering Models (PEM) program has developed a model for Richtmyer-Meshkov instability (RMI) based ejecta production from shock-melted surfaces, along with a prescription for a self-similar velocity distribution (SSVD) of the resulting ejecta particles. We have undertaken an effort to validate this source model using data from explosively driven tin coupon experiments. The model’s current formulation lacks a crucial piece of physics: a method for determining the duration of the ejecta production interval. Without a mechanism for terminating ejecta production, the model is not predictive. Furthermore, when the production interval is hand-tuned to match time-integrated mass data, the predicted time-dependent mass accumulation on a downstream sensor rises too sharply at early times and too slowly at late times because the SSVD overestimates the amount of mass stored in the fastest particles and underestimates the mass stored in the slowest particles. The functional form of the resulting m(t) is inconsistent with the available time-dependent data; numerical simulations and analytic studies agree on this point. Simulated mass tallies are highly sensitive to radial expansion of the ejecta cloud. It is not clear if the same effect is present in the experimental data but if so, depending on the degree, this may challenge the model’s compatibility with tin coupon data. The current implementation of the model in FLAG is sensitive to the detailed interaction between kinematics (hydrodynamic methods) and thermodynamics (material models); this sensitivity prohibits certain physics modeling choices. The appendices contain an extensive analytic study of piezoelectric ejecta mass measurements, along with test problems, excerpted from a longer work (LA-UR-17-21218).

  9. Early-Life Nutrition and Neurodevelopment: Use of the Piglet as a Translational Model

    PubMed Central

    Mudd, Austin T

    2017-01-01

    Optimal nutrition early in life is critical to ensure proper structural and functional development of infant organ systems. Although pediatric nutrition historically has emphasized research on the relation between nutrition, growth rates, and gastrointestinal maturation, efforts increasingly have focused on how nutrition influences neurodevelopment. The provision of human milk is considered the gold standard in pediatric nutrition; thus, there is interest in understanding how functional nutrients and bioactive components in milk may modulate developmental processes. The piglet has emerged as an important translational model for studying neurodevelopmental outcomes influenced by pediatric nutrition. Given the comparable nutritional requirements and strikingly similar brain developmental patterns between young pigs and humans, the piglet is being used increasingly in developmental nutritional neuroscience studies. The piglet primarily has been used to assess the effects of dietary fatty acids and their accretion in the brain throughout neurodevelopment. However, recent research indicates that other dietary components, including choline, iron, cholesterol, gangliosides, and sialic acid, among other compounds, also affect neurodevelopment in the pig model. Moreover, novel analytical techniques, including but not limited to MRI, behavioral assessments, and molecular quantification, allow for a more holistic understanding of how nutrition affects neurodevelopmental patterns. By combining early-life nutritional interventions with innovative analytical approaches, opportunities abound to quantify factors affecting neurodevelopmental trajectories in the neonate. This review discusses research using the translational pig model with primary emphasis on early-life nutrition interventions assessing neurodevelopment outcomes, while also discussing nutritionally-sensitive methods to characterize brain maturation. PMID:28096130

  10. Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.

    PubMed

    Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen

    2015-10-01

    Surface plasmon resonance (SPR) is a widely used, affinity based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.
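    The coupled non-linear differential equations referred to above are, in the commonly used bivalent analyte scheme, the rate equations for the singly and doubly bound species. A minimal forward-Euler sketch, with illustrative rate constants rather than values fitted to any experiment:

```python
# Hedged sketch of the bivalent analyte scheme often fitted to SPR
# sensorgrams (rate constants are illustrative, not from the paper):
#
#   A + L  <-> AL    (ka1, kd1)   first arm binds a surface ligand
#   AL + L <-> AL2   (ka2, kd2)   second arm binds a neighboring ligand
#
# d[AL]/dt  = ka1*C*L - kd1*AL - ka2*AL*L + kd2*AL2
# d[AL2]/dt = ka2*AL*L - kd2*AL2,   free sites L = Lmax - AL - 2*AL2

def simulate(C=1e-8, Lmax=1.0, ka1=1e5, kd1=1e-3, ka2=1.0, kd2=1e-2,
             t_end=600.0, dt=0.01):
    """Forward-Euler integration of the bivalent analyte rate equations."""
    AL = AL2 = 0.0
    for _ in range(int(t_end / dt)):
        L = Lmax - AL - 2.0 * AL2                      # free ligand sites
        dAL = ka1 * C * L - kd1 * AL - ka2 * AL * L + kd2 * AL2
        dAL2 = ka2 * AL * L - kd2 * AL2
        AL += dAL * dt
        AL2 += dAL2 * dt
    return AL, AL2

AL, AL2 = simulate()
response = AL + AL2   # SPR response ~ total bound analyte
```

    Because the second binding step consumes a neighboring ligand site without adding analyte mass, sensorgrams generated this way show the biphasic character the note's "signature" is designed to detect.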

  11. Selected field and analytical methods and analytical results in the Dutch Flats area, western Nebraska, 1995-99

    USGS Publications Warehouse

    Verstraeten, Ingrid M.; Steele, G.V.; Cannia, J.C.; Bohlke, J.K.; Kraemer, T.E.; Hitch, D.E.; Wilson, K.E.; Carnes, A.E.

    2001-01-01

    A study of the water resources of the Dutch Flats area in the western part of the North Platte Natural Resources District, western Nebraska, was conducted from 1995 through 1999 to describe the surface water and hydrogeology, the spatial distribution of selected water-quality constituents in surface and ground water, and the surface-water/ground-water interaction in selected areas. This report describes the selected field and analytical methods used in the study and selected analytical results from the study not previously published. Specifically, dissolved gases, age-dating data, and other isotopes collected as part of an intensive sampling effort in August and November 1998 and all uranium and uranium isotope data collected through the course of this study are included in the report.

  12. Extending Climate Analytics as a Service to the Earth System Grid Federation Progress Report on the Reanalysis Ensemble Service

    NASA Astrophysics Data System (ADS)

    Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.

    2016-12-01

    We are extending climate analytics-as-a-service, including: (1) A high-performance Virtual Real-Time Analytics Testbed supporting six major reanalysis data sets using advanced technologies like the Cloudera Impala-based SQL and Hadoop-based MapReduce analytics over native NetCDF files. (2) A Reanalysis Ensemble Service (RES) that offers a basic set of commonly used operations over the reanalysis collections that are accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib. (3) An Open Geospatial Consortium (OGC) WPS-compliant Web service interface to CDSlib to accommodate ESGF's Web service endpoints. This presentation will report on the overall progress of this effort, with special attention to recent enhancements that have been made to the Reanalysis Ensemble Service, including the following:
    - A CDSlib Python library that supports full temporal, spatial, and grid-based resolution services
    - A new reanalysis collections reference model to enable operator design and implementation
    - An enhanced library of sample queries to demonstrate and develop use case scenarios
    - Extended operators that enable single- and multiple-reanalysis area average, vertical average, re-gridding, and trend, climatology, and anomaly computations
    - Full support for the MERRA-2 reanalysis and the initial integration of two additional reanalyses
    - A prototype Jupyter notebook-based distribution mechanism that combines CDSlib documentation with interactive use case scenarios and personalized project management
    - Prototyped uncertainty quantification services that combine ensemble products with comparative observational products
    - Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic subsetting and arithmetic operations over the data and extractions of trends, climatologies, and anomalies
    - The ability to compute and visualize multiple reanalysis intercomparisons

  13. When can efforts to control nuisance and invasive species backfire?

    USGS Publications Warehouse

    Zipkin, E.F.; Kraft, C.E.; Cooch, E.G.; Sullivan, P.J.

    2009-01-01

    Population control through harvest has the potential to reduce the abundance of nuisance and invasive species. However, demographic structure and density-dependent processes can confound removal efforts and lead to undesirable consequences, such as overcompensation (an increase in abundance in response to harvest) and instability (population cycling or chaos). Recent empirical studies have demonstrated the potential for increased mortality (such as that caused by harvest) to lead to overcompensation and instability in plant, insect, and fish populations. We developed a general population model with juvenile and adult stages to help determine the conditions under which control harvest efforts can produce unintended outcomes. Analytical and simulation analyses of the model demonstrated that the potential for overcompensation as a result of harvest was significant for species with high fecundity, even when annual stage-specific survivorship values were fairly low. Population instability as a result of harvest occurred less frequently and was only possible with harvest strategies that targeted adults when both fecundity and adult survivorship were high. We considered these results in conjunction with current literature on nuisance and invasive species to propose general guidelines for assessing the risks associated with control harvest based on life history characteristics of target populations. Our results suggest that species with high per capita fecundity (over discrete breeding periods), short juvenile stages, and fairly constant survivorship rates are most likely to respond undesirably to harvest. It is difficult to determine the extent to which overcompensation and instability could occur during real-world removal efforts, and more empirical removal studies should be undertaken to evaluate population-level responses to control harvests. Nevertheless, our results identify key issues that have been seldom acknowledged and are potentially generic across taxa. © 2009 by the Ecological Society of America.

  14. When can efforts to control nuisance and invasive species backfire?

    PubMed

    Zipkin, Elise F; Kraft, Clifford E; Cooch, Evan G; Sullivan, Patrick J

    2009-09-01

    Population control through harvest has the potential to reduce the abundance of nuisance and invasive species. However, demographic structure and density-dependent processes can confound removal efforts and lead to undesirable consequences, such as overcompensation (an increase in abundance in response to harvest) and instability (population cycling or chaos). Recent empirical studies have demonstrated the potential for increased mortality (such as that caused by harvest) to lead to overcompensation and instability in plant, insect, and fish populations. We developed a general population model with juvenile and adult stages to help determine the conditions under which control harvest efforts can produce unintended outcomes. Analytical and simulation analyses of the model demonstrated that the potential for overcompensation as a result of harvest was significant for species with high fecundity, even when annual stage-specific survivorship values were fairly low. Population instability as a result of harvest occurred less frequently and was only possible with harvest strategies that targeted adults when both fecundity and adult survivorship were high. We considered these results in conjunction with current literature on nuisance and invasive species to propose general guidelines for assessing the risks associated with control harvest based on life history characteristics of target populations. Our results suggest that species with high per capita fecundity (over discrete breeding periods), short juvenile stages, and fairly constant survivorship rates are most likely to respond undesirably to harvest. It is difficult to determine the extent to which overcompensation and instability could occur during real-world removal efforts, and more empirical removal studies should be undertaken to evaluate population-level responses to control harvests. Nevertheless, our results identify key issues that have been seldom acknowledged and are potentially generic across taxa.
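    A minimal sketch of how harvest can produce overcompensation in a stage-structured model: assuming Ricker-type density-dependent recruitment and harvest on adults (an illustrative construction in the spirit of the abstract, not the authors' exact model), the equilibria can be written in closed form:

```python
import math

# Hedged sketch of a two-stage (juvenile/adult) harvest model with
# Ricker-type density-dependent recruitment. Parameter values are chosen
# only to display overcompensation, not taken from the paper.
#
#   J[t+1] = f * A[t] * exp(-b * A[t])             (recruitment)
#   A[t+1] = s_j * J[t] + s_a * (1 - h) * A[t]     (survival, adult harvest)
#
# Setting J[t+1] = J[t] = J* and A[t+1] = A[t] = A* gives the equilibria:

def equilibria(f, b, s_j, s_a, h):
    c = 1.0 - s_a * (1.0 - h)          # net adult loss rate
    A = math.log(s_j * f / c) / b      # adult equilibrium
    J = c * A / s_j                    # juvenile equilibrium
    return J, A

# High fecundity: harvesting adults *raises* total equilibrium abundance.
J0, A0 = equilibria(f=40.0, b=1.0, s_j=0.5, s_a=0.5, h=0.0)
J1, A1 = equilibria(f=40.0, b=1.0, s_j=0.5, s_a=0.5, h=0.5)
```

    With these illustrative parameters, 50% adult harvest lowers adult abundance (about 3.28 versus 3.69) but raises total equilibrium abundance from about 7.38 to about 8.21, overcompensation driven by high fecundity acting through the density-dependent recruitment term.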

  15. An Investigation of Reliability Models for Ceramic Matrix Composites and their Implementation into Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.

    1998-01-01

    The development of modeling approaches for the failure analysis of ceramic-based material systems used in high temperature environments was the primary objective of this research effort. These materials have the potential to support many key engineering technologies related to the design of aeropropulsion systems. Monolithic ceramics exhibit a number of useful properties such as retention of strength at high temperatures, chemical inertness, and low density. However, the use of monolithic ceramics has been limited by their inherent brittleness and a large variation in strength. This behavior has motivated material scientists to reinforce the monolithic material with a ceramic fiber. The addition of a second ceramic phase with an optimized interface increases toughness and marginally increases strength. The primary purpose of the fiber is to arrest crack growth, not to increase strength. The material systems of interest in this research effort were laminated ceramic matrix composites, as well as two- and three-dimensional fabric reinforced ceramic composites. These emerging composite systems can compete with metals in many demanding applications. However, the ongoing metamorphosis of ceramic composite material systems, and the lack of standardized design data has in the past tended to minimize research efforts related to structural analysis. Many structural components fabricated from ceramic matrix composites (CMC) have been designed by "trial and error." The justification for this approach lies in the fact that during the initial developmental phases for a material system, fabrication issues are paramount. Emphasis is placed on demonstrating feasibility rather than fully understanding the processes controlling mechanical behavior. This is understandable during periods of rapid improvements in material properties for any composite system. 
But to avoid the ad hoc approach, the analytical methods developed under this effort can be used to develop rational structural design protocols.

  16. Translation of proteomic biomarkers into FDA approved cancer diagnostics: issues and challenges

    PubMed Central

    2013-01-01

    Tremendous efforts have been made over the past few decades to discover novel cancer biomarkers for use in clinical practice. However, a striking discrepancy exists between the effort directed toward biomarker discovery and the number of markers that make it into clinical practice. One of the confounding issues in translating a novel discovery into clinical practice is that quite often the scientists working on biomarker discovery have limited knowledge of the analytical, diagnostic, and regulatory requirements for a clinical assay. This review provides an introduction to such considerations with the aim of generating more extensive discussion for study design, assay performance, and regulatory approval in the process of translating new proteomic biomarkers from discovery into cancer diagnostics. We first describe the analytical requirements for a robust clinical biomarker assay, including concepts of precision, trueness, specificity and analytical interference, and carryover. We next introduce the clinical considerations of diagnostic accuracy, receiver operating characteristic analysis, positive and negative predictive values, and clinical utility. We finish the review by describing components of the FDA approval process for protein-based biomarkers, including classification of biomarker assays as medical devices, analytical and clinical performance requirements, and the approval process workflow. While we recognize that the road from biomarker discovery, validation, and regulatory approval to the translation into the clinical setting could be long and difficult, the reward for patients, clinicians and scientists could be rather significant. PMID:24088261

  17. Estimation of real-time runway surface contamination using flight data recorder parameters

    NASA Astrophysics Data System (ADS)

    Curry, Donovan

    Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise, the longitudinal, lateral and normal forces due to landing are calculated along with the individual deceleration components existent when an aircraft comes to a rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet, and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in the research effort. With all needed parameters, a comparison and validation of simulated and estimated data under different runway conditions is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to be a reasonably accurate estimate when compared to the simulated friction coefficient. This is also true when the FDR and estimated parameters are introduced to white noise and when crosswind is introduced to the simulation. 
    The linear analysis shows that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, it shows that with estimated parameters increased and decreased by up to 25% at random, high-priority parameters must be accurate to within +/-5% to keep the change in the average coefficient of friction below 1%. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
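    The force-balance premise described above can be reduced, for the longitudinal axis alone, to a one-line estimate of the instantaneous friction coefficient. This sketch omits the moment balance, gear, and antiskid models of the full six-degree-of-freedom simulation, and all numbers are illustrative:

```python
# Hedged sketch: during ground roll, summing longitudinal forces and
# solving for the friction force gives an instantaneous friction
# coefficient. All values below are illustrative placeholders.

def friction_coefficient(mass, decel, drag, reverse_thrust, lift, g=9.81):
    """Estimate mu from a longitudinal force balance at one instant.

    mass            aircraft mass, kg
    decel           measured longitudinal deceleration, m/s^2 (positive)
    drag            aerodynamic drag, N
    reverse_thrust  retarding thrust, N (0 if none)
    lift            aerodynamic lift, N
    """
    normal_force = mass * g - lift                      # weight on the wheels
    braking_force = mass * decel - drag - reverse_thrust
    return braking_force / normal_force

# Illustrative mid-roll numbers for a transport-category aircraft:
mu = friction_coefficient(mass=60000, decel=2.5, drag=30000,
                          reverse_thrust=40000, lift=150000)
```

    The sensitivity findings above follow directly from this structure: errors in the large terms (mass, lift, deceleration) propagate almost linearly into the estimated coefficient.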

  18. Value of Earth Observations: Key principles and techniques of socioeconomic benefits analysis (Invited)

    NASA Astrophysics Data System (ADS)

    Friedl, L.; Macauley, M.; Bernknopf, R.

    2013-12-01

    Internationally, multiple organizations are placing greater emphasis on the societal benefits that governments, businesses, and NGOs can derive from applications of Earth-observing satellite observations, research, and models. A growing set of qualitative, anecdotal examples on the uses of Earth observations across a range of sectors can be complemented by the quantitative substantiation of the socioeconomic benefits. In turn, the expanding breadth of environmental data available and the awareness of their beneficial applications to inform decisions can support new products and services by companies, agencies, and civil society. There are, however, significant efforts needed to bridge the Earth sciences and social and economic sciences fields to build capacity, develop case studies, and refine analytic techniques in quantifying socioeconomic benefits from the use of Earth observations. Some government programs, such as the NASA Earth Science Division's Applied Sciences Program have initiated activities in recent years to quantify the socioeconomic benefits from applications of Earth observations research, and to develop multidisciplinary models for organizations' decision-making activities. A community of practice has conducted workshops, developed impact analysis reports, published a book, developed a primer, and pursued other activities to advance analytic methodologies and build capacity. This paper will present an overview of measuring socioeconomic impacts of Earth observations and how the measures can be translated into a value of Earth observation information. It will address key terms, techniques, principles and applications of socioeconomic impact analyses. It will also discuss activities to pursue a research agenda on analytic techniques, develop a body of knowledge, and promote broader skills and capabilities.

  19. Improved partition equilibrium model for predicting analyte response in electrospray ionization mass spectrometry.

    PubMed

    Du, Lihong; White, Robert L

    2009-02-01

    A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.
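    A minimal sketch of the competition mechanism described above: analyte and background electrolytes compete for a fixed excess charge, and the analyte response is proportional to its share. The functional form and coefficients are illustrative assumptions, not the paper's fitted model:

```python
# Hedged sketch of a charge-competition response model of the general
# form used in partition-equilibrium treatments of ESI. Coefficients are
# illustrative, not fitted values from the paper.

def esi_response(conc_A, k_A, others, Q=1.0, p=0.1):
    """Predicted MS response for analyte A.

    conc_A  analyte concentration
    k_A     analyte's affinity for the droplet surface charge
    others  list of (concentration, coefficient) for competing species
    Q       total excess charge (arbitrary units)
    p       overall sampling/transmission efficiency
    """
    competition = k_A * conc_A + sum(k * c for c, k in others)
    return p * Q * k_A * conc_A / competition

clean = esi_response(1e-5, 10.0, others=[(1e-3, 0.1)])
salty = esi_response(1e-5, 10.0, others=[(1e-3, 0.1), (1e-2, 1.0)])
```

    Adding a 10 mM competitor (the last entry in `others`) suppresses the predicted analyte response by roughly fifty-fold in this example, the qualitative ion-suppression behavior the model is meant to capture.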

  20. NASA/University JOint VEnture (JOVE) Program: Transverse Shear Moduli Using the Torsional Responses of Rectangular Laminates

    NASA Technical Reports Server (NTRS)

    Bogan, Sam

    2001-01-01

    The first year included a study of the non-visible damage of composite overwrapped pressure vessels (COPVs) with B. Poe of the Materials Branch of NASA Langley. Early determinations showed a clear reduction in non-visible damage for thin COPVs when partially pressurized rather than unpressurized. Literature searches on thicker-walled COPVs revealed surface damage that was clearly visible. Analysis of current analytic modeling indicated that COPV models lacked sufficient thickness corrections to predict impact damage. After a comprehensive study of available published data and numerous numerical studies based on observed data from Langley, the analytic framework for modeling the behavior was found lacking, and both Poe and Bogan suggested any short-term (3-yr) result for JOVE would be overly ambitious and that emphasis should be placed on transverse shear moduli studies. Transverse shear moduli determination is relevant to the study of fatigue, fracture, and aging effects in composite structures. Based on the techniques developed by Daniel & Tsai, Bogan and Gates set out to verify the results for K3B and 8320. A detailed analytic and experimental plan was established and carried out that included variations in layup, width, thickness, and length, as well as loading-rate variations to determine rate effects and relaxation moduli. The additional axial loads during the torsion testing were studied, as was the placement of gages along the composite specimen. Of the proposed tasks, all of tasks 1 and 2 were completed, with presentations given at Langley, SEM conferences, and ASME/AIAA conferences. Sensitivity issues with the technique, associated with the use of servohydraulic test systems for applying the torsional load to the composite specimen, limited the torsion range for predictable and repeatable transverse shear properties. 
Bogan and Gates determined to diverge on research efforts with Gates continuing the experimental testing at Langley and Bogan modeling the apparent non-linear behavior at low torque & angles apparent from the tests.

  1. Improving a complex finite-difference ground water flow model through the use of an analytic element screening model

    USGS Publications Warehouse

    Hunt, R.J.; Anderson, M.P.; Kelson, V.A.

    1998-01-01

    This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
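
    The screening approach rests on superposition: each analytic element (a pumping well, a uniform regional flow) is itself a solution of the governing Laplace equation, so their sum is too. A minimal sketch of that idea for steady confined flow (the parameters and helper names are illustrative, not the Hunt et al. model):

```python
import math

def well_head(x, y, xw, yw, Q, T, R):
    """Head contribution of one pumping well (Thiem solution):
    -Q/(2*pi*T) * ln(R/r), zero at the reference radius R.
    Steady, confined, two-dimensional flow assumed."""
    r = math.hypot(x - xw, y - yw)
    return -Q / (2.0 * math.pi * T) * math.log(R / r)

def uniform_flow_head(x, y, qx, T):
    """Head contribution of uniform regional flow (specific
    discharge qx in the +x direction)."""
    return -qx * x / T

def total_head(x, y, wells, qx, T, R, h0=100.0):
    """Analytic element superposition: reference head plus the sum
    of every element's contribution; each term solves the Laplace
    equation, so the sum does too."""
    h = h0 + uniform_flow_head(x, y, qx, T)
    for (xw, yw, Q) in wells:
        h += well_head(x, y, xw, yw, Q, T, R)
    return h

# Two pumping wells in a regional flow field (illustrative numbers).
wells = [(0.0, 0.0, 500.0), (200.0, 50.0, 300.0)]  # (x, y, Q [m^3/d])
h = total_head(100.0, 100.0, wells, qx=0.05, T=250.0, R=1000.0)
```

    Because the head field is a plain sum, a far-field element model can be evaluated anywhere along a proposed finite-difference model boundary to supply the improved boundary conditions the paper describes.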

  2. Solar thermal storage applications program

    NASA Astrophysics Data System (ADS)

    Peila, W. C.

    1982-12-01

    The efforts of the Storage Applications Program are reviewed. The program concentrated on the investigation of storage media and the evaluation of storage methods. Extensive effort was devoted to experimental and analytical investigations of nitrate salts. Two major tasks were the preliminary design of a 1200-MWt (thermal) system and the design, construction, operation, and evaluation of a subsystem research experiment that utilized the same design. Some preliminary conclusions drawn from the subsystem research experiment are given.

  3. ANALYTICAL CHALLENGES OF ENVIRONMENTAL ENDOCRINE DISRUPTOR MONITORING

    EPA Science Inventory

    Reported increases in the incidence of endocrine-related conditions have led to speculation about environmental causes. Environmental scientists are focusing increased research effort into understanding the mechanisms by which endocrine disruptors affect human and ecological h...

  4. Nonlinear Unsteady Aerodynamic Modeling Using Wind Tunnel and Computational Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.

    2016-01-01

    Extensions to conventional aircraft aerodynamic models are required to adequately predict responses when nonlinear unsteady flight regimes are encountered, especially at high incidence angles and under maneuvering conditions. For a number of reasons, such as loss of control, both military and civilian aircraft may extend beyond normal and benign aerodynamic flight conditions. In addition, military applications may require controlled flight beyond the normal envelope, and civilian flight may require adequate recovery or prevention methods from these adverse conditions. These requirements have led to the development of more general aerodynamic modeling methods and provided impetus for researchers to improve both techniques and the degree of collaboration between analytical and experimental research efforts. In addition to more general mathematical model structures, dynamic test methods have been designed to provide sufficient information to allow model identification. This paper summarizes research to develop a modeling methodology appropriate for modeling aircraft aerodynamics that include nonlinear unsteady behaviors using both experimental and computational test methods. This work was done at Langley Research Center, primarily under the NASA Aviation Safety Program, to address aircraft loss of control, prevention, and recovery aerodynamics.

  5. WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL

    EPA Science Inventory

    The Wellhead Analytic Element Model (WhAEM) demonstrates a new technique for the definition of time-of-travel capture zones in relatively simple geohydrologic settings. he WhAEM package includes an analytic element model that uses superposition of (many) analytic solutions to gen...

  6. Expanded envelope concepts for aircraft control-element failure detection and identification

    NASA Technical Reports Server (NTRS)

    Weiss, Jerold L.; Hsu, John Y.

    1988-01-01

    The purpose of this effort was to develop and demonstrate concepts for expanding the envelope of failure detection and isolation (FDI) algorithms for aircraft-path failures. An algorithm which uses analytic redundancy in the form of aerodynamic force and moment balance equations was used. Because aircraft-path FDI uses analytical models, there is a tradeoff between accuracy and the ability to detect and isolate failures. For single-flight-condition operation, design and analysis methods were developed to deal with this robustness problem. When the departure from the single flight condition is significant, algorithm adaptation is necessary. Adaptation requirements for the residual-generation portion of the FDI algorithm are interpreted as the need for accurate, large-motion aerodynamic models over a broad range of velocity and altitude conditions. For the decision-making part of the algorithm, adaptation may require modifications to filtering operations, thresholds, and projection vectors that define the various hypothesis tests performed in the decision mechanism. Methods of obtaining and evaluating adequate residual-generation and decision-making designs have been developed. The application of the residual-generation ideas to a high-performance fighter is demonstrated by developing adaptive residuals for the AFTI/F-16 and simulating their behavior under a variety of maneuvers using the results of a NASA F-16 simulation.

  7. Automatic estimation of aquifer parameters using long-term water supply pumping and injection records

    NASA Astrophysics Data System (ADS)

    Luo, Ning; Illman, Walter A.

    2016-09-01

    Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
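
    For reference, the Theis model used in the WELLS code predicts drawdown s = Q/(4*pi*T) * W(u) with u = r^2 * S/(4*T*t), where W is the exponential-integral well function. A minimal sketch of the forward calculation (not the WELLS/PEST implementation itself; function names are illustrative):

```python
import math

def well_function(u, tol=1e-12):
    """Theis well function W(u) = E1(u), via the convergent series
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!),
    accurate for the small u typical of aquifer tests."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    term, total, n = u, u, 1
    while abs(term) > tol:
        n += 1
        term *= -u * (n - 1) / (n * n)  # ratio of successive terms
        total += term
    return -gamma - math.log(u) + total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s(r, t) for a constant pumping rate Q
    (consistent units throughout)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)
```

    Fitting T and S then amounts to adjusting the two parameters until simulated drawdowns match the observed water-level changes, which is the role PEST plays in the study.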

  8. ANALYTICAL PLANS SUPPORTING THE SWPF GAP ANALYSIS BEING CONDUCTED WITH ENERGYSOLUTIONS AND THE VITREOUS STATE LABORATORY AT THE CUA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, T.; Peeler, D.

    2014-10-28

    EnergySolutions (ES) and its partner, the Vitreous State Laboratory (VSL) of The Catholic University of America (CUA), are to provide engineering and technical services support to Savannah River Remediation, LLC (SRR) for ongoing operation of the Defense Waste Processing Facility (DWPF) flowsheet as well as for modifications to improve overall plant performance. SRR has requested that the glass formulation team of Savannah River National Laboratory (SRNL) and ES-VSL develop a technical basis that validates the current Product Composition Control System models for use during the processing of the coupled flowsheet or that leads to the refinements of or modifications to the models that are needed so that they may be used during the processing of the coupled flowsheet. SRNL has developed a matrix of test glasses that are to be batched and fabricated by ES-VSL as part of this effort. This document provides two analytical plans for use by ES-VSL: one plan is to guide the measurement of the chemical composition of the study glasses while the second is to guide the measurement of the durability of the study glasses based upon the results of testing by ASTM’s Product Consistency Test (PCT) Method A.

  9. Cloud-Enabled Climate Analytics-as-a-Service using Reanalysis data: A case study.

    NASA Astrophysics Data System (ADS)

    Nadeau, D.; Duffy, D.; Schnase, J. L.; McInerney, M.; Tamkin, G.; Potter, G. L.; Thompson, J. H.

    2014-12-01

    The NASA Center for Climate Simulation (NCCS) maintains advanced data capabilities and facilities that allow researchers to access the enormous volume of data generated by weather and climate models. The NASA Climate Model Data Service (CDS) and the NCCS are merging their efforts to provide Climate Analytics-as-a-Service for the comparative study of the major reanalysis projects: ECMWF ERA-Interim, NASA/GMAO MERRA, NOAA/NCEP CFSR, NOAA/ESRL 20CR, JMA JRA25, and JRA55. These reanalyses were repackaged to the netCDF4 file format following the CMIP5 Climate and Forecast (CF) metadata convention prior to being sequenced into the Hadoop Distributed File System (HDFS). A small set of operations that represent a common starting point in many analysis workflows was then created: min, max, sum, count, variance, and average. In this example, reanalysis data exploration was performed using Hadoop MapReduce, and accessibility was achieved through the Climate Data Service (CDS) application programming interface (API) created at NCCS. This API provides a uniform treatment of large amounts of data. In this case study, we limited our exploration to two variables, temperature and precipitation, using three operations (min, max, and avg) and 30 years of reanalysis data for three regions of the world: global, polar, and subtropical.
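
    All six primitive operations can be computed as one-pass, associatively mergeable reductions, which is what makes them a natural fit for MapReduce over file blocks in HDFS. A hedged sketch of the map/merge/finalize pattern (illustrative only, not the NCCS implementation; function names are assumptions):

```python
from functools import reduce

def chunk_stats(values):
    """Map step: per-chunk partials (count, sum, sum of squares,
    min, max), from which all six operations are derivable."""
    return (len(values), sum(values), sum(v * v for v in values),
            min(values), max(values))

def merge(a, b):
    """Reduce step: combine two partials. The merge is associative,
    so chunks can be combined in any order -- the key MapReduce
    requirement."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2],
            min(a[3], b[3]), max(a[4], b[4]))

def finalize(p):
    """Turn merged partials into the six summary operations."""
    n, s, ss, lo, hi = p
    mean = s / n
    var = ss / n - mean * mean  # population variance
    return {"count": n, "sum": s, "min": lo, "max": hi,
            "avg": mean, "variance": var}

chunks = [[1.0, 2.0, 3.0], [4.0, 5.0], [0.5]]  # stand-ins for file blocks
stats = finalize(reduce(merge, map(chunk_stats, chunks)))
```

    In the actual service the chunks would be HDFS blocks of netCDF4 data and the map/merge steps would run as Hadoop tasks, but the algebra of the reduction is the same.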

  10. Financial Forecasting and Stochastic Modeling: Predicting the Impact of Business Decisions.

    PubMed

    Rubin, Geoffrey D; Patel, Bhavik N

    2017-05-01

    In health care organizations, effective investment of precious resources is critical to assure that the organization delivers high-quality and sustainable patient care within a supportive environment for patients, their families, and the health care providers. This holds true for organizations independent of size, from small practices to large health systems. For radiologists whose role is to oversee the delivery of imaging services and the interpretation, communication, and curation of imaging-informed information, business decisions influence where and how they practice, the tools available for image acquisition and interpretation, and ultimately their professional satisfaction. With so much at stake, physicians must understand and embrace the methods necessary to develop and interpret robust financial analyses so they effectively participate in and better understand decision making. This review discusses the financial drivers upon which health care organizations base investment decisions and the central role that stochastic financial modeling should play in support of strategically aligned capital investments. Given a health care industry that has been slow to embrace advanced financial analytics, a fundamental message of this review is that the skills and analytical tools are readily attainable and well worth the effort to implement in the interest of informed decision making. © RSNA, 2017. Online supplemental material is available for this article.
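
    Stochastic financial modeling of the kind advocated here usually means Monte Carlo simulation of an investment's net present value (NPV) under uncertain inputs, so that a decision rests on a distribution rather than a single point estimate. A minimal sketch; all figures (purchase price, revenue distribution, discount-rate range) are hypothetical:

```python
import random
import statistics

def npv(cash_flows, rate):
    """Net present value of year-indexed cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate_npv(n_trials=20000, seed=42):
    """Draw uncertain annual revenue and discount rate, and return
    the NPV distribution for a hypothetical $500k equipment purchase
    with a 5-year horizon."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        rate = rng.uniform(0.04, 0.08)        # uncertain cost of capital
        revenue = rng.gauss(150_000, 30_000)  # uncertain annual net revenue
        flows = [-500_000] + [revenue] * 5
        results.append(npv(flows, rate))
    return results

r = simulate_npv()
print(f"mean NPV: {statistics.mean(r):,.0f}")
print(f"P(NPV < 0): {sum(x < 0 for x in r) / len(r):.2%}")
```

    The downside probability P(NPV < 0) is exactly the kind of risk measure a point-estimate spreadsheet cannot provide.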

  11. Modeling human pilot cue utilization with applications to simulator fidelity assessment.

    PubMed

    Zeyada, Y; Hess, R A

    2000-01-01

    An analytical investigation to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator was undertaken. Data from a NASA Ames Research Center vertical motion simulator study of a simple, single-degree-of-freedom rotorcraft bob-up/down maneuver were employed in the investigation. The study was part of a larger research effort that has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system that included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle, and the motion system. With the exception of time delays that accrued in visual scene production in the simulator, visual scene effects were not included in this study. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity that occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots who participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to identify changes in simulator fidelity for the task examined.

  12. Incentives for cooperation in quality improvement among hospitals--the impact of the reimbursement system.

    PubMed

    Kesteloot, K; Voet, N

    1998-12-01

    Up to now, few analytical models have studied the incentives for cooperation in quality improvements among hospitals. Only those dealing with reimbursement systems have shown that, from the point of view of individual or competing hospitals, retrospective reimbursement is more likely to encourage quality improvements than prospective financing, while the reverse holds for efficiency improvements. This paper studies the incentives to improve the quality of hospital care in an analytical model, taking into account the possibility of cooperative agreements, price as well as non-price (quality) competition, and quality improvements that may simultaneously increase demand, increase or reduce costs, and spill over to rival hospitals. In this setting, quality improvement efforts rise with the rate of prospective reimbursement, while the impact of the rate of retrospective reimbursement is ambiguous, but likely to be negative for quality improvements that are highly cost-reducing and create large spillovers. Cooperation may lead to more or less quality improvement than non-cooperative conduct, depending on the magnitude of spillovers and the degree of product market competition, relative to the net effect of quality on profits and the share of costs that is reimbursed retrospectively. Finally, the stability of cooperative agreements, supported by grim trigger strategies, is shown to depend upon exactly the opposite interaction between these factors.

  13. Analysis and Modeling of a Two-Phase Jet Pump of a Flow Boiling Test Facility for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Sherif, S. A.; Steadham, Justin M.

    1996-01-01

    Jet pumps are devices capable of pumping fluids to a higher pressure employing a nozzle/diffuser/mixing chamber combination. A primary fluid is usually allowed to pass through a converging-diverging nozzle where it can accelerate to supersonic speeds at the nozzle exit. The relatively high kinetic energy that the primary fluid possesses at the nozzle exit is accompanied by a low pressure region in order to satisfy Bernoulli's equation. The low pressure region downstream of the nozzle exit permits a secondary fluid to be entrained into and mixed with the primary fluid in a mixing chamber located downstream of the nozzle. Several combinations may exist in terms of the nature of the primary and secondary fluids in so far as whether they are single or two-phase fluids. Depending on this, the jet pump may be classified as gas/gas, gas/liquid, liquid/liquid, two-phase/liquid, or similar combinations. The mixing chamber serves to create a homogeneous single-phase or two-phase mixture which enters a diffuser where the high kinetic energy of the fluid is converted into pressure energy. If the fluid mixture entering the diffuser is in the supersonic flow regime, a normal shock wave usually develops inside the diffuser. If the fluid mixture is one that can easily change phase, a condensation shock would normally develop. Because of the overall rise in pressure in the diffuser as well as the additional rise in pressure across the shock layer, condensation becomes more likely. Associated with the pressure rise across the shock is a velocity reduction from the supersonic to the subsonic range. If the two-phase flow entering the diffuser is predominantly gaseous with liquid droplets suspended in it, it will transform into a predominantly liquid flow containing gaseous bubbles (bubbly flow) somewhere in the diffuser. 
While past researchers have been able to model the two-phase flow jet pump using the one-dimensional assumption with no shock waves and no phase change, there is no research known to the authors apart from that of Anand (1992) which accounted for condensation shocks. One of the objectives of this research effort is to develop a comprehensive model in which the effects of phase slip and inter-phase heat transfer as well as the wall friction and shock waves are accounted for. While this modeling effort is predominantly analytical in nature and is primarily intended to provide a parametric understanding of the jet pump performance under different operating scenarios, another parallel effort employing a commercial CFD code is also implemented. The latter effort is primarily intended to model an axisymmetric counterpart of the problem in question. The viability of using the CFD code to model a two-phase flow jet pump will be assessed by attempting to recreate some of the existing performance data of similar jet pumps. The code will eventually be used to generate the jet pump performance characteristics of several scenarios involving jet pump geometries as well as flow regimes in order to be able to determine an optimum design which would be suitable for a two-phase flow boiling test facility at NASA-Marshall. Because of the extensive nature of the analytical model developed, the following section will only provide very brief highlights of it, while leaving the details to a more complete report submitted to the NASA colleague. This report will also contain some of the simulation results obtained using the CFD code.
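
    For context on the gas-dynamic limit referenced above, the pressure rise and velocity reduction across a normal shock in the diffuser follow the standard Rankine-Hugoniot jump relations for a perfect gas. A sketch of those single-phase relations (not the authors' two-phase model, which must also account for phase slip, inter-phase heat transfer, and condensation):

```python
def normal_shock(M1, gamma=1.4):
    """Rankine-Hugoniot jump conditions for a normal shock in a
    perfect gas: returns the downstream Mach number and the static
    pressure ratio p2/p1."""
    if M1 <= 1.0:
        raise ValueError("normal shock requires supersonic upstream flow")
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1 * M1 - 1.0)
    M2_sq = ((1.0 + 0.5 * (gamma - 1.0) * M1 * M1) /
             (gamma * M1 * M1 - 0.5 * (gamma - 1.0)))
    return M2_sq ** 0.5, p_ratio

# e.g. a predominantly gaseous mixture entering the diffuser at Mach 2
M2, p21 = normal_shock(2.0)
```

    The supersonic-to-subsonic transition (M2 < 1) and the accompanying static pressure rise are what make condensation downstream of the shock more likely, as the abstract notes.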

  14. Simulating flight boundary conditions for orbiter payload modal survey

    NASA Technical Reports Server (NTRS)

    Chung, Y. T.; Sernaker, M. L.; Peebles, J. H.

    1993-01-01

    An approach to simulate the characteristics of the payload/orbiter interfaces for the payload modal survey was developed. The flexure designed for this approach is required to provide adequate stiffness separation in the free and constrained interface degrees of freedom to closely resemble the flight boundary condition. Payloads will behave linearly and demonstrate similar modal effective mass distribution and load path as the flight if the flexure fixture is used for the payload modal survey. The potential non-linearities caused by the trunnion slippage during the conventional fixed base modal survey may be eliminated. Consequently, the effort to correlate the test and analysis models can be significantly reduced. An example is given to illustrate the selection and the sensitivity of the flexure stiffness. The advantages of using flexure fixtures for the modal survey and for the analytical model verification are also demonstrated.

  15. Analysis and assessment of STES technologies

    NASA Astrophysics Data System (ADS)

    Brown, D. R.; Blahnik, D. E.; Huber, H. D.

    1982-12-01

    Technical and economic assessments completed in FY 1982 in support of the Seasonal Thermal Energy Storage (STES) segment of the Underground Energy Storage Program included: (1) a detailed economic investigation of the cost of heat storage in aquifers, (2) documentation for AQUASTOR, a computer model for analyzing aquifer thermal energy storage (ATES) coupled with district heating or cooling, and (3) a technical and economic evaluation of several ice storage concepts. This paper summarizes the research efforts and main results of each of these three activities. In addition, a detailed economic investigation of the cost of chill storage in aquifers is currently in progress. The work parallels that done for ATES heat storage with technical and economic assumptions being varied in a parametric analysis of the cost of ATES delivered chill. The computer model AQUASTOR is the principal analytical tool being employed.

  16. Photogrammetry Methodology Development for Gossamer Spacecraft Structures

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Jones, Thomas W.; Walford, Alan; Black, Jonathan T.; Robson, Stuart; Shortis, Mark R.

    2002-01-01

    Photogrammetry, the science of calculating 3D object coordinates from images, is a flexible and robust approach for measuring the static and dynamic characteristics of future ultralightweight and inflatable space structures (a.k.a. Gossamer structures), such as large membrane reflectors, solar sails, and thin-film solar arrays. Shape and dynamic measurements are required to validate new structural modeling techniques and corresponding analytical models for these unconventional systems. This paper summarizes experiences at NASA Langley Research Center over the past three years to develop or adapt photogrammetry methods for the specific problem of measuring Gossamer space structures. Turnkey industrial photogrammetry systems were not considered a cost-effective choice for this basic research effort because of their high purchase and maintenance costs. Instead, this research uses mainly off-the-shelf digital-camera and software technologies that are affordable to most organizations and provide acceptable accuracy.

  17. Interpretation of Ground Temperature Anomalies in Hydrothermal Discharge Areas

    NASA Astrophysics Data System (ADS)

    Price, A. N.; Lindsey, C.; Fairley, J. P., Jr.

    2017-12-01

    Researchers have long noted the potential for shallow hydrothermal fluids to perturb near-surface temperatures. Several investigators have made qualitative or semi-quantitative use of elevated surface temperatures; for example, in snowfall calorimetry, or for tracing subsurface flow paths. However, little effort has been expended to develop a quantitative framework connecting surface temperature observations with conditions in the subsurface. Here, we examine an area of shallow subsurface flow at Burgdorf Hot Springs, in the Payette National Forest, north of McCall, Idaho USA. We present a simple analytical model that uses easily-measured surface data to infer the temperatures of laterally-migrating shallow hydrothermal fluids. The model is calibrated using shallow ground temperature measurements and overburden thickness estimates from seismic refraction studies. The model predicts conditions in the shallow subsurface, and suggests that the Biot number may place a more important control on the expression of near-surface thermal perturbations than previously thought. In addition, our model may have application in inferring difficult-to-measure parameters, such as shallow subsurface discharge from hydrothermal springs.

  18. Ignition and combustion characteristics of metallized propellants

    NASA Technical Reports Server (NTRS)

    Mueller, D. C.; Turns, Stephen R.

    1991-01-01

    Over the past six months, experimental investigations were continued and theoretical work on the secondary atomization process was begun. Final shakedown of the sizing/velocity measuring system was completed and the aluminum combustion detection system was modified and tested. Atomizer operation was improved to allow steady-state operation over long periods of time for several slurries. To validate the theoretical modeling, work involving carbon slurry atomization and combustion was begun and qualitative observations were made. Simultaneous measurements of aluminum slurry droplet size distributions and detection of burning aluminum particles were performed at several axial locations above the burner. The principal theoretical effort was the application of a rigid shell formation model to aluminum slurries and an investigation of the effects of various parameters on the shell formation process. This shell formation model was extended to include the process leading up to droplet disruption, and previously developed analytical models were applied to yield theoretical aluminum agglomerate ignition and combustion times. These theoretical times were compared with the experimental results.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goltz, M.N.; Oxley, M.E.

    Aquifer cleanup efforts at contaminated sites frequently involve operation of a system of extraction wells. It has been found that the contaminant load discharged by extraction wells typically declines with time, asymptotically approaching a residual level. Such behavior could be due to rate-limited desorption of an organic contaminant from aquifer solids. An analytical model is presented which accounts for rate-limited desorption of an organic solute during cleanup of a contaminated site. Model equations are presented which describe transport of a sorbing contaminant in a converging radial flow field, with sorption described by (1) equilibrium, (2) first-order rate, and (3) Fickian diffusion expressions. The model equations are solved in the Laplace domain and numerically inverted to simulate contaminant concentrations at an extraction well. A Laplace domain solution for the total contaminant mass remaining in the aquifer is also derived. It is shown that rate-limited sorption can have a significant impact upon aquifer remediation. Approximate equivalence among the various rate-limited models is also demonstrated.
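
    Numerical inversion of a Laplace-domain solution, as described above, is commonly performed with the Gaver-Stehfest algorithm; the abstract does not name the method actually used, so the following is an illustrative sketch:

```python
from math import factorial, log

def stehfest_coefficients(N=12):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * factorial(2 * j) /
                  (factorial(half - j) * factorial(j) *
                   factorial(j - 1) * factorial(k - j) *
                   factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)."""
    ln2 = log(2.0)
    V = stehfest_coefficients(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t)
                         for k in range(1, N + 1))

# Sanity check against a transform pair with a known inverse:
# F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
f1 = invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
```

    Stehfest inversion works well for the smooth, monotone breakthrough curves typical of sorbing-solute transport, which is why it is a frequent choice for models of this kind.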

  20. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  1. Validation of the replica trick for simple models

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
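
    The identity underlying the replica trick, E[ln Z] = lim as n goes to 0 of (E[Z^n] - 1)/n, can be checked directly on a toy model. A minimal sketch (the specific model here, Z = e^g with standard normal g, is an illustrative assumption, not one of the paper's models):

```python
import math

def replica_estimate(moment, n):
    """Finite-n approximation of E[ln Z] via (E[Z^n] - 1) / n."""
    return (moment(n) - 1.0) / n

# Toy model: Z = exp(g), g ~ N(0, 1), so E[Z^n] = exp(n^2 / 2)
# in closed form, and E[ln Z] = E[g] = 0.
moment = lambda n: math.exp(n * n / 2.0)

# As n -> 0 the estimate converges to E[ln Z] = 0.
for n in (1.0, 0.1, 0.01, 0.001):
    print(n, replica_estimate(moment, n))
```

    The subtlety the paper addresses is that in replica analysis E[Z^n] is typically computed only at integer n and then analytically continued to real n near zero, and it is the validity of that continuation that must be established model by model.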

  2. CORSAIR Solar Energetic Particle Model

    NASA Astrophysics Data System (ADS)

    Sandroos, A.

    2013-05-01

    Acceleration of particles in coronal mass ejection (CME) driven shock waves is the most commonly accepted and best developed theory of the genesis of gradual solar energetic particle (SEP) events. The underlying acceleration mechanism is diffusive shock acceleration (DSA). According to DSA, particles scatter from fluctuations present in the ambient magnetic field, which causes some particles to encounter the shock front repeatedly and to gain energy during each crossing. Currently, STEREO and near-Earth spacecraft are providing valuable multi-point information on how SEP properties, such as composition and energy spectra, vary in longitude. Initial results have shown that the longitude distributions of large CME-associated SEP events are much wider than reported in earlier studies. These findings have important consequences for SEP modeling. It is important to extend the present models into two or three spatial coordinates to properly take into account the effects of coronal and interplanetary (IP) magnetic geometry, and of the evolution of the CME and the associated shock, on the acceleration and transport of SEPs. We give a status update on the CORSAIR project, an effort to develop a new self-consistent (total energy conserving) DSA acceleration model that is capable of modeling energetic particle acceleration and transport in IP space in two or three spatial dimensions. In the new model, particles are propagated using the guiding center approximation. Waves are modeled as (Lagrangian) wave packets propagating parallel or antiparallel to the ambient magnetic field. Diffusion coefficients related to scattering from the waves are calculated using quasilinear theory. The state of the ambient plasma is obtained from an MHD simulation or from idealized analytic models. CORSAIR is an extension of our earlier efforts to model the effects of magnetic geometry on SEP acceleration (Sandroos & Vainio, 2007, 2009).

  3. High Altitude Ozone Research Balloon

    NASA Technical Reports Server (NTRS)

    Cauthen, Timothy A.; Daniel, Leslie A.; Herrick, Sally C.; Rock, Stacey G.; Varias, Michael A.

    1990-01-01

    In order to create a mission model of the high altitude ozone research balloon (HAORB) several options for flight preparation, altitude control, flight termination, and payload recovery were considered. After the optimal launch date and location for two separate HAORB flights were calculated, a method for reducing the heat transfer from solar and infrared radiation was designed and analytically tested. This provided the most important advantage of the HAORB over conventional balloons, i.e., its improved flight duration. Comparisons of different parachute configurations were made, and a design best suited for the HAORB's needs was determined to provide for payload recovery after flight termination. In an effort to avoid possible payload damage, a landing system was also developed.

  4. Implementing a Matrix-free Analytical Jacobian to Handle Nonlinearities in Models of 3D Lithospheric Deformation

    NASA Astrophysics Data System (ADS)

    Kaus, B.; Popov, A.

    2015-12-01

    The analytical expression for the Jacobian is a key component in achieving fast and robust convergence of the nonlinear Newton-Raphson iterative solver. Accomplishing this task in practice often requires a significant algebraic effort, so it is quite common to use a cheap alternative instead, for example approximating the Jacobian with a finite difference estimate. Despite its simplicity, this is a relatively fragile and unreliable technique that is sensitive to the scaling of the residual and unknowns, as well as to the choice of perturbation parameter, and no universal rule provides both a robust scaling and a robust perturbation. The approach we use here is to derive the analytical Jacobian for the coupled set of momentum, mass, and energy conservation equations together with an elasto-visco-plastic rheology and a marker-in-cell/staggered finite difference method. The software project LaMEM (Lithosphere and Mantle Evolution Model) is primarily developed for thermo-mechanically coupled modeling of 3D lithospheric deformation. The code is based on a staggered-grid finite difference discretization in space and uses customized scalable solvers from the PETSc library to run efficiently on massively parallel machines (such as IBM Blue Gene/Q). Currently LaMEM relies on the Jacobian-Free Newton-Krylov (JFNK) nonlinear solver, which approximates the Jacobian-vector product using a simple finite difference formula. This approach never requires an assembled Jacobian matrix and uses only the residual computation routine. We use an approximate Jacobian (Picard) matrix to precondition the Krylov solver with a Galerkin geometric multigrid. Because of the inherent problems of finite difference Jacobian estimation, this approach does not always converge stably.
In this work we present and discuss a matrix-free technique in which the Jacobian-vector product is replaced by analytically-derived expressions and compare results with those obtained with a finite difference approximation of the Jacobian. This project is funded by ERC Starting Grant 258830 and computer facilities were provided by Jülich supercomputer center (Germany).

  5. Accuracy of trace element determinations in alternate fuels

    NASA Technical Reports Server (NTRS)

    Greenbauer-Seng, L. A.

    1980-01-01

    A review of the techniques used at Lewis Research Center (LeRC) in trace metals analysis is presented, including the results of Atomic Absorption Spectrometry and DC Arc Emission Spectrometry of blank levels and recovery experiments for several metals. The design of an Interlaboratory Study conducted by LeRC is presented. Several factors were investigated, including: laboratory, analytical technique, fuel type, concentration, and ashing additive. Conclusions drawn from the statistical analysis will help direct research efforts toward those areas most responsible for the poor interlaboratory analytical results.

  6. Quality assurance for health and environmental chemistry: 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gautier, M.A.; Gladney, E.S.; Koski, N.L.

    1991-10-01

    This report documents the continuing quality assurance efforts of the Health and Environmental Chemistry Group (HSE-9) at the Los Alamos National Laboratory. The philosophy, methodology, computing resources, and laboratory information management system used by the quality assurance program to encompass the diversity of analytical chemistry practiced in the group are described. Included in the report are all quality assurance reference materials used, along with their certified or consensus concentrations, and all analytical chemistry quality assurance measurements made by HSE-9 during 1990.

  7. Place-classification analysis of community vulnerability to near-field tsunami threats in the U.S. Pacific Northwest (Invited)

    NASA Astrophysics Data System (ADS)

    Wood, N. J.; Jones, J.; Spielman, S.

    2013-12-01

    Near-field tsunami hazards are credible threats to many coastal communities throughout the world. Along the U.S. Pacific Northwest coast, low-lying areas could be inundated by a series of catastrophic tsunami waves that begin to arrive in a matter of minutes following a Cascadia subduction zone (CSZ) earthquake. This presentation summarizes analytical efforts to classify communities with similar characteristics of community vulnerability to tsunami hazards. This work builds on past State-focused inventories of community exposure to CSZ-related tsunami hazards in northern California, Oregon, and Washington. Attributes used in the classification, or cluster analysis, include demography of residents, spatial extent of the developed footprint based on mid-resolution land cover data, distribution of the local workforce, and the number and type of public venues, dependent-care facilities, and community-support businesses. Population distributions are also characterized by a function of travel time to safety, based on anisotropic, path-distance, geospatial modeling. We used an unsupervised model-based clustering algorithm and a v-fold cross-validation procedure (v=50) to identify the appropriate number of community types. We selected class solutions that provided the appropriate balance between parsimony and model fit. The goal of the vulnerability classification is to provide emergency managers with a general sense of the types of communities in tsunami hazard zones based on similar characteristics instead of only providing an exhaustive list of attributes for individual communities. This classification scheme can then be used to target and prioritize risk-reduction efforts that address common issues across multiple communities. The presentation will include a discussion of the utility of proposed place classifications to support regional preparedness and outreach efforts.

  8. An analytic performance model of disk arrays and its application

    NASA Technical Reports Server (NTRS)

    Lee, Edward K.; Katz, Randy H.

    1991-01-01

    As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are preferable to other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
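The fork-join difficulty mentioned in the abstract can be made concrete with a toy case that has a closed form: if a request fans out to N disks whose service times are i.i.d. exponential, the mean completion time is the expected maximum, which equals the Nth harmonic number times the mean per-disk time. This is an illustrative special case, not the paper's disk-array model:

```python
import random

def mean_max_service_time(n_disks, mean_service, trials=200_000, seed=1):
    """Monte Carlo estimate of the mean fork-join completion time when a
    request splits into n_disks independent exponential disk accesses
    and must wait for all of them (i.e. the max of the service times)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.expovariate(1.0 / mean_service) for _ in range(n_disks))
    return total / trials

# For i.i.d. exponentials the exact answer is H_n * mean (H_n: harmonic number).
harmonic = sum(1.0 / k for k in range(1, 5))     # H_4 = 25/12
print(round(harmonic * 10.0, 2))                 # -> 20.83 (exact, n=4, mean=10 ms)
print(mean_max_service_time(4, 10.0))            # simulation agrees closely
```

The gap between 20.83 ms and the 10 ms single-disk mean is exactly the fork-join penalty that makes simple queueing formulas inadequate for arrays.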

  9. Functionalization and Characterization of Nanomaterial Gated Field-Effect Transistor-Based Biosensors and the Design of a Multi-Analyte Implantable Biosensing Platform

    NASA Astrophysics Data System (ADS)

    Croce, Robert A., Jr.

    Advances in semiconductor research and complementary metal-oxide-semiconductor fabrication allow for the design and implementation of miniaturized metabolic monitoring systems, as well as advanced biosensor design. The first part of this dissertation focuses on the design and fabrication of nanomaterial (single-walled carbon nanotube and quantum dot) gated field-effect transistors configured as protein sensors. These novel device structures have been functionalized with single-stranded DNA aptamers and have shown sensor operation towards the protein thrombin. Such advanced transistor-based sensing schemes present considerable advantages over traditional sensing methodologies in view of their miniaturization, low cost, and facile fabrication, paving the way for the ultimate realization of a multi-analyte lab-on-chip. The second part of this dissertation focuses on the design and fabrication of a needle-implantable glucose sensing platform based solely on photovoltaic powering and optical communication. By employing these powering and communication schemes, the design eliminates the need for bulky on-chip RF transmitters and batteries, in an effort to attain the extreme miniaturization required for needle-implantable/extractable applications. A complete single-sensor system coupled with a miniaturized amperometric glucose sensor has been demonstrated to establish the viability of this technology. Furthermore, an optical selection scheme for multiple potentiostats addressing four different analytes (glucose, lactate, O2, and CO2), as well as optical transmission of sensor data, has been designed for multi-analyte applications. The last part of this dissertation focuses on the development of a computational model for the amperometric glucose sensors employed in the aforementioned implantable platform. This model has been applied to single-layer, single-enzyme systems, as well as to multi-layer (single-enzyme) systems utilizing glucose-flux-limiting layer-by-layer assembled outer membranes. The concentration of glucose and hydrogen peroxide within the sensor geometry, the transient response, and the device response time have been simulated for both systems.

  10. Generalized analytical solutions to multispecies transport equations with scale-dependent dispersion coefficients subject to time-dependent boundary conditions

    NASA Astrophysics Data System (ADS)

    Chen, J. S.; Chiang, S. Y.; Liang, C. P.

    2017-12-01

    It is essential to develop multispecies transport analytical models based on a set of advection-dispersion equations (ADEs) coupled with sequential first-order decay reactions for the synchronous prediction of plume migrations of both parent and its daughter species of decaying contaminants such as radionuclides, dissolved chlorinated organic compounds, pesticides and nitrogen. Although several analytical models for multispecies transport have already been reported, those currently available in the literature have primarily been derived based on ADEs with constant dispersion coefficients. However, there have been a number of studies demonstrating that the dispersion coefficients increase with the solute travel distance as a consequence of variation in the hydraulic properties of the porous media. This study presents novel analytical models for multispecies transport with distance-dependent dispersion coefficients. The correctness of the derived analytical models is confirmed by comparing them against the numerical models. Results show perfect agreement between the analytical and numerical models. Comparison of our new analytical model for multispecies transport with scale-dependent dispersion to an analytical model with constant dispersion is made to illustrate the effects of the dispersion coefficients on the multispecies transport of decaying contaminants.
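The sequential first-order decay reactions underlying these coupled ADEs reduce, when transport is switched off (a well-mixed batch), to the classic Bateman chain, which is useful for sanity-checking any multispecies solver. A sketch for a two-species chain with hypothetical rate constants, verified against explicit Euler integration:

```python
import math

def bateman_two(c1_0, k1, k2, t):
    """Analytical parent/daughter concentrations for the chain
    C1 --k1--> C2 --k2--> (sequential first-order decay, no transport),
    assuming c2(0) = 0 and k1 != k2."""
    c1 = c1_0 * math.exp(-k1 * t)
    c2 = c1_0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return c1, c2

def euler_two(c1_0, k1, k2, t, steps=200_000):
    """Explicit-Euler reference solution of the same two-species ODE system."""
    dt = t / steps
    c1, c2 = c1_0, 0.0
    for _ in range(steps):
        c1, c2 = c1 - k1 * c1 * dt, c2 + (k1 * c1 - k2 * c2) * dt
    return c1, c2

# Hypothetical rates (per day) and a 5-day horizon, for illustration only.
a = bateman_two(1.0, 0.5, 0.1, 5.0)
n = euler_two(1.0, 0.5, 0.1, 5.0)
print(all(abs(x - y) < 1e-4 for x, y in zip(a, n)))  # -> True
```

The full analytical models in the paper additionally carry advection and scale-dependent dispersion; this zero-dimensional limit is only the reaction part.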

  11. Study on bending behaviour of nickel–titanium rotary endodontic instruments by analytical and numerical analyses

    PubMed Central

    Tsao, C C; Liou, J U; Wen, P H; Peng, C C; Liu, T S

    2013-01-01

    Aim To develop analytical models and analyse the stress distribution and flexibility of nickel–titanium (NiTi) instruments subject to bending forces. Methodology An analytical method was used to analyse the behaviour of NiTi instruments under bending forces. Two NiTi instruments (RaCe and Mani NRT) with different cross-sections and geometries were considered. Analytical results were derived using Euler–Bernoulli nonlinear differential equations that took into account the screw pitch variation of these NiTi instruments. In addition, nonlinear deformation analyses were carried out with both the analytical model and the finite element method. Results According to the analytical results, the maximum curvature of the instrument occurs near the instrument tip. The finite element analysis likewise placed the maximum von Mises stress near the instrument tip. The proposed analytical model can therefore be used to predict the position of maximum curvature in the instrument, where fracture may occur. The analytical and numerical results were compatible. Conclusion The proposed analytical model was validated by numerical results in analysing the bending deformation of NiTi instruments. The model is useful in the design and analysis of instruments and in studying their flexibility. Compared with the finite element method, the analytical model deals conveniently and effectively with the bending behaviour of rotary NiTi endodontic instruments. PMID:23173762
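As a much-simplified companion to the Euler–Bernoulli analysis above, linear beam theory gives the tip deflection of a uniform cantilever under a transverse tip load as δ = FL³/(3EI), with flexibility controlled by the cross-section through the second moment of area I. The dimensions and modulus below are illustrative only, not properties of the RaCe or Mani NRT instruments (which are tapered and twisted):

```python
import math

def tip_deflection(force, length, e_mod, second_moment):
    """Euler-Bernoulli tip deflection of a uniform cantilever
    under a transverse tip load: delta = F L^3 / (3 E I)."""
    return force * length**3 / (3.0 * e_mod * second_moment)

def second_moment_circle(d):
    """Second moment of area of a solid circular cross-section: I = pi d^4 / 64."""
    return math.pi * d**4 / 64.0

# Hypothetical numbers: 0.3 mm diameter NiTi shaft, E ~ 40 GPa,
# 10 mm bending length, 0.1 N transverse tip load.
I = second_moment_circle(0.3e-3)
delta = tip_deflection(0.1, 10e-3, 40e9, I)
print(round(delta * 1e3, 3))  # tip deflection in mm -> 2.096
```

Because I scales with d⁴, small changes in cross-section dominate flexibility, which is why the cross-sectional geometry comparison in the study matters.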

  12. Assessing Proposals for New Global Health Treaties: An Analytic Framework.

    PubMed

    Hoffman, Steven J; Røttingen, John-Arne; Frenk, Julio

    2015-08-01

    We have presented an analytic framework and 4 criteria for assessing when global health treaties have reasonable prospects of yielding net positive effects. First, there must be a significant transnational dimension to the problem being addressed. Second, the goals should justify the coercive nature of treaties. Third, proposed global health treaties should have a reasonable chance of achieving benefits. Fourth, treaties should be the best commitment mechanism among the many competing alternatives. Applying this analytic framework to 9 recent calls for new global health treaties revealed that none fully meet the 4 criteria. Efforts aiming to better use or revise existing international instruments may be more productive than is advocating new treaties.

  13. Assessing Proposals for New Global Health Treaties: An Analytic Framework

    PubMed Central

    Røttingen, John-Arne; Frenk, Julio

    2015-01-01

    We have presented an analytic framework and 4 criteria for assessing when global health treaties have reasonable prospects of yielding net positive effects. First, there must be a significant transnational dimension to the problem being addressed. Second, the goals should justify the coercive nature of treaties. Third, proposed global health treaties should have a reasonable chance of achieving benefits. Fourth, treaties should be the best commitment mechanism among the many competing alternatives. Applying this analytic framework to 9 recent calls for new global health treaties revealed that none fully meet the 4 criteria. Efforts aiming to better use or revise existing international instruments may be more productive than is advocating new treaties. PMID:26066926

  14. Framework for Human-Automation Collaboration: Conclusions from Four Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxstrand, Johanna; Le Blanc, Katya L.; O'Hara, John

    The Human Automation Collaboration (HAC) research project is investigating how advanced technologies that are planned for Advanced Small Modular Reactors (AdvSMR) will affect the performance and the reliability of the plant from a human factors and human performance perspective. The HAC research effort investigates the consequences of allocating functions between the operators and automated systems. More specifically, the research team is addressing how to best design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. Oxstrand et al. (2013 - March) describes the efforts conducted by the researchers to identify the research needs for HAC. The research team reviewed the literature on HAC, developed a model of HAC, and identified gaps in the existing knowledge of human-automation collaboration. As described in Oxstrand et al. (2013 – June), the team then prioritized the research topics identified based on the specific needs in the context of AdvSMR. The prioritization was based on two sources of input: 1) the preliminary functions and tasks, and 2) the model of HAC. As a result, three analytical studies were planned and conducted: 1) Models of Teamwork, 2) Standardized HAC Performance Measurement Battery, and 3) Initiators and Triggering Conditions for Adaptive Automation. Additionally, one field study was conducted at Idaho Falls Power.

  15. A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison

    NASA Technical Reports Server (NTRS)

    Kreshock, Andrew R.; Thornburgh, Robert P.; Wilbur, Matthew L.

    2017-01-01

    This paper presents the results from an ongoing effort to produce improved correlation between analytical hub force and moment prediction and those measured during wind-tunnel testing on the Aeroelastic Rotor Experimental System (ARES), a conventional rotor testbed commonly used at the Langley Transonic Dynamics Tunnel (TDT). A frequency-dependent transformation between loads at the rotor hub and outputs of the testbed balance is produced from frequency response functions measured during vibration testing of the system. The resulting transformation is used as a dynamic calibration of the balance to transform hub loads predicted by comprehensive analysis into predicted balance outputs. In addition to detailing the transformation process, this paper also presents a set of wind-tunnel test cases, with comparisons between the measured balance outputs and transformed predictions from the comprehensive analysis code CAMRAD II. The modal response of the testbed is discussed and compared to a detailed finite-element model. Results reveal that the modal response of the testbed exhibits a number of characteristics that make accurate dynamic balance predictions challenging, even with the use of the balance transformation.

  16. Temperature field determination in slabs, circular plates and spheres with saw tooth heat generating sources

    NASA Astrophysics Data System (ADS)

    Diestra Cruz, Heberth Alexander

    The Green's function integral technique is used to determine the conduction heat transfer temperature field in flat plates, circular plates, and solid spheres with saw-tooth heat-generating sources. In all cases the boundary temperature is specified (Dirichlet condition) and the thermal conductivity is constant. The method of images is used to find the Green's function in infinite solids, semi-infinite solids, infinite quadrants, circular plates, and solid spheres. The saw-tooth heat generation source is modeled using the Dirac delta function and the Heaviside step function. The use of Green's functions allows one to obtain the temperature distribution in the form of an integral, which avoids the convergence problems of infinite series. For the infinite solid and the sphere the temperature distribution is three-dimensional, while for the semi-infinite solid, infinite quadrant, and circular plate it is two-dimensional. The method used in this work obtains elegant analytical or quasi-analytical solutions to complex heat conduction problems with less computational effort and more accuracy than fully numerical methods.
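The method of images can be sketched for the simplest steady case: a point heat source in a semi-infinite solid whose boundary plane is held at zero temperature. Placing a negative image source at the mirror point satisfies the Dirichlet condition exactly. This is a minimal illustration, not the thesis's saw-tooth source configuration:

```python
import math

def temp_point_source(q, k, src, obs):
    """Steady temperature from a point source of strength q in an
    infinite solid of conductivity k: T = q / (4 pi k r)."""
    r = math.dist(src, obs)
    return q / (4.0 * math.pi * k * r)

def temp_half_space(q, k, src, obs):
    """Semi-infinite solid z >= 0 with T = 0 on the plane z = 0 (Dirichlet):
    superpose a negative image source mirrored through the boundary."""
    image = (src[0], src[1], -src[2])
    return temp_point_source(q, k, src, obs) - temp_point_source(q, k, image, obs)

src = (0.0, 0.0, 1.0)  # source 1 m below... er, 1 m above the boundary plane
print(temp_half_space(100.0, 2.0, src, (0.5, -0.3, 0.0)))          # -> 0.0 (on boundary)
print(round(temp_half_space(100.0, 2.0, src, (0.0, 0.0, 0.5)), 3))  # -> 5.305 (interior)
```

Superposing such kernels under an integral over the saw-tooth source distribution is the essence of the quasi-analytical solutions described above.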

  17. Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Brown, Douglas L.

    1994-01-01

    In order to decrease overall computational time requirements of spatially-marching parabolized Navier-Stokes finite-difference computer code when applied to turbulent fluid flow, a wall-function methodology, originally proposed by R. Barnwell, was implemented. This numerical effort increases computational speed and calculates reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is analytically determined from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are analytically verified employing: (1) Eckert reference method solutions, (2) experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
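The essence of any wall-function approach is that the wall shear stress is recovered from an analytical near-wall profile instead of a numerically resolved sublayer. As a stand-in for Barnwell's defect wall-function formulation used in the thesis, the sketch below solves the standard incompressible log law for the friction velocity by fixed-point iteration; the constants κ = 0.41 and B = 5.0 are conventional illustrative values:

```python
import math

KAPPA, B = 0.41, 5.0  # conventional log-law constants (illustrative)

def friction_velocity(u, y, nu, iters=50):
    """Solve the incompressible log law  u/u_tau = ln(y*u_tau/nu)/kappa + B
    for the friction velocity u_tau by fixed-point iteration, given the
    velocity u at the first grid point y and kinematic viscosity nu."""
    u_tau = 0.05 * u  # initial guess
    for _ in range(iters):
        u_tau = u / (math.log(y * u_tau / nu) / KAPPA + B)
    return u_tau

u_tau = friction_velocity(u=30.0, y=1e-3, nu=1.5e-5)
# Verify the log law is satisfied at the matching point:
u_plus = math.log(1e-3 * u_tau / 1.5e-5) / KAPPA + B
print(abs(30.0 / u_tau - u_plus) < 1e-9)  # -> True
```

Because u_tau (and hence the wall shear stress) comes from this algebraic relation, the first grid point can sit in the log layer rather than in the viscous sublayer, which is exactly the source of the step-size savings described above.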

  18. Post-seismic relaxation theory on laterally heterogeneous viscoelastic model

    USGS Publications Warehouse

    Pollitz, F.F.

    2003-01-01

    Investigation was carried out into the problem of relaxation of a laterally heterogeneous viscoelastic Earth following an impulsive moment release event. The formal solution utilizes a semi-analytic solution for post-seismic deformation on a laterally homogeneous Earth constructed from viscoelastic normal modes, followed by application of mode coupling theory to derive the response on the aspherical Earth. The solution is constructed in the Laplace transform domain using the correspondence principle and is valid for any linear constitutive relationship between stress and strain. The specific implementation described in this paper is a semi-analytic discretization method which assumes isotropic elastic structure and a Maxwell constitutive relation. It accounts for viscoelastic-gravitational coupling under lateral variations in elastic parameters and viscosity. For a given viscoelastic structure and minimum wavelength scale, the computational effort involved with the numerical algorithm is proportional to the volume of the laterally heterogeneous region. Examples are presented of the calculation of post-seismic relaxation with a shallow, laterally heterogeneous volume following synthetic impulsive seismic events, and they illustrate the potentially large effect of regional 3-D heterogeneities on regional deformation patterns.

  19. Efficient implementation of one- and two-component analytical energy gradients in exact two-component theory

    NASA Astrophysics Data System (ADS)

    Franzke, Yannick J.; Middendorf, Nils; Weigend, Florian

    2018-03-01

    We present an efficient algorithm for one- and two-component analytical energy gradients with respect to nuclear displacements in the exact two-component decoupling approach to the one-electron Dirac equation (X2C). Our approach is a generalization of the spin-free ansatz by Cheng and Gauss [J. Chem. Phys. 135, 084114 (2011)], where the perturbed one-electron Hamiltonian is calculated by solving a first-order response equation. Computational costs are drastically reduced by applying the diagonal local approximation to the unitary decoupling transformation (DLU) [D. Peng and M. Reiher, J. Chem. Phys. 136, 244108 (2012)] to the X2C Hamiltonian. The introduced error is found to be almost negligible, as the mean absolute error of the optimized structures amounts to only 0.01 pm. Our implementation in TURBOMOLE is also available within the finite nucleus model based on a Gaussian charge distribution. For an X2C/DLU gradient calculation, computational effort scales cubically with the molecular size, while storage increases quadratically. The efficiency is demonstrated in calculations of large silver clusters and organometallic iridium complexes.

  20. Infiltration into soils: Conceptual approaches and solutions

    NASA Astrophysics Data System (ADS)

    Assouline, Shmuel

    2013-04-01

    Infiltration is a key process in aspects of hydrology, agricultural and civil engineering, irrigation design, and soil and water conservation. It is complex, depending on soil and rainfall properties and initial and boundary conditions within the flow domain. During the last century, a great deal of effort has been invested to understand the physics of infiltration and to develop quantitative predictors of infiltration dynamics. Jean-Yves Parlange and Wilfried Brutsaert have made seminal contributions, especially in the area of infiltration theory and related analytical solutions to the flow equations. This review retraces the landmark discoveries and the evolution of the conceptual approaches and the mathematical solutions applied to the problem of infiltration into porous media, highlighting the pivotal contributions of Parlange and Brutsaert. A historical retrospective of physical models of infiltration is followed by the presentation of mathematical methods leading to analytical solutions of the flow equations. This review then addresses the time compression approximation developed to estimate infiltration at the transition between preponding and postponding conditions. Finally, the effects of special conditions, such as the presence of air and heterogeneity in soil properties, on infiltration are considered.
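Among the quantitative predictors of infiltration dynamics surveyed here, one of the simplest is Philip's two-term series solution, I(t) = S√t + At, where S is the sorptivity and the rate approaches A at long times. A minimal sketch with hypothetical parameter values (not drawn from any soil discussed in the review):

```python
import math

def philip_cumulative(sorptivity, a_coeff, t):
    """Philip's two-term infiltration equation: I(t) = S*sqrt(t) + A*t."""
    return sorptivity * math.sqrt(t) + a_coeff * t

def philip_rate(sorptivity, a_coeff, t):
    """Infiltration rate i(t) = dI/dt = S / (2*sqrt(t)) + A."""
    return sorptivity / (2.0 * math.sqrt(t)) + a_coeff

# Hypothetical parameters: S = 2.0 cm h^-0.5, A = 0.5 cm/h.
print(philip_cumulative(2.0, 0.5, 4.0))  # -> 6.0 cm infiltrated after 4 h
print(philip_rate(2.0, 0.5, 100.0))      # rate tends to A at long times
```

The √t term dominates early (capillarity-driven), the linear term late (gravity-driven), a split that recurs throughout the analytical solutions reviewed here.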

  1. A Research Agenda for Helminth Diseases of Humans: Modelling for Control and Elimination

    PubMed Central

    Basáñez, María-Gloria; McCarthy, James S.; French, Michael D.; Yang, Guo-Jing; Walker, Martin; Gambhir, Manoj; Prichard, Roger K.; Churcher, Thomas S.

    2012-01-01

    Mathematical modelling of helminth infections has the potential to inform policy and guide research for the control and elimination of human helminthiases. However, this potential, unlike in other parasitic and infectious diseases, has yet to be realised. To place contemporary efforts in a historical context, a summary of the development of mathematical models for helminthiases is presented. These efforts are discussed according to the role that models can play in furthering our understanding of parasite population biology and transmission dynamics, and the effect on such dynamics of control interventions, as well as in enabling estimation of directly unobservable parameters, exploration of transmission breakpoints, and investigation of evolutionary outcomes of control. The Disease Reference Group on Helminth Infections (DRG4), established in 2009 by the Special Programme for Research and Training in Tropical Diseases (TDR), was given the mandate to review helminthiases research and identify research priorities and gaps. A research and development agenda for helminthiasis modelling is proposed based on identified gaps that need to be addressed for models to become useful decision tools that can support research and control operations effectively. This agenda includes the use of models to estimate the impact of large-scale interventions on infection incidence; the design of sampling protocols for the monitoring and evaluation of integrated control programmes; the modelling of co-infections; the investigation of the dynamical relationship between infection and morbidity indicators; the improvement of analytical methods for the quantification of anthelmintic efficacy and resistance; the determination of programme endpoints; the linking of dynamical helminth models with helminth geostatistical mapping; and the investigation of the impact of climate change on human helminthiases. 
It is concluded that modelling should be embedded in helminth research, and in the planning, evaluation, and surveillance of interventions from the outset. Modellers should be essential members of interdisciplinary teams, propitiating a continuous dialogue with end users and stakeholders to reflect public health needs in the terrain, discuss the scope and limitations of models, and update biological assumptions and model outputs regularly. It is highlighted that to reach these goals, a collaborative framework must be developed for the collation, annotation, and sharing of databases from large-scale anthelmintic control programmes, and that helminth modellers should join efforts to tackle key questions in helminth epidemiology and control through the sharing of such databases, and by using diverse, yet complementary, modelling approaches. PMID:22545162

  2. Liquid-cooling technology for gas turbines - Review and status

    NASA Technical Reports Server (NTRS)

    Van Fossen, G. J., Jr.; Stepka, F. S.

    1978-01-01

    After a brief review of past efforts involving the forced-convection cooling of gas turbines, the paper surveys the state of the art of the liquid cooling of gas turbines. Emphasis is placed on thermosyphon methods of cooling, including those utilizing closed, open, and closed-loop thermosyphons; other methods, including sweat, spray and stator cooling, are also discussed. The more significant research efforts, design data, correlations, and analytical methods are mentioned and voids in technology are summarized.

  3. Analytical applications of MIPs in diagnostic assays: future perspectives.

    PubMed

    Bedwell, Thomas S; Whitcombe, Michael J

    2016-03-01

    Many efforts have been made to produce artificial materials with biomimetic properties for applications in binding assays. Among these efforts, the technique of molecular imprinting has received much attention because of the high selectivity obtainable for molecules of interest, the robustness of the produced polymers, simple and short synthesis, and excellent cost efficiency. In this review, progress in the field of molecularly imprinted sorbent assays is discussed, with a focus on work conducted from 2005 to date.

  4. Comparison of RAGE Hydrocode Mars Impact Model Results to Scaling Law Predictions

    NASA Astrophysics Data System (ADS)

    Plesko, Catherine S.; Wohletz, K. H.; Coker, R. F.; Asphaug, E.; Gittings, M. L.

    2007-10-01

    Impact devolatilization has been proposed by Segura et al. (2002) and Carr (1996) as a mechanism for triggering sporadic, intense precipitation on Mars. We seek to examine this hypothesis, specifically to determine the lower bound on possible energy/size scales, and thus an upper bound on the frequency of such events. To do this, we employ various analytical and numerical modeling techniques, including the RAGE hydrocode. RAGE (Baltrusaitis et al. 1996) is an Eulerian hydrocode that runs in up to three dimensions and incorporates a variety of detailed equations of state, including the temperature-based SESAME tables maintained by LANL. In order to validate RAGE hydrocode results at the scale of moderate to large asteroid impacts, we compare simplified models of vertical impacts of objects 10-100 km in diameter into homogeneous basalt targets under Martian conditions to pressure scaling law predictions (Holsapple 1993, e.g. Tables 3-4) for the same scenario. Peak pressures are important to the volatile mobilization question (Stewart and Ahrens, 2005), thus it is of primary importance for planned future modeling efforts to confirm that pressures in RAGE are well behaved. Knowledge of the final crater geometry and the fate of ejecta is not required to answer our main question: to what depth and radius are subsurface volatiles mobilized for a given impact and target? This effort is supported by LANL/IGPP (CSP, RFC, KHW, MLG) and by NASA PG&G "Small Bodies and Planetary Collisions" (EA).

  5. Predicting Rib Fracture Risk With Whole-Body Finite Element Models: Development and Preliminary Evaluation of a Probabilistic Analytical Framework

    PubMed Central

    Forman, Jason L.; Kent, Richard W.; Mroz, Krystoffer; Pipkorn, Bengt; Bostrom, Ola; Segui-Gomez, Maria

    2012-01-01

    This study sought to develop a strain-based probabilistic method to predict rib fracture risk with whole-body finite element (FE) models, and to describe a method to combine the results with collision exposure information to predict injury risk and potential intervention effectiveness in the field. An age-adjusted ultimate strain distribution was used to estimate local rib fracture probabilities within an FE model. These local probabilities were combined to predict injury risk and severity within the whole ribcage. The ultimate strain distribution was developed from a literature dataset of 133 tests. Frontal collision simulations were performed with the THUMS (Total HUman Model for Safety) model with four levels of delta-V and two restraints: a standard 3-point belt and a progressive 3.5–7 kN force-limited, pretensioned (FL+PT) belt. The results of three simulations (29 km/h standard, 48 km/h standard, and 48 km/h FL+PT) were compared to matched cadaver sled tests. The numbers of fractures predicted for the comparison cases were consistent with those observed experimentally. Combining these results with field exposure information (ΔV, NASS-CDS 1992–2002) suggests an 8.9% probability of incurring AIS3+ rib fractures for a 60-year-old restrained by a standard belt in a tow-away frontal collision with this restraint, vehicle, and occupant configuration, compared to 4.6% for the FL+PT belt. This is the first study to describe a probabilistic framework to predict rib fracture risk based on strains observed in human-body FE models. Using this analytical framework, future efforts may incorporate additional subject or collision factors for multi-variable probabilistic injury prediction. PMID:23169122
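The way local probabilities combine into a whole-ribcage risk can be illustrated with a series-system calculation. This sketch assumes independent ribs and a normal ultimate-strain distribution; the paper's age-adjusted distribution and modeling details are not reproduced here:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fracture_risk(peak_strains, mean_ult, sd_ult):
    """Probability of at least one rib fracture in the ribcage.

    A rib fractures if its (random, normally distributed) ultimate
    strain falls below the peak strain predicted by the FE model for
    that rib; ribs are treated as independent. Illustrative only.
    """
    p_none = 1.0
    for eps in peak_strains:
        p_i = norm_cdf((eps - mean_ult) / sd_ult)  # P(fracture of rib i)
        p_none *= (1.0 - p_i)
    return 1.0 - p_none
```

Expected fracture counts (and hence AIS-level severity) follow from summing the per-rib probabilities in the same way.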

  6. CREATE-IP and CREATE-V: Data and Services Update

    NASA Astrophysics Data System (ADS)

    Carriere, L.; Potter, G. L.; Hertz, J.; Peters, J.; Maxwell, T. P.; Strong, S.; Shute, J.; Shen, Y.; Duffy, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center and the Earth System Grid Federation (ESGF) are working together to build a uniform environment for the comparative study and use of a group of reanalysis datasets of particular importance to the research community. This effort is called the Collaborative REAnalysis Technical Environment (CREATE) and it contains two components: the CREATE-Intercomparison Project (CREATE-IP) and CREATE-V. This year's efforts included generating and publishing an atmospheric reanalysis ensemble mean and spread and improving the analytics available through CREATE-V. Related activities included adding access to subsets of the reanalysis data through ArcGIS and expanding the visualization tool to GMAO forecast data. This poster will present the access mechanisms to this data and use cases including example Jupyter Notebook code. The reanalysis ensemble was generated using two methods, first using standard Python tools for regridding, extracting levels and creating the ensemble mean and spread on a virtual server in the NCCS environment. The second was using a new analytics software suite, the Earth Data Analytics Services (EDAS), coupled with a high-performance Data Analytics and Storage System (DASS) developed at the NCCS. Results were compared to validate the EDAS methodologies, and the results, including time to process, will be presented. The ensemble includes selected 6 hourly and monthly variables, regridded to 1.25 degrees, with 24 common levels used for the 3D variables. Use cases for the new data and services will be presented, including the use of EDAS for the backend analytics on CREATE-V, the use of the GMAO forecast aerosol and cloud data in CREATE-V, and the ability to connect CREATE-V data to NCCS ArcGIS services.
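Creating an ensemble mean and spread with standard Python tools reduces, at its core, to reductions over a stacked array. A minimal NumPy sketch with synthetic data; the product count is hypothetical, though the grid dimensions match a 1.25-degree global grid:

```python
import numpy as np

# Stack of reanalysis fields, already regridded to a common 1.25-degree
# grid (the CREATE-IP chain also extracts common levels before this step).
rng = np.random.default_rng(0)           # stand-in data for illustration
n_products, nlat, nlon = 5, 145, 288     # 145 x 288 = 1.25-degree grid
fields = rng.random((n_products, nlat, nlon))

ens_mean = fields.mean(axis=0)           # ensemble mean, per grid point
ens_spread = fields.std(axis=0, ddof=1)  # ensemble spread (sample std)
```

Reducing over the product axis (`axis=0`) keeps the full spatial field, so the mean and spread can be published as regular gridded variables.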

  7. Analytical reasoning task reveals limits of social learning in networks

    PubMed Central

    Rahwan, Iyad; Krasnoshtan, Dmytro; Shariff, Azim; Bonnefon, Jean-François

    2014-01-01

    Social learning—by observing and copying others—is a highly successful cultural mechanism for adaptation, outperforming individual information acquisition and experience. Here, we investigate social learning in the context of the uniquely human capacity for reflective, analytical reasoning. A hallmark of the human mind is its ability to engage analytical reasoning, and suppress false associative intuitions. Through a set of laboratory-based network experiments, we find that social learning fails to propagate this cognitive strategy. When people make false intuitive conclusions and are exposed to the analytic output of their peers, they recognize and adopt this correct output. But they fail to engage analytical reasoning in similar subsequent tasks. Thus, humans exhibit an ‘unreflective copying bias’, which limits their social learning to the output, rather than the process, of their peers’ reasoning—even when doing so requires minimal effort and no technical skill. In contrast to much recent work on observation-based social learning, which emphasizes the propagation of successful behaviour through copying, our findings identify a limit on the power of social networks in situations that require analytical reasoning. PMID:24501275

  8. Analytical reasoning task reveals limits of social learning in networks.

    PubMed

    Rahwan, Iyad; Krasnoshtan, Dmytro; Shariff, Azim; Bonnefon, Jean-François

    2014-04-06

    Social learning, by observing and copying others, is a highly successful cultural mechanism for adaptation, outperforming individual information acquisition and experience. Here, we investigate social learning in the context of the uniquely human capacity for reflective, analytical reasoning. A hallmark of the human mind is its ability to engage analytical reasoning, and suppress false associative intuitions. Through a set of laboratory-based network experiments, we find that social learning fails to propagate this cognitive strategy. When people make false intuitive conclusions and are exposed to the analytic output of their peers, they recognize and adopt this correct output. But they fail to engage analytical reasoning in similar subsequent tasks. Thus, humans exhibit an 'unreflective copying bias', which limits their social learning to the output, rather than the process, of their peers' reasoning, even when doing so requires minimal effort and no technical skill. In contrast to much recent work on observation-based social learning, which emphasizes the propagation of successful behaviour through copying, our findings identify a limit on the power of social networks in situations that require analytical reasoning.

  9. NASA Ames DEVELOP Interns Collaborate with the South Bay Salt Pond Restoration Project to Monitor and Study Restoration Efforts using NASA's Satellites

    NASA Technical Reports Server (NTRS)

    Newcomer, Michelle E.; Kuss, Amber Jean; Nguyen, Andrew; Schmidt, Cynthia L.

    2012-01-01

    In the past, natural tidal marshes in the south bay were segmented by levees and converted into ponds for use in salt production. In an effort to provide habitat for migratory birds and other native plants and animals, as well as to rebuild natural capital, the South Bay Salt Pond Restoration Project (SBSPRP) is focused on restoring a portion of the over 15,000 acres of wetlands in California's South San Francisco Bay. The process of restoration begins when a levee is breached; the bay water and sediment flow into the ponds and eventually restore natural tidal marshes. Since the spring of 2010 the NASA Ames Research Center (ARC) DEVELOP student internship program has collaborated with the South Bay Salt Pond Restoration Project (SBSPRP) to study the effects of these restoration efforts and to provide valuable information to assist in habitat management and ecological forecasting. All of the studies were based on remote sensing techniques, NASA's area of expertise in the field of Earth Science, and used various analytical techniques such as predictive modeling, flora and fauna classification, and spectral detection, to name a few. Each study was conducted by a team of aspiring scientists as a part of the DEVELOP program at Ames.

  10. Allostatic Load and Effort-Reward Imbalance: Associations over the Working-Career

    PubMed Central

    Coronado, José Ignacio Cuitún; Chandola, Tarani; Steptoe, Andrew

    2018-01-01

    Although associations between work stressors and stress-related biomarkers have been reported in cross-sectional studies, the use of single time measurements of work stressors could be one of the reasons for inconsistent associations. This study examines whether repeated reports of work stress towards the end of the working career predicts allostatic load, a measure of chronic stress related physiological processes. Data from waves 2 to 6 of the English Longitudinal Study of Ageing (ELSA) were analysed, with a main analytical sample of 2663 older adults (aged 50+) who had at least one measurement of effort-reward imbalance between waves 2–6 and a measurement of allostatic load at wave 6. Cumulative work stress over waves 2–6 were measured by the effort-reward imbalance model. ELSA respondents who had reported two or more occasions of imbalance had a higher (0.3) estimate of the allostatic load index than those who did not report any imbalance, controlling for a range of health and socio-demographic factors, as well as allostatic load at baseline. More recent reports of imbalance were significantly associated with a higher allostatic load index, whereas reports of imbalance from earlier waves of ELSA were not. The accumulation of work related stressors could have adverse effects on chronic stress biological processes. PMID:29364177

  11. Allostatic Load and Effort-Reward Imbalance: Associations over the Working-Career.

    PubMed

    Coronado, José Ignacio Cuitún; Chandola, Tarani; Steptoe, Andrew

    2018-01-24

    Although associations between work stressors and stress-related biomarkers have been reported in cross-sectional studies, the use of single time measurements of work stressors could be one of the reasons for inconsistent associations. This study examines whether repeated reports of work stress towards the end of the working career predicts allostatic load, a measure of chronic stress related physiological processes. Data from waves 2 to 6 of the English Longitudinal Study of Ageing (ELSA) were analysed, with a main analytical sample of 2663 older adults (aged 50+) who had at least one measurement of effort-reward imbalance between waves 2-6 and a measurement of allostatic load at wave 6. Cumulative work stress over waves 2-6 were measured by the effort-reward imbalance model. ELSA respondents who had reported two or more occasions of imbalance had a higher (0.3) estimate of the allostatic load index than those who did not report any imbalance, controlling for a range of health and socio-demographic factors, as well as allostatic load at baseline. More recent reports of imbalance were significantly associated with a higher allostatic load index, whereas reports of imbalance from earlier waves of ELSA were not. The accumulation of work related stressors could have adverse effects on chronic stress biological processes.

  12. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes substantial energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
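A genetic algorithm for job ordering of the kind described can be sketched as follows. The cost function here is a toy stand-in for the paper's cluster-performance estimation module, and all parameters (population size, generations, operators) are illustrative:

```python
import random

def schedule_cost(order, runtimes):
    """Toy cost: total completion time of jobs run in the given sequence
    (a stand-in for a learned cluster-performance estimator)."""
    t, cost = 0.0, 0.0
    for j in order:
        t += runtimes[j]
        cost += t
    return cost

def ga_schedule(runtimes, pop=30, gens=200, seed=0):
    """Evolve a job ordering that minimizes schedule_cost."""
    rng = random.Random(seed)
    n = len(runtimes)
    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: schedule_cost(o, runtimes))
        survivors = population[: pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)               # order-preserving crossover
            head = a[:cut]
            child = head + [j for j in b if j not in head]
            i, k = rng.randrange(n), rng.randrange(n)
            child[i], child[k] = child[k], child[i]  # swap mutation
            children.append(child)
        population = survivors + children
    return min(population, key=lambda o: schedule_cost(o, runtimes))
```

For this toy cost the known optimum is shortest-job-first, which gives a quick sanity check on the evolved schedule.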

  13. Implementation of GIS-based highway safety analyses : bridging the gap

    DOT National Transportation Integrated Search

    2001-01-01

    In recent years, efforts have been made to expand the analytical features of the Highway Safety Information System (HSIS) by integrating Geographic Information System (GIS) capabilities. The original version of the GIS Safety Analysis Tools was relea...

  14. Determination of Perfluorinated Compounds in the Upper Mississippi River Basin

    EPA Science Inventory

    Despite ongoing efforts to develop robust analytical methods for the determination of perfluorinated compounds (PFCs) such as perfluorooctanesulfonate (PFOS) and perfluorooctanoic acid (PFOA) in surface water, comparatively little has been published on method performance, and the...

  15. EPA Science Matters Newsletter: Chemical Warfare Agent Analytical Standards Facilitate Lab Testing (Published November 2013)

    EPA Pesticide Factsheets

    Learn about the EPA chemists' efforts to develop methods for detecting extremely low concentrations of nerve agents, such as sarin, VX, soman and cyclohexyl sarin, and the blister agent sulfur mustard.

  16. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1991-01-01

    Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.

  17. Toward Big Data Analytics: Review of Predictive Models in Management of Diabetes and Its Complications.

    PubMed

    Cichosz, Simon Lebech; Johansen, Mette Dencker; Hejlesen, Ole

    2015-10-14

    Diabetes is one of the top priorities in medical science and health care management, and an abundance of data and information is available on these patients. Whether data stem from statistical models or complex pattern recognition models, they may be fused into predictive models that combine patient information and prognostic outcome results. Such knowledge could be used in clinical decision support, disease surveillance, and public health management to improve patient care. Our aim was to review the literature and give an introduction to predictive models in screening for and the management of prevalent short- and long-term complications in diabetes. Predictive models have been developed for management of diabetes and its complications, and the number of publications on such models has been growing over the past decade. Multiple logistic regression or a similar linear model is often used for prediction model development, possibly owing to its transparent functionality. Ultimately, for prediction models to prove useful, they must demonstrate impact, namely, their use must generate better patient outcomes. Although extensive effort has been put into building these predictive models, there is a remarkable scarcity of impact studies. © 2015 Diabetes Technology Society.
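The multiple logistic regression form mentioned above maps a patient's predictor values to a risk probability. A minimal sketch with placeholder coefficients; no published diabetes model is reproduced here:

```python
import math

def logistic_risk(x, coefs, intercept):
    """Predicted risk from a multiple logistic regression model:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    Coefficients and predictors here are placeholders, not taken
    from any published model."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# e.g. two standardized predictors with made-up weights:
risk = logistic_risk([1.2, -0.4], [0.8, 0.5], -1.0)
```

The transparency noted in the review comes from this form: each coefficient's contribution to the log-odds is directly readable.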

  18. Estimate of the Reliability in Geological Forecasts for Tunnels: Toward a Structured Approach

    NASA Astrophysics Data System (ADS)

    Perello, Paolo

    2011-11-01

    In tunnelling, a reliable geological model often allows providing an effective design and facing the construction phase without unpleasant surprises. A geological model can be considered reliable when it provides valid support for correctly foreseeing the rock mass behaviour, therefore preventing unexpected events during the excavation. The higher the model reliability, the lower the probability of unforeseen rock mass behaviour. Unfortunately, owing to different reasons, geological models are affected by uncertainties and a fully reliable knowledge of the rock mass is, in most cases, impossible. Therefore, estimating to which degree a geological model is reliable becomes a primary requirement in order to save time and money and to adopt the appropriate construction strategy. The definition of geological model reliability is often achieved by engineering geologists through an unstructured analytical process and variable criteria. This paper focuses on geological models for projects of linear underground structures and represents an effort to analyse and include in a conceptual framework the factors influencing such models. An empirical parametric procedure is then developed with the aim of obtaining an index called "geological model rating (GMR)", which can be used to provide a more standardised definition of a geological model's reliability.
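An empirical parametric rating index of this general kind is typically a weighted sum of factor scores. The sketch below is a generic illustration of that structure only; the factors and weights are hypothetical, not Perello's actual GMR parameters:

```python
def rating_index(scores, weights):
    """Generic weighted-sum rating index on a 0-100 scale.

    `scores` are factor ratings in [0, 1] (e.g. hypothetical factors
    such as survey coverage, geological complexity, overburden depth);
    `weights` express each factor's relative influence and sum to 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return 100.0 * sum(s * w for s, w in zip(scores, weights))

# e.g. three hypothetical factors, with the first weighted most heavily:
gmr = rating_index([0.8, 0.6, 0.9], [0.5, 0.3, 0.2])
```

The resulting index can then be banded (e.g. low/medium/high reliability) to give the standardised classification the paper argues for.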

  19. Analytical Techniques and Pharmacokinetics of Gastrodia elata Blume and Its Constituents.

    PubMed

    Wu, Jinyi; Wu, Bingchu; Tang, Chunlan; Zhao, Jinshun

    2017-07-08

    Gastrodia elata Blume (G. elata), commonly called Tianma in Chinese, is an important and notable traditional Chinese medicine (TCM), which has been used in China as an anticonvulsant, analgesic, sedative, anti-asthma, anti-immune drug since ancient times. The aim of this review is to provide an overview of the abundant efforts of scientists in developing analytical techniques and performing pharmacokinetic studies of G. elata and its constituents, including sample pretreatment methods, analytical techniques, absorption, distribution, metabolism, excretion (ADME) and influence factors to its pharmacokinetics. Based on the reported pharmacokinetic property data of G. elata and its constituents, it is hoped that more studies will focus on the development of rapid and sensitive analytical techniques, discovering new therapeutic uses and understanding the specific in vivo mechanisms of action of G. elata and its constituents from the pharmacokinetic viewpoint in the near future. The present review discusses analytical techniques and pharmacokinetics of G. elata and its constituents reported from 1985 onwards.

  20. Analytic cognitive style, not delusional ideation, predicts data gathering in a large beads task study.

    PubMed

    Ross, Robert M; Pennycook, Gordon; McKay, Ryan; Gervais, Will M; Langdon, Robyn; Coltheart, Max

    2016-07-01

    It has been proposed that deluded and delusion-prone individuals gather less evidence before forming beliefs than those who are not deluded or delusion-prone. The primary source of evidence for this "jumping to conclusions" (JTC) bias is provided by research that utilises the "beads task" data-gathering paradigm. However, the cognitive mechanisms subserving data gathering in this task are poorly understood. In the largest published beads task study to date (n = 558), we examined data gathering in the context of influential dual-process theories of reasoning. Analytic cognitive style (the willingness or disposition to critically evaluate outputs from intuitive processing and engage in effortful analytic processing) predicted data gathering in a non-clinical sample, but delusional ideation did not. The relationship between data gathering and analytic cognitive style suggests that dual-process theories of reasoning can contribute to our understanding of the beads task. It is not clear why delusional ideation was not found to be associated with data gathering or analytic cognitive style.

  1. Chemical Detection and Identification Techniques for Exobiology Flight Experiments

    NASA Technical Reports Server (NTRS)

    Kojiro, Daniel R.; Sheverev, Valery A.; Khromov, Nikolai A.

    2002-01-01

    Exobiology flight experiments require highly sensitive instrumentation for in situ analysis of the volatile chemical species that occur in the atmospheres and surfaces of various bodies within the solar system. The complex mixtures encountered place a heavy burden on the analytical instrumentation to detect and identify all species present. The minimal resources available onboard for such missions mandate that the instruments provide maximum analytical capabilities with minimal requirements of volume, weight, and consumables. Advances in technology may be achieved by increasing the amount of information acquired by a given technique with greater analytical capabilities and miniaturization of proven terrestrial technology. We describe here methods to develop analytical instruments for the detection and identification of a wide range of chemical species using gas chromatography (GC). These efforts to expand the analytical capabilities of GC technology are focused on the development of detectors for the GC which provide sample identification independent of the GC retention time data. A novel approach employs Penning Ionization Electron Spectroscopy (PIES).

  2. MS-Based Analytical Techniques: Advances in Spray-Based Methods and EI-LC-MS Applications

    PubMed Central

    Medina, Isabel; Cappiello, Achille; Careri, Maria

    2018-01-01

    Mass spectrometry is the most powerful technique for the detection and identification of organic compounds. It can provide molecular weight information and a wealth of structural details that give a unique fingerprint for each analyte. Due to these characteristics, mass spectrometry-based analytical methods are attracting increasing interest in the scientific community, especially in food safety, environmental, and forensic investigation areas where the simultaneous detection of targeted and nontargeted compounds represents a key factor. In addition, safety risks can be identified at the early stage through online and real-time analytical methodologies. In this context, several efforts have been made to achieve analytical instrumentation able to perform real-time analysis in the native environment of samples and to generate highly informative spectra. This review article provides a survey of some instrumental innovations and their applications with particular attention to spray-based MS methods and food analysis issues. The survey will attempt to cover the state of the art from 2012 up to 2017. PMID:29850370

  3. Big data and visual analytics in anaesthesia and health care.

    PubMed

    Simpao, A F; Ahumada, L M; Rehman, M A

    2015-09-01

    Advances in computer technology, patient monitoring systems, and electronic health record systems have enabled rapid accumulation of patient data in electronic form (i.e. big data). Organizations such as the Anesthesia Quality Institute and Multicenter Perioperative Outcomes Group have spearheaded large-scale efforts to collect anaesthesia big data for outcomes research and quality improvement. Analytics, the systematic use of data combined with quantitative and qualitative analysis to make decisions, can be applied to big data for quality and performance improvements, such as predictive risk assessment, clinical decision support, and resource management. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces, and it can facilitate performance of cognitive activities involving big data. Ongoing integration of big data and analytics within anaesthesia and health care will increase demand for anaesthesia professionals who are well versed in both the medical and the information sciences.

  4. Analytical Chemistry at the Interface Between Materials Science and Biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Janese C.

    2000-09-21

    Like all essential sciences, analytical chemistry continues to reinvent itself. Moving beyond its traditional roles of identification and quantification, analytical chemistry is now expanding its frontiers into areas previously reserved to other disciplines. This work describes several research efforts that lie at the new interfaces between analytical chemistry and two of these disciplines, namely materials science and biology. In the materials science realm, the search for new materials that may have useful or unique chromatographic properties motivated the synthesis and characterization of electrically conductive sol-gels. In the biology realm, the search for new surface fabrication schemes that would permit or even improve the detection of specific biological reactions motivated the design of miniaturized biological arrays. Collectively, this work represents some of analytical chemistry's newest forays into these disciplines. The introduction section to this dissertation provides a literature review on several of the key aspects of this work. In advance of the materials science discussion, a brief introduction to electrochemically-modulated liquid chromatography (EMLC) and sol-gel chemistry is provided. In advance of the biological discussions, brief overviews of scanning force microscopy (SFM) and the oxidative chemistry used to construct our biological arrays are provided. This section is followed by four chapters, each of which is presented as a separate manuscript, and focuses on work that describes some of our cross-disciplinary efforts within materials science and biology. This dissertation concludes with a general summary and future prospectus.

  5. Characterization of spacecraft humidity condensate

    NASA Technical Reports Server (NTRS)

    Muckle, Susan; Schultz, John R.; Sauer, Richard L.

    1994-01-01

    When construction of Space Station Freedom reaches the Permanent Manned Capability (PMC) stage, the Water Recovery and Management Subsystem will be fully operational such that (distilled) urine, spent hygiene water, and humidity condensate will be reclaimed to provide water of potable quality. The reclamation technologies currently baselined to process these waste waters include adsorption, ion exchange, catalytic oxidation, and disinfection. To ensure that the baseline technologies will be able to effectively remove those compounds presenting a health risk to the crew, the National Research Council has recommended that additional information be gathered on specific contaminants in waste waters representative of those to be encountered on the Space Station. With the application of new analytical methods and the analysis of waste water samples more representative of the Space Station environment, advances in the identification of the specific contaminants continue to be made. Efforts by the Water and Food Analytical Laboratory at JSC were successful in enlarging the database of contaminants in humidity condensate. These efforts have not only included the chemical characterization of condensate generated during ground-based studies, but most significantly the characterization of cabin and Spacelab condensate generated during Shuttle missions. The analytical results presented in this paper will be used to show how the composition of condensate varies amongst enclosed environments and thus the importance of collecting condensate from an environment close to that of the proposed Space Station. Although advances were made in the characterization of space condensate, complete characterization, particularly of the organics, requires further development of analytical methods.

  6. New Analytical Solution of the Equilibrium Ampere's Law Using the Walker's Method: a Didactic Example

    NASA Astrophysics Data System (ADS)

    Sousa, A. N. Laurindo; Ojeda-González, A.; Prestes, A.; Klausner, V.; Caritá, L. A.

    2018-02-01

    This work aims to demonstrate the analytical solution of the Grad-Shafranov (GS) equation or generalized Ampere's law, which is important in the studies of self-consistent 2.5-D solutions for current sheet structures. A detailed mathematical development is presented to obtain the generating function as shown by Walker (RSPSA 91, 410, 1915). Therefore, we study the general solution of the GS equation in terms of Walker's generating function in detail without omitting any step. Walker's generating function g(ζ) is written in a new way as the tangent of an unspecified function K(ζ). In this trend, the general solution of the GS equation is expressed as exp(-2Ψ) = 4|K′(ζ)|²/cos²[K(ζ) - K(ζ*)]. In order to investigate whether our proposal would simplify the mathematical effort to find new generating functions, we use Harris's solution as a test, in this case K(ζ) = arctan(exp(iζ)). In summary, one of the article's purposes is to present a review of Harris's solution. In an attempt to find a simplified solution, we propose a new way to write the GS solution using g(ζ) = tan(K(ζ)). We also present a new analytical solution to the equilibrium Ampere's law using g(ζ) = cosh(bζ), which includes a generalization of the Harris model and presents isolated magnetic islands.
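As a consistency check on the quoted Harris test case (our rederivation, written in the equivalent standard form of Walker's solution with the generating function g = tan K, rather than the authors' K-notation):

```latex
% Standard form of Walker's solution to \nabla^2 \Psi = e^{-2\Psi}:
\[
  e^{-2\Psi} \;=\; \frac{4\,\lvert g'(\zeta)\rvert^{2}}
                        {\bigl(1+\lvert g(\zeta)\rvert^{2}\bigr)^{2}},
  \qquad \zeta = x + iy .
\]
% The Harris case K(\zeta) = \arctan(e^{i\zeta}) corresponds to
% g(\zeta) = \tan K(\zeta) = e^{i\zeta}, so that
\[
  \lvert g'(\zeta)\rvert = e^{-y}, \qquad
  \lvert g(\zeta)\rvert^{2} = e^{-2y},
\]
\[
  e^{-2\Psi} = \frac{4e^{-2y}}{\bigl(1+e^{-2y}\bigr)^{2}}
             = \operatorname{sech}^{2} y
  \;\Longrightarrow\; \Psi = \ln\cosh y ,
\]
% which is the Harris current sheet, as the paper's test requires.
```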

  7. Adaption of a parallel-path poly(tetrafluoroethylene) nebulizer to an evaporative light scattering detector: Optimization and application to studies of poly(dimethylsiloxane) oligomers as a model polymer.

    PubMed

    Durner, Bernhard; Ehmann, Thomas; Matysik, Frank-Michael

    2018-06-05

    A parallel-path poly(tetrafluoroethylene) (PTFE) ICP nebulizer was adapted to an evaporative light scattering detector (ELSD) by substituting the originally installed concentric glass nebulizer of the ELSD. The performance of both nebulizers was compared with regard to nebulizer temperature, evaporator temperature, nebulizing-gas flow rate, and mobile-phase flow rate in different solvents, using caffeine and poly(dimethylsiloxane) (PDMS) as analytes. Both nebulizers showed similar performance, but the parallel-path PTFE nebulizer performed considerably better at low LC flow rates and its lifetime was substantially increased. In general, for both nebulizers the highest sensitivity was obtained by applying the lowest possible evaporator temperature in combination with the highest possible nebulizer temperature at preferably low gas flow rates. Besides the optimization of detector parameters, response factors for various PDMS oligomers were determined and the dependency of the detector signal on the molar mass of the analytes was studied. The significant improvement in long-term stability made the modified ELSD much more robust and saved time and money by reducing maintenance effort. Thus, especially in polymer HPLC, associated with a complex matrix situation, the PTFE-based parallel-path nebulizer exhibits attractive characteristics for analytical studies of polymers. Copyright © 2018. Published by Elsevier B.V.

  8. New Analytical Solution of the Equilibrium Ampere's Law Using the Walker's Method: a Didactic Example

    NASA Astrophysics Data System (ADS)

    Sousa, A. N. Laurindo; Ojeda-González, A.; Prestes, A.; Klausner, V.; Caritá, L. A.

    2017-12-01

    This work aims to demonstrate the analytical solution of the Grad-Shafranov (GS) equation or generalized Ampere's law, which is important in the studies of self-consistent 2.5-D solutions for current sheet structures. A detailed mathematical development is presented to obtain the generating function as shown by Walker (RSPSA 91, 410, 1915). Therefore, we study the general solution of the GS equation in terms of Walker's generating function in detail without omitting any step. Walker's generating function g(ζ) is written in a new way as the tangent of an unspecified function K(ζ). In this trend, the general solution of the GS equation is expressed as exp(-2Ψ) = 4|K′(ζ)|²/cos²[K(ζ) - K(ζ*)]. In order to investigate whether our proposal would simplify the mathematical effort to find new generating functions, we use Harris's solution as a test, in this case K(ζ) = arctan(exp(iζ)). In summary, one of the article's purposes is to present a review of Harris's solution. In an attempt to find a simplified solution, we propose a new way to write the GS solution using g(ζ) = tan(K(ζ)). We also present a new analytical solution to the equilibrium Ampere's law using g(ζ) = cosh(bζ), which includes a generalization of the Harris model and presents isolated magnetic islands.

  9. Comparison of thermal analytic model with experimental test results for 30-centimeter-diameter engineering model mercury ion thruster

    NASA Technical Reports Server (NTRS)

    Oglebay, J. C.

    1977-01-01

    A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using experimental results from tests of a pre-engineering-model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.

  10. Wave Amplitude Dependent Engineering Model of Propellant Slosh in Spherical Tanks

    NASA Technical Reports Server (NTRS)

    Brodnick, Jacob; Westra, Douglas G.; Eberhart, Chad J.; Yang, Hong Q.; West, Jeffrey S.

    2016-01-01

    Liquid propellant slosh is often a concern for the controllability of flight vehicles. Anti-slosh devices are traditionally included in propellant tank designs to limit the amount of sloshing allowed during flight. These devices and any necessary supports can be quite heavy to meet various structural requirements. Some of the burden on anti-slosh devices can be relieved by exploiting the nonlinear behavior of slosh waves in bare smooth-wall tanks. A nonlinear-regime slosh model for bare spherical tanks was developed through a joint analytical and experimental effort by NASA/MSFC. The developed slosh model accounts for the large damping inherent in nonlinear slosh waves, making it more accurate than traditional bare-tank slosh models and removing conservatism from the vehicle stability analyses that use them. A more accurate slosh model will result in more realistic predicted slosh forces during flight, reducing or removing the need for active controls during a maneuver or baffles in the tank design. Lower control gains and smaller or fewer tank baffles can reduce cost and system complexity while increasing vehicle performance. Both Computational Fluid Dynamics (CFD) simulation and slosh testing of three different spherical tank geometries were performed to develop the proposed slosh model. Several important findings were made during this effort in addition to determining the parameters of the nonlinear-regime slosh model. The linear-regime slosh damping trend for spherical tanks reported in NASA SP-106 was shown to be inaccurate for certain regions of a tank. Additionally, transition to the nonlinear regime for spherical tanks was found to occur only at very large wave amplitudes in the lower hemisphere and was a strong function of the propellant fill level in the upper hemisphere. The nonlinear-regime damping trend was also found to be a function of the propellant fill level.

  11. Analytical Modeling for the Bending Resonant Frequency of Multilayered Microresonators with Variable Cross-Section

    PubMed Central

    Herrera-May, Agustín L.; Aguilera-Cortés, Luz A.; Plascencia-Mora, Hector; Rodríguez-Morales, Ángel L.; Lu, Jian

    2011-01-01

    Multilayered microresonators commonly use sensitive coating or piezoelectric layers for detection of mass and gas. Most of these microresonators have a variable cross-section that complicates the prediction of their fundamental resonant frequency (generally of the bending mode) through conventional analytical models. In this paper, we present an analytical model to estimate the first resonant frequency and deflection curve of single-clamped multilayered microresonators with variable cross-section. The analytical model is obtained using the Rayleigh and Macaulay methods, as well as the Euler-Bernoulli beam theory. Our model is applied to two multilayered microresonators with piezoelectric excitation reported in the literature. Both microresonators are composed of layers of seven different materials. The results of our analytical model agree very well with those obtained from finite element models (FEMs) and experimental data. Our analytical model can be used to determine the suitable dimensions of the microresonator’s layers in order to obtain a microresonator that operates at a resonant frequency necessary for a particular application. PMID:22164071
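
    The Rayleigh method cited above estimates the fundamental bending frequency from an assumed mode shape via an energy quotient. The sketch below applies it to a clamped-free beam with position-dependent stiffness EI(x) and mass per length ρA(x), so stepped or multilayered cross-sections can be represented; it is our illustration of the general technique, not the paper's full Rayleigh/Macaulay model, and all dimensions and material values are assumed.

```python
import numpy as np

def rayleigh_f1(L, EI, rhoA, n=4000):
    """First bending frequency (Hz) of a clamped-free beam by the Rayleigh
    quotient, with trial shape phi(x) = 1 - cos(pi*x/(2L)), which satisfies
    the clamped-end boundary conditions."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    phi = 1.0 - np.cos(np.pi * x / (2.0 * L))
    phi_dd = (np.pi / (2.0 * L))**2 * np.cos(np.pi * x / (2.0 * L))
    num = np.sum(EI(x) * phi_dd**2) * dx   # bending strain energy integral
    den = np.sum(rhoA(x) * phi**2) * dx    # kinetic energy integral
    return np.sqrt(num / den) / (2.0 * np.pi)

# Sanity check on a uniform silicon microcantilever (illustrative dimensions).
L, E, rho = 200e-6, 169e9, 2330.0   # length (m), Young's modulus (Pa), density (kg/m^3)
b, h = 30e-6, 2e-6                  # width, thickness (m)
EI0, rhoA0 = E * b * h**3 / 12.0, rho * b * h
f1 = rayleigh_f1(L, lambda x: np.full_like(x, EI0),
                 lambda x: np.full_like(x, rhoA0))
# Exact uniform-beam result for comparison (eigenvalue 1.8751 for mode 1).
f_exact = (1.875104**2 / (2.0 * np.pi)) * np.sqrt(EI0 / rhoA0) / L**2
```

    For the uniform case this trial shape overestimates the exact frequency by roughly 4%, the usual upper-bound behavior of the Rayleigh quotient; swapping in stepped EI(x) and ρA(x) profiles models a variable cross-section directly.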

  12. Ultrasonic power transfer from a spherical acoustic wave source to a free-free piezoelectric receiver: Modeling and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahab, S.; Gray, M.; Erturk, A., E-mail: alper.erturk@me.gatech.edu

    2015-03-14

    Contactless powering of small electronic components has lately received growing attention for wireless applications in which battery replacement or tethered charging is undesired or simply impossible, and ambient energy harvesting is not a viable solution. As an alternative to well-studied methods of contactless energy transfer, such as the inductive coupling method, the use of ultrasonic waves transmitted and received by piezoelectric devices enables larger power transmission distances, which is critical especially for deep-implanted electronic devices. Moreover, energy transfer by means of acoustic waves is well suited in situations where no electromagnetic fields are allowed. The limited literature of ultrasonic acoustic energy transfer is mainly centered on proof-of-concept experiments demonstrating the feasibility of this method, lacking experimentally validated modeling efforts for the resulting multiphysics problem that couples the source and receiver dynamics with domain acoustics. In this work, we present fully coupled analytical, numerical, and experimental multiphysics investigations for ultrasonic acoustic energy transfer from a spherical wave source to a piezoelectric receiver bar that operates in the 33-mode of piezoelectricity. The fluid-loaded piezoelectric receiver under free-free mechanical boundary conditions is shunted to an electrical load for quantifying the electrical power output for a given acoustic source strength of the transmitter. The analytical acoustic-piezoelectric structure interaction modeling framework is validated experimentally, and the effects of system parameters are reported along with optimal electrical loading and frequency conditions of the receiver.

  13. Performance of a Fuel-Cell-Powered, Small Electric Airplane Assessed

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    2004-01-01

    Rapidly emerging fuel-cell-power technologies may be used to launch a new revolution of electric propulsion systems for light aircraft. Future small electric airplanes using fuel cell technologies hold the promise of high reliability, low maintenance, low noise, and - with the exception of water vapor - zero emissions. An analytical feasibility and performance assessment was conducted by NASA Glenn Research Center's Airbreathing Systems Analysis Office of a fuel-cell-powered, propeller-driven, small electric airplane based on a model of the MCR-01 two-place kitplane (Dyn'Aero, Darois, France). This assessment was conducted in parallel with an ongoing effort by the Advanced Technology Products Corporation and the Foundation for Advancing Science and Technology Education. Their project - partially funded by a NASA grant - is to design, build, and fly the first manned, continuously propelled, nongliding electric airplane. In our study, an analytical performance model of a proton exchange membrane (PEM) fuel cell propulsion system was developed and applied to a notional, two-place light airplane modeled after the MCR-01 kitplane. The PEM fuel cell stack was fed pure hydrogen fuel and humidified ambient air via a small automotive centrifugal supercharger. The fuel cell performance models were based on chemical reaction analyses calibrated with published data from the fledgling U.S. automotive fuel cell industry. Electric propeller motors, rated at two shaft power levels in separate assessments, were used to directly drive a two-bladed, variable-pitch propeller. Fuel sources considered were compressed hydrogen gas and cryogenic liquid hydrogen. Both of these fuel sources provided pure, contaminant-free hydrogen for the PEM cells.
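
    The stack sizing described above follows directly from electrochemistry: stack power is cells × cell voltage × current, and hydrogen consumption follows Faraday's law with two electrons per H₂ molecule. A back-of-the-envelope sketch in that spirit (the cell count, voltage, and current below are illustrative assumptions, not the Glenn assessment's values):

```python
F = 96485.0      # Faraday constant, C/mol
M_H2 = 2.016e-3  # molar mass of H2, kg/mol

def stack_power_and_fuel(n_cells, v_cell, current):
    """Electrical power (W) and hydrogen consumption (kg/s) of a PEM stack
    with n_cells in series, each at v_cell volts and carrying `current` amps."""
    power = n_cells * v_cell * current
    mdot_h2 = n_cells * current * M_H2 / (2.0 * F)  # 2 electrons per H2 molecule
    return power, mdot_h2

# Hypothetical light-aircraft stack: 220 cells at 0.65 V, 300 A.
p, mdot = stack_power_and_fuel(220, 0.65, 300.0)
```

    Dividing the mission energy by the specific energy of the chosen storage (compressed gas vs. cryogenic liquid hydrogen) then gives the tank mass trade that such an assessment turns on.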

  14. Optronic System Imaging Simulator (OSIS): imager simulation tool of the ECOMOS project

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2018-04-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defense and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses two approaches to calculate Target Acquisition (TA) ranges, the analytical TRM4 model and the image-based Triangle Orientation Discrimination model (TOD). In this paper the IR imager simulation tool, Optronic System Imaging Simulator (OSIS), is presented. It produces virtual camera imagery required by the TOD approach. Pristine imagery is degraded by various effects caused by atmospheric attenuation, optics, detector footprint, sampling, fixed pattern noise, temporal noise and digital signal processing. Resulting images might be presented to observers or could be further processed for automatic image quality calculations. For convenience OSIS incorporates camera descriptions and intermediate results provided by TRM4. For input OSIS uses pristine imagery tied with meta information about scene content, its physical dimensions, and gray level interpretation. These images represent planar targets placed at specified distances to the imager. Furthermore, OSIS is extended by a plugin functionality that enables integration of advanced digital signal processing techniques in ECOMOS such as compression, local contrast enhancement, digital turbulence mitigation, to name but a few. By means of this image-based approach image degradations and image enhancements can be investigated, which goes beyond the scope of the analytical TRM4 model.
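
    A stripped-down version of such a degradation chain helps fix ideas: optics blur, detector-footprint averaging with sampling, then temporal noise applied to a pristine planar target. This is our illustrative sketch of the general technique, not the OSIS implementation; kernel size, pixel pitch, and noise level are all assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius=3):
    """1-D Gaussian, normalized to unit sum (stand-in for an optics MTF)."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma)**2)
    return k / k.sum()

def simulate(pristine, sigma=1.0, pitch=2, noise_sigma=0.01, seed=0):
    k = gaussian_kernel(sigma)
    # Optics blur as a separable convolution: rows, then columns.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, pristine)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    # Detector footprint + sampling: average over pitch x pitch blocks.
    h, w = blurred.shape
    h2, w2 = h // pitch, w // pitch
    sampled = blurred[:h2 * pitch, :w2 * pitch].reshape(h2, pitch, w2, pitch).mean(axis=(1, 3))
    # Temporal noise on the sampled frame.
    rng = np.random.default_rng(seed)
    return sampled + rng.normal(0.0, noise_sigma, sampled.shape)

scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0   # planar square target against a uniform background
frame = simulate(scene)
```

    Fixed-pattern noise, atmospheric attenuation, and downstream signal processing would slot in as additional stages of the same pipeline.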

  15. Mechanisms of chemical vapor generation by aqueous tetrahydridoborate. Recent developments toward the definition of a more general reaction model

    NASA Astrophysics Data System (ADS)

    D'Ulivo, Alessandro

    2016-05-01

    A reaction model describing the reactivity of metal and semimetal species with aqueous tetrahydridoborate (THB) has been drawn taking into account the mechanism of chemical vapor generation (CVG) of hydrides, recent evidence on the mechanism of interference and formation of byproducts in arsane generation, and other evidence in the field of the synthesis of nanoparticles and the catalytic hydrolysis of THB by metal nanoparticles. The new "non-analytical" reaction model is of more general validity than the previously described "analytical" reaction model for CVG. The non-analytical model is valid for reaction of a single analyte with THB and for conditions approaching those typically encountered in the synthesis of nanoparticles and macroprecipitates. It reduces to the previously proposed analytical model under conditions typically employed in CVG for trace analysis (analyte below the μM level, borane/analyte ≫ 10³ mol/mol, no interference). The non-analytical reaction model is not able to explain all the interference effects observed in CVG, which can be achieved only by assuming interaction among the species of the reaction pathways of different analytical substrates. The reunification of CVG, the synthesis of nanoparticles by aqueous THB, and the catalytic hydrolysis of THB inside a common frame contributes to the rationalization of the complex reactivity of aqueous THB with metal and semimetal species.

  16. Modeling of the Global Water Cycle - Analytical Models

    Treesearch

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  17. Cost effectiveness of recycling: a systems model.

    PubMed

    Tonjes, David J; Mallikarjun, Sreekanth

    2013-11-01

    Financial analytical models of waste management systems have often found that recycling costs exceed direct benefits, and that in order to economically justify recycling activities, externalities such as household expenses or environmental impacts must be invoked. Certain more empirically based studies have also found that recycling is more expensive than disposal. Other work, based on both models and surveys, has found otherwise. Here we present an empirical systems model, largely drawn from a suburban Long Island municipality. The model accounts for changes in the distribution of effort as recycling tonnages displace disposal tonnages, and the seven different cases examined all show that curbside collection programs that manage up to between 31% and 37% of the waste stream should result in overall system savings. These savings accrue partially because of assumed cost differences in tip fees for recyclables and disposed wastes, and also because recycling can result in a more efficient, cost-effective collection program. These results imply that increases in recycling are justifiable on cost savings alone, not on more difficult-to-measure factors that may not impact program budgets. Copyright © 2013 Elsevier Ltd. All rights reserved.
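
    The cost mechanism described above can be sketched as a toy systems model: as the recycling diversion fraction f grows, tonnage moves from the disposal tip fee to a cheaper recyclables tip fee while the collection cost is shared across both streams. All prices and tonnages below are illustrative assumptions, not the paper's Long Island figures.

```python
def system_cost(total_tons, f, tip_disposal=95.0, tip_recycle=40.0,
                collect_per_ton=120.0):
    """Annual system cost ($) when a fraction f of total_tons is recycled.
    Assumes recyclables carry a lower tip fee and collection cost is shared."""
    disposal = (1.0 - f) * total_tons * tip_disposal
    recycle = f * total_tons * tip_recycle
    collection = total_tons * collect_per_ton  # same trucks serve both streams
    return disposal + recycle + collection

base = system_cost(10000, 0.0)     # all-disposal baseline
at_33 = system_cost(10000, 0.33)   # divert a third of the stream
savings = base - at_33             # positive under these assumed prices
```

    In the full model the collection term itself shifts with diversion (route efficiency, set-out rates), which is what drives the 31-37% break-even band reported above.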

  18. Groundwater Vulnerability Assessment of the Pingtung Plain in Southern Taiwan.

    PubMed

    Liang, Ching-Ping; Jang, Cheng-Shin; Liang, Cheng-Wei; Chen, Jui-Sheng

    2016-11-23

    In the Pingtung Plain of southern Taiwan, elevated levels of NO₃⁻-N in groundwater have been reported. Therefore, efforts for assessing groundwater vulnerability are required as part of the critical steps to prevent and control groundwater pollution. This study presents a groundwater vulnerability assessment for the Pingtung Plain using an improved overlay and index-based DRASTIC model. The improvement of the DRASTIC model is achieved by reassigning the weighting coefficients of the factors in this model with the help of a discriminant analysis statistical method. The analytical results obtained from the improved DRASTIC model provide reliable predictions of groundwater vulnerability to nitrate pollution and can correctly identify the groundwater protection zones in the Pingtung Plain. Moreover, the results of the sensitivity analysis conducted for the seven parameters in the improved DRASTIC model demonstrate that the aquifer media (A) is the most sensitive factor when the nitrate-N concentration is below 2.5 mg/L. For the cases where the nitrate-N concentration is above 2.5 mg/L, the aquifer media (A) and net recharge (R) are the two most important factors.
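
    The DRASTIC index itself is a simple weighted sum over the seven factors: DI = Σ wᵢ·rᵢ. The sketch below uses the conventional weights (the paper's improvement reassigns them via discriminant analysis, which is not reproduced here) and hypothetical ratings for a single grid cell.

```python
# Conventional DRASTIC weights: Depth to water, net Recharge, Aquifer media,
# Soil media, Topography, Impact of the vadose zone, hydraulic Conductivity.
weights = {'D': 5, 'R': 4, 'A': 3, 'S': 2, 'T': 1, 'I': 5, 'C': 3}

# Hypothetical 1-10 ratings for one grid cell (illustrative, not site data).
ratings = {'D': 7, 'R': 6, 'A': 8, 'S': 6, 'T': 10, 'I': 8, 'C': 6}

di = sum(weights[f] * ratings[f] for f in weights)  # vulnerability index
```

    Mapping DI over every cell and thresholding the result yields the vulnerability classes from which protection zones are delineated.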

  19. Pilot Study for Definition of Track Component Load Environments

    DOT National Transportation Integrated Search

    1981-02-01

    This report describes the results of an experimental and analytical effort to define the vehicle induced load environment in an at-grade, concrete tie/ballast transit track structure. The experiment was performed on the UMTA transit track oval at the...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rim, Jung H.; Kuhn, Kevin J.; Tandon, Lav

    Nuclear forensics techniques, including micro-XRF, gamma spectrometry, trace elemental analysis and isotopic/chronometric characterization were used to interrogate two potentially related plutonium metal foils. These samples were submitted for analysis with only limited production information, and a comprehensive suite of forensic analyses was performed. The resulting analytical data were paired with available reactor model and historical information to provide insight into the materials’ properties, origins, and likely intended uses. Both were super-grade plutonium, containing less than 3% 240Pu, and age-dating suggested that the most recent chemical purification occurred in 1948 and 1955 for the respective metals. Additional consideration of reactor modeling feedback and trace elemental observables indicates plausible U.S. reactor origin associated with the Hanford site production efforts. In conclusion, based on this investigation, the most likely intended use for these plutonium foils was as 239Pu fission foil targets for physics experiments, such as cross-section measurements.
