Sample records for capability case model

  1. Mixed Phase Modeling in GlennICE with Application to Engine Icing

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Jorgenson, Philip C. E.; Veres, Joseph P.

    2011-01-01

    A capability for modeling ice crystals and mixed phase icing has been added to GlennICE. Modifications have been made to the particle trajectory algorithm and energy balance to model this behavior. This capability has been added as part of a larger effort to model ice crystal ingestion in aircraft engines. Comparisons have been made to four mixed phase ice accretions performed in the Cox icing tunnel in order to calibrate an ice erosion model. A sample ice ingestion case was performed using the Energy Efficient Engine (E3) model in order to illustrate current capabilities. Engine performance characteristics were supplied using the Numerical Propulsion System Simulation (NPSS) model for this test case.

  2. The role of production and teamwork practices in construction safety: a cognitive model and an empirical case study.

    PubMed

    Mitropoulos, Panagiotis Takis; Cupido, Gerardo

    2009-01-01

    In construction, the challenge for researchers and practitioners is to develop work systems (production processes and teams) that can achieve high productivity and high safety at the same time. However, construction accident causation models ignore the role of work practices and teamwork. This study investigates the mechanisms by which production and teamwork practices affect the likelihood of accidents. The paper synthesizes a new model for construction safety based on the cognitive perspective (Fuller's Task-Demand-Capability Interface model, 2005) and then presents an exploratory case study. The case study investigates and compares the work practices of two residential framing crews: a 'High Reliability Crew' (HRC)--that is, a crew with exceptional productivity and safety over several years, and an average-performing crew from the same company. The model explains how the production and teamwork practices generate the work situations that workers face (the task demands) and affect the workers' ability to cope (capabilities). The case study indicates that the work practices of the HRC directly influence the task demands and match them with the applied capabilities. These practices were guided by the 'principle' of avoiding errors and rework and included work planning and preparation, work distribution, managing the production pressures, and quality and behavior monitoring. The Task-Demand-Capability model links construction research to a cognitive model of accident causation and provides a new way to conceptualize safety as an emergent property of the production practices and teamwork processes. The empirical evidence indicates that the crews' work practices and team processes strongly affect the task demands, the applied capabilities, and the match between demands and capabilities.
The proposed model and the exploratory case study will guide further discovery of work practices and teamwork processes that can increase both productivity and safety in construction operations. Such understanding will enable training of construction foremen and crews in these practices to systematically develop high reliability crews.

  3. Experiment evaluates ocean models and data assimilation in the Gulf Stream

    NASA Astrophysics Data System (ADS)

    Willems, Robert C.; Glenn, S. M.; Crowley, M. F.; Malanotte-Rizzoli, P.; Young, R. E.; Ezer, T.; Mellor, G. L.; Arango, H. G.; Robinson, A. R.; Lai, C.-C. A.

    Using data sets of known quality as the basis for comparison, a recent experiment explored the Gulf Stream Region at 27°-47°N and 80°-50°W to assess the nowcast/forecast capability of specific ocean models and the impact of data assimilation. Scientists from five universities and the Naval Research Laboratory/Stennis Space Center participated in the Data Assimilation and Model Evaluation Experiment (DAMEE-GSR). DAMEE-GSR was based on case studies, each successively more complex, and was divided into three phases using case studies (data) from 1987 and 1988. Phase I evaluated models' forecast capability using common initial conditions and comparing model forecast fields with observational data at forecast time over a 2-week period. Phase II added data assimilation and assessed its impact on forecast capability, using the same case studies as in Phase I, and Phase III added a 2-month case study overlapping some periods in Phases I and II.

  4. LEWICE 2.2 Capabilities and Thermal Validation

    NASA Technical Reports Server (NTRS)

    Wright, William B.

    2002-01-01

    Computational models of bleed air anti-icing and electrothermal de-icing have been added to the LEWICE 2.0 software by integrating the capabilities of two previous programs, ANTICE and LEWICE/Thermal. This combined model has been released as LEWICE version 2.2. Several advancements have also been added to the previous capabilities of each module. This report will present the capabilities of the software package and provide results for both bleed air and electrothermal cases. A comprehensive validation effort has also been performed to compare the predictions to an existing electrothermal database. A quantitative comparison shows that for de-icing cases, the average difference is 9.4 F (26%) compared to 3 F for the experimental data, while for evaporative cases the average difference is 2 F (32%) compared to an experimental error of 4 F.

  5. Best Practices for Evaluating the Capability of Nondestructive Evaluation (NDE) and Structural Health Monitoring (SHM) Techniques for Damage Characterization (Post-Print)

    DTIC Science & Technology

    2016-02-10

    ...a wide range of part, environmental and damage conditions. Best practices of using models are presented for both an eddy current NDE sizing and...to assess the reliability of NDE and SHM characterization capability... EDDY CURRENT NDE CASE STUDY: An eddy current crack sizing case study is presented to highlight examples of some of these complex characteristics of...

  6. A CASE STUDY USING THE EPA'S WATER QUALITY MODELING SYSTEM, THE WINDOWS INTERFACE FOR SIMULATING PLUMES (WISP)

    EPA Science Inventory

    Wisp, the Windows Interface for Simulating Plumes, is designed to be an easy-to-use Windows platform program for aquatic modeling. Wisp inherits many of its capabilities from its predecessor, the DOS-based PLUMES (Baumgartner, Frick, Roberts, 1994). These capabilities have been ...

  7. Simulator for concurrent processing data flow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.; Stoughton, John W.; Mielke, Roland R.

    1992-01-01

    A software capability for simulating the execution of an algorithm graph on a given system under the Algorithm to Architecture Mapping Model (ATAMM) rules is presented. ATAMM is capable of modeling the execution of large-grained algorithms on distributed data flow architectures. Investigating the behavior and determining the performance of an ATAMM-based system requires the aid of software tools. The ATAMM Simulator presented is capable of determining the performance of a system without having to build a hardware prototype. Case studies are performed on four algorithms to demonstrate the capabilities of the ATAMM Simulator. Simulated results are shown to be comparable to the experimental results of the Advanced Development Model System.
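    The large-grained data flow execution that the ATAMM Simulator models can be illustrated with a toy sketch: a node in the algorithm graph fires only after every predecessor has finished. The graph, timings, and function names below are illustrative assumptions, not the ATAMM Simulator's actual interface.

```python
# Toy sketch of executing an algorithm graph under a data-flow rule:
# a node may start only after every predecessor has finished.
# This ignores processor contention and communication delays.

def simulate(graph, durations):
    """graph: node -> list of successors; durations: node -> run time.
    Returns the finish time of each node."""
    preds = {n: set() for n in durations}
    for node, succs in graph.items():
        for s in succs:
            preds[s].add(node)
    finish = {}
    ready = [n for n in durations if not preds[n]]   # source nodes
    while ready:
        node = ready.pop(0)
        start = max((finish[p] for p in preds[node]), default=0.0)
        finish[node] = start + durations[node]
        for s in graph.get(node, []):                # a successor becomes ready
            if s not in finish and all(p in finish for p in preds[s]):
                ready.append(s)                      # once all preds have fired
    return finish

# Diamond graph A -> {B, C} -> D: D cannot start before the slower branch.
times = simulate({"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []},
                 {"A": 1, "B": 2, "C": 3, "D": 1})
# times["D"] == 5 (A finishes at 1, C at 4, so D runs from 4 to 5)
```

    This captures only the firing rule; a real dataflow simulator would also model token queues, resource limits, and timing statistics.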

  8. Robot Lies in Health Care: When Is Deception Morally Permissible?

    PubMed

    Matthias, Andreas

    2015-06-01

    Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot's workings, capabilities, and internal structure. The robot's real capabilities may diverge from this mental model to the extent that one might accuse the robot's manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g. conversational systems in elder-care contexts, or toy robots in child care). This raises the question of whether misleading or even actively deceiving the user of an autonomous artifact about the capabilities of the machine is morally bad and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled in order for robot deception to be morally permissible, and in some cases even morally indicated.

  9. Remote Sensing Operational Capabilities

    DTIC Science & Technology

    1999-10-01

    ...systems. In each of the cases orbital and sensor characteristics were modeled, as was the possible impact of weather over target areas. In each of the...collect the desired information quickly, it is imperative that the satellite be capable of accessing the target area frequently. • Flexibility and speed in tasking: the system should be capable of collecting data with a...

  10. Mathematical Basis and Test Cases for Colloid-Facilitated Radionuclide Transport Modeling in GDSA-PFLOTRAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimus, Paul William

    This report provides documentation of the mathematical basis for a colloid-facilitated radionuclide transport modeling capability that can be incorporated into GDSA-PFLOTRAN. It also provides numerous test cases against which the modeling capability can be benchmarked once the model is implemented numerically in GDSA-PFLOTRAN. The test cases were run using a 1-D numerical model developed by the author, and the inputs and outputs from the 1-D model are provided in an electronic spreadsheet supplement to this report so that all cases can be reproduced in GDSA-PFLOTRAN, and the outputs can be directly compared with the 1-D model. The cases include examples of all potential scenarios in which colloid-facilitated transport could result in the accelerated transport of a radionuclide relative to its transport in the absence of colloids. Although it cannot be claimed that all the model features that are described in the mathematical basis were rigorously exercised in the test cases, the goal was to test the features that matter the most for colloid-facilitated transport; i.e., slow desorption of radionuclides from colloids, slow filtration of colloids, and equilibrium radionuclide partitioning to colloids that is strongly favored over partitioning to immobile surfaces, resulting in a substantial fraction of radionuclide mass being associated with mobile colloids.

  11. Representing Geospatial Environment Observation Capability Information: A Case Study of Managing Flood Monitoring Sensors in the Jinsha River Basin

    PubMed Central

    Hu, Chuli; Guan, Qingfeng; Li, Jie; Wang, Ke; Chen, Nengcheng

    2016-01-01

    Sensor inquirers cannot understand comprehensive or accurate observation capability information because current observation capability modeling does not consider the union of multiple sensors nor the effect of geospatial environmental features on the observation capability of sensors. These limitations result in a failure to discover credible sensors or plan for their collaboration for environmental monitoring. The Geospatial Environmental Observation Capability (GEOC) is proposed in this study and can be used as an information basis for the reliable discovery and collaborative planning of multiple environmental sensors. A field-based GEOC (GEOCF) information representation model is built. Quintuple GEOCF feature components and two GEOCF operations are formulated based on the geospatial field conceptual framework. The proposed GEOCF markup language is used to formalize the proposed GEOCF. A prototype system called GEOCapabilityManager is developed, and a case study is conducted for flood observation in the lower reaches of the Jinsha River Basin. The applicability of the GEOCF is verified through the reliable discovery of flood monitoring sensors and planning for the collaboration of these sensors. PMID:27999247

  12. Representing Geospatial Environment Observation Capability Information: A Case Study of Managing Flood Monitoring Sensors in the Jinsha River Basin.

    PubMed

    Hu, Chuli; Guan, Qingfeng; Li, Jie; Wang, Ke; Chen, Nengcheng

    2016-12-16

    Sensor inquirers cannot understand comprehensive or accurate observation capability information because current observation capability modeling does not consider the union of multiple sensors nor the effect of geospatial environmental features on the observation capability of sensors. These limitations result in a failure to discover credible sensors or plan for their collaboration for environmental monitoring. The Geospatial Environmental Observation Capability (GEOC) is proposed in this study and can be used as an information basis for the reliable discovery and collaborative planning of multiple environmental sensors. A field-based GEOC (GEOCF) information representation model is built. Quintuple GEOCF feature components and two GEOCF operations are formulated based on the geospatial field conceptual framework. The proposed GEOCF markup language is used to formalize the proposed GEOCF. A prototype system called GEOCapabilityManager is developed, and a case study is conducted for flood observation in the lower reaches of the Jinsha River Basin. The applicability of the GEOCF is verified through the reliable discovery of flood monitoring sensors and planning for the collaboration of these sensors.

  13. Toward a new generation of agricultural system data, models, and knowledge products: State of agricultural systems science.

    PubMed

    Jones, James W; Antle, John M; Basso, Bruno; Boote, Kenneth J; Conant, Richard T; Foster, Ian; Godfray, H Charles J; Herrero, Mario; Howitt, Richard E; Janssen, Sander; Keating, Brian A; Munoz-Carpena, Rafael; Porter, Cheryl H; Rosenzweig, Cynthia; Wheeler, Tim R

    2017-07-01

    We review the current state of agricultural systems science, focusing in particular on the capabilities and limitations of agricultural systems models. We discuss the state of models relative to five different Use Cases spanning field, farm, landscape, regional, and global spatial scales and engaging questions in past, current, and future time periods. Contributions from multiple disciplines have made major advances relevant to a wide range of agricultural system model applications at various spatial and temporal scales. Although current agricultural systems models have features that are needed for the Use Cases, we found that all of them have limitations and need to be improved. We identified common limitations across all Use Cases, namely 1) a scarcity of data for developing, evaluating, and applying agricultural system models and 2) inadequate knowledge systems that effectively communicate model results to society. We argue that these limitations are greater obstacles to progress than gaps in conceptual theory or available methods for using system models. New initiatives on open data show promise for addressing the data problem, but there also needs to be a cultural change among agricultural researchers to ensure that data for addressing the range of Use Cases are available for future model improvements and applications. We conclude that multiple platforms and multiple models are needed for model applications for different purposes. The Use Cases provide a useful framework for considering capabilities and limitations of existing models and data.

  14. Toward a New Generation of Agricultural System Data, Models, and Knowledge Products: State of Agricultural Systems Science

    NASA Technical Reports Server (NTRS)

    Jones, James W.; Antle, John M.; Basso, Bruno; Boote, Kenneth J.; Conant, Richard T.; Foster, Ian; Godfray, H. Charles J.; Herrero, Mario; Howitt, Richard E.; Janssen, Sander; et al.

    2016-01-01

    We review the current state of agricultural systems science, focusing in particular on the capabilities and limitations of agricultural systems models. We discuss the state of models relative to five different Use Cases spanning field, farm, landscape, regional, and global spatial scales and engaging questions in past, current, and future time periods. Contributions from multiple disciplines have made major advances relevant to a wide range of agricultural system model applications at various spatial and temporal scales. Although current agricultural systems models have features that are needed for the Use Cases, we found that all of them have limitations and need to be improved. We identified common limitations across all Use Cases, namely 1) a scarcity of data for developing, evaluating, and applying agricultural system models and 2) inadequate knowledge systems that effectively communicate model results to society. We argue that these limitations are greater obstacles to progress than gaps in conceptual theory or available methods for using system models. New initiatives on open data show promise for addressing the data problem, but there also needs to be a cultural change among agricultural researchers to ensure that data for addressing the range of Use Cases are available for future model improvements and applications. We conclude that multiple platforms and multiple models are needed for model applications for different purposes. The Use Cases provide a useful framework for considering capabilities and limitations of existing models and data.

  15. Toward a new generation of agricultural system data, models, and knowledge products: State of agricultural systems science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, James W.; Antle, John M.; Basso, Bruno

    We review the current state of agricultural systems science, focusing in particular on the capabilities and limitations of agricultural systems models. We discuss the state of models relative to five different Use Cases spanning field, farm, landscape, regional, and global spatial scales and engaging questions in past, current, and future time periods. Contributions from multiple disciplines have made major advances relevant to a wide range of agricultural system model applications at various spatial and temporal scales. Although current agricultural systems models have features that are needed for the Use Cases, we found that all of them have limitations and need to be improved. We identified common limitations across all Use Cases, namely 1) a scarcity of data for developing, evaluating, and applying agricultural system models and 2) inadequate knowledge systems that effectively communicate model results to society. We argue that these limitations are greater obstacles to progress than gaps in conceptual theory or available methods for using system models. New initiatives on open data show promise for addressing the data problem, but there also needs to be a cultural change among agricultural researchers to ensure that data for addressing the range of Use Cases are available for future model improvements and applications. We conclude that multiple platforms and multiple models are needed for model applications for different purposes. The Use Cases provide a useful framework for considering capabilities and limitations of existing models and data.

  16. Capabilities and applications of the Program to Optimize Simulated Trajectories (POST). Program summary document

    NASA Technical Reports Server (NTRS)

    Brauer, G. L.; Cornick, D. E.; Stevenson, R.

    1977-01-01

    The capabilities and applications of the three-degree-of-freedom (3DOF) version and the six-degree-of-freedom (6DOF) version of the Program to Optimize Simulated Trajectories (POST) are summarized. The document supplements the detailed program manuals by providing additional information that motivates and clarifies basic capabilities, input procedures, applications, and computer requirements of these programs. The information will enable prospective users to evaluate the programs and to determine whether they are applicable to their problems, and will give managerial personnel enough information to evaluate the capabilities of the programs. The report describes the POST structure, formulation, input and output procedures, sample cases, and computer requirements, and provides answers to basic questions concerning planet and vehicle modeling, simulation accuracy, optimization capabilities, and general input rules. Several sample cases are presented.

  17. A Study of Fan Stage/Casing Interaction Models

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Carney, Kelly; Gallardo, Vicente

    2003-01-01

    The purpose of the present study is to investigate the performance of several existing and new blade-case interaction modeling capabilities that are compatible with the large system simulations used to capture structural response during blade-out events. Three contact models are examined for simulating the interactions between a rotor bladed disk and a case: a radial and a linear gap element and a new element based on a hydrodynamic formulation. The first two models are currently available in commercial finite element codes such as NASTRAN and have been shown to perform adequately for simulating rotor-case interactions. The hydrodynamic model, although not readily available in commercial codes, may prove to be better able to characterize rotor-case interactions.

  18. On the pursuit of a nuclear development capability: The case of the Cuban nuclear program

    NASA Astrophysics Data System (ADS)

    Benjamin-Alvarado, Jonathan Calvert

    1998-09-01

    While there have been many excellent descriptive accounts of modernization schemes in developing states, energy development studies based on prevalent modernization theory have been rare. Moreover, heretofore there have been very few analyses of efforts to develop a nuclear energy capability by developing states. Rarely have these analyses employed social science research methodologies. The purpose of this study was to develop a general analytical framework, based on such a methodology to analyze nuclear energy development and to utilize this framework for the study of the specific case of Cuba's decision to develop nuclear energy. The analytical framework developed focuses on a qualitative tracing of the process of Cuban policy objectives and implementation to develop a nuclear energy capability, and analyzes the policy in response to three models of modernization offered to explain the trajectory of policy development. These different approaches are the politically motivated modernization model, the economic and technological modernization model and the economic and energy security model. Each model provides distinct and functionally differentiated expectations for the path of development toward this objective. Each model provides expected behaviors to external stimuli that would result in specific policy responses. In the study, Cuba's nuclear policy responses to stimuli from domestic constraints and intensities, institutional development, and external influences are analyzed. The analysis revealed that in pursuing the nuclear energy capability, Cuba primarily responded by filtering most of the stimuli through the twin objectives of economic rationality and technological advancement. 
Based upon the Cuban policy responses to the domestic and international stimuli, the study concluded that the economic and technological modernization model of nuclear energy development offered a more complete explanation of the trajectory of policy development than either the politically-motivated or economic and energy security models. The findings of this case pose some interesting questions for the general study of energy programs in developing states. By applying the analytical framework employed in this study to a number of other cases, perhaps the understanding of energy development schemes may be expanded through future research.

  19. RESOLVING NEIGHBORHOOD-SCALE AIR TOXICS MODELING: A CASE STUDY IN WILMINGTON, CALIFORNIA

    EPA Science Inventory

    Air quality modeling is useful for characterizing exposures to air pollutants. While models typically provide results on regional scales, there is a need for refined modeling approaches capable of resolving concentrations on the scale of tens of meters, across modeling domains 1...

  20. Computer software tool REALM for sustainable water allocation and management.

    PubMed

    Perera, B J C; James, B; Kularathna, M D U

    2005-12-01

    REALM (REsource ALlocation Model) is a generalised computer simulation package that models harvesting and bulk distribution of water resources within a water supply system. It is a modeling tool, which can be applied to develop specific water allocation models. Like other water resource simulation software tools, REALM uses mass-balance accounting at nodes, while the movement of water within carriers is subject to capacity constraints. It uses a fast network linear programming algorithm to optimise the water allocation within the network during each simulation time step, in accordance with user-defined operating rules. This paper describes the main features of REALM and provides potential users with an appreciation of its capabilities. In particular, it describes two case studies covering major urban and rural water supply systems. These case studies illustrate REALM's capabilities in the use of stochastically generated data in water supply planning and management, modeling of environmental flows, and assessing security of supply issues.
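    The mass-balance accounting at nodes and capacity-constrained carriers that REALM describes can be sketched in a few lines. The single-source network, priority rule, and names below are illustrative assumptions, not REALM's actual formulation, which solves a network linear program at each simulation time step.

```python
# Hedged sketch of mass-balance water allocation: a single source supplies
# demand nodes in priority order through capacity-limited carriers.
# (REALM itself optimises the whole network with a network LP solver.)

def allocate(supply, demands, capacities):
    """demands: list of (node, demand) in priority order;
    capacities: carrier capacity per node.
    Returns (allocation per node, unallocated remainder)."""
    remaining = supply
    allocation = {}
    for node, demand in demands:
        sent = min(demand, capacities[node], remaining)  # capacity + mass balance
        allocation[node] = sent
        remaining -= sent
    return allocation, remaining

alloc, spill = allocate(
    supply=100.0,
    demands=[("urban", 60.0), ("rural", 70.0)],
    capacities={"urban": 50.0, "rural": 40.0},
)
# urban is capped by its 50-unit carrier, rural by its 40-unit carrier,
# leaving 10 units unallocated at the source.
```

    A priority loop like this only approximates an LP solution; REALM's network LP can trade off allocations across the whole system at once.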

  1. Chemical Modeling for Studies of GeoTRACE Capabilities

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Geostationary measurements of tropospheric pollutants with high spatial and temporal resolution will revolutionize the understanding and predictions of the chemically linked global pollutants aerosols and ozone. However, the capabilities of proposed geostationary instruments, particularly GeoTRACE, have not been thoroughly studied with model simulations. Such model simulations are important to answer the questions and allay the concerns that have been expressed in the atmospheric sciences community about the feasibility of such measurements. We proposed a suite of chemical transport model simulations using the EPA Models-3 chemical transport model, which obtains its meteorology from the MM-5 mesoscale model. The model output consists of gridded abundances of chemical pollutants and meteorological parameters every 30-60 minutes for cases that have occurred in the Eastern United States. This output was intended to be used to test the GeoTRACE capability to retrieve the tropospheric columns of these pollutants.

  2. Best practices for evaluating the capability of nondestructive evaluation (NDE) and structural health monitoring (SHM) techniques for damage characterization

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Annis, Charles; Sabbagh, Harold A.; Lindgren, Eric A.

    2016-02-01

    A comprehensive approach to NDE and SHM characterization error (CE) evaluation is presented that follows the framework of the 'ahat-versus-a' regression analysis for POD assessment. Characterization capability evaluation is typically more complex with respect to current POD evaluations and thus requires engineering and statistical expertise in the model-building process to ensure all key effects and interactions are addressed. Justifying the statistical model choice with underlying assumptions is key. Several sizing case studies are presented with detailed evaluations of the most appropriate statistical model for each data set. The use of a model-assisted approach is introduced to help assess the reliability of NDE and SHM characterization capability under a wide range of part, environmental and damage conditions. Best practices of using models are presented for both an eddy current NDE sizing and vibration-based SHM case studies. The results of these studies highlight the general protocol feasibility, emphasize the importance of evaluating key application characteristics prior to the study, and demonstrate an approach to quantify the role of varying SHM sensor durability and environmental conditions on characterization performance.

  3. The role of outside-school factors in science education: a two-stage theoretical model linking Bourdieu and Sen, with a case study

    NASA Astrophysics Data System (ADS)

    Gokpinar, Tuba; Reiss, Michael

    2016-05-01

    The literature in science education highlights the potentially significant role of outside-school factors such as parents, cultural contexts and role models in students' formation of science attitudes and aspirations, and their attainment in science classes. In this paper, building on and linking Bourdieu's key concepts of habitus, cultural and social capital, and field with Sen's capability approach, we develop a model of students' science-related capability development. Our model proposes that the role of outside-school factors is twofold, first, in providing an initial set of science-related resources (i.e. habitus, cultural and social capital), and then in conversion of these resources to science-related capabilities. The model also highlights the distinction between science-related functionings (outcomes achieved by individuals) and science-related capabilities (ability to achieve desired functionings), and argues that it is necessary to consider science-related capability development in evaluating the effectiveness of science education. We then test our theoretical model with an account of three Turkish immigrant students' science-related capabilities and the role of outside-school factors in forming and extending these capabilities. We use student and parent interviews, student questionnaires and in-class observations to provide an analysis of how outside-school factors influence these students' attitudes, aspirations and attainment in science.

  4. OSATE Overview & Community Updates

    DTIC Science & Technology

    2015-02-15

    Delange, Julien

    ...main language capabilities; modeling patterns & model samples for beginners; Error-Model examples; EMV2 model constructs; demonstration of tools; case...

  5. Modeling a Civil Event Case Study for Consequence Management Using the IMPRINT Forces Module

    NASA Technical Reports Server (NTRS)

    Gacy, Marc; Gosakan, Mala; Eckdahl, Angela; Miller, Jeffrey R.

    2012-01-01

    A critical challenge in the Consequence Management (CM) domain is the appropriate allocation of necessary and skilled military and civilian personnel and materiel resources in unexpected emergencies. To aid this process we used the Forces module in the Improved Performance Research Integration Tool (IMPRINT). This module enables analysts to enter personnel and equipment capabilities, prioritized schedules and numbers available, along with unexpected emergency requirements, in order to assess force response requirements. Using a suspected terrorist threat on a college campus, we developed a test case model which exercised the capabilities of the module, including the scope and scale of operations. The model incorporates data from multiple sources, including daily schedules and frequency of events such as fire calls. Our preliminary results indicate that the model can predict potential decreases in civilian emergency response coverage due to an involved unplanned incident requiring significant portions of police, fire and civil response teams.

  6. A robot sets a table: a case for hybrid reasoning with different types of knowledge

    NASA Astrophysics Data System (ADS)

    Mansouri, Masoumeh; Pecora, Federico

    2016-09-01

    An important contribution of AI to Robotics is the model-centred approach, whereby competent robot behaviour stems from automated reasoning in models of the world which can be changed to suit different environments, physical capabilities and tasks. However, models need to capture diverse (and often application-dependent) aspects of the robot's environment and capabilities. They must also have good computational properties, as robots need to reason while they act in response to perceived context. In this article, we investigate the use of a meta-CSP-based technique to interleave reasoning in diverse knowledge types. We reify the approach through a robotic waiter case study, for which a particular selection of spatial, temporal, resource and action KR formalisms is made. Using this case study, we discuss general principles pertaining to the selection of appropriate KR formalisms and jointly reasoning about them. The resulting integration is evaluated both formally and experimentally on real and simulated robotic platforms.

  7. Revised Reynolds Stress and Triple Product Models

    NASA Technical Reports Server (NTRS)

    Olsen, Michael E.; Lillard, Randolph P.

    2017-01-01

    Revised versions of Lag methodology Reynolds-stress and triple product models are applied to accepted test cases to assess the improvement, or lack thereof, in the prediction capability of the models. The Bachalo-Johnson bump flow is shown as an example for this abstract submission.

  8. Partial Validation of Multibody Program to Optimize Simulated Trajectories II (POST II) Parachute Simulation With Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben; Queen, Eric M.

    2002-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model and are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.
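
The massless spring-damper treatment of a connection line can be sketched as follows. This is an illustrative model only, not POST II's actual implementation; the stiffness `k`, damping `c`, and natural length `L0` are hypothetical inputs.

```python
import math

def line_force(p1, v1, p2, v2, L0, k, c):
    """Force on body 1 from a massless spring-damper connection line.

    Illustrative sketch (k, c, L0 are hypothetical, not POST II
    internals). A line carries tension when taut but cannot push,
    so the force is clipped at zero.
    """
    d = [b - a for a, b in zip(p1, p2)]
    L = math.sqrt(sum(x * x for x in d))
    u = [x / L for x in d]                  # unit vector toward body 2
    stretch = L - L0
    if stretch <= 0.0:                      # slack line: no force
        return [0.0, 0.0, 0.0]
    rate = sum((vb - va) * ux for va, vb, ux in zip(v1, v2, u))
    tension = max(k * stretch + c * rate, 0.0)
    return [tension * ux for ux in u]
```

Clipping the force at zero captures the key physical property of a flexible line: it resists being stretched but goes slack rather than resisting compression.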

  9. Hybrid supply chain model for material requirement planning under financial constraints: A case study

    NASA Astrophysics Data System (ADS)

    Curci, Vita; Dassisti, Michele; Mula, Josefa; Díaz Madroñero, Manuel

    2014-10-01

    Supply chain models (SCM) are potentially capable of integrating different aspects in supporting decision making for enterprise management tasks. The aim of this paper is to propose a hybrid mathematical programming model for the optimization of production requirements resources planning. The preliminary model was conceived bottom-up from a real industrial case, oriented to maximizing cash flow. Despite the intense computational effort required to converge to a solution, the optimisation produced good results for the objective function.

  10. Interfacing the Generalized Fluid System Simulation Program with the SINDA/G Thermal Program

    NASA Technical Reports Server (NTRS)

    Schallhorn, Paul; Palmiter, Christopher; Farmer, Jeffery; Lycans, Randall; Tiller, Bruce

    2000-01-01

    A general purpose, one-dimensional fluid flow code has been interfaced with the thermal analysis program SINDA/G. The flow code, GFSSP, is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena, including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development was conducted in two phases. This paper describes the first phase, which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling. The second phase of the interface development, full transient conjugate heat transfer modeling, will be addressed in a later paper. Phase 1 development has been benchmarked to an analytical solution with excellent agreement. Additional test cases for each development phase demonstrate desired features of the interface. The results of the benchmark case, three additional test cases and a practical application are presented herein.

  11. The Community WRF-Hydro Modeling System Version 4 Updates: Merging Toward Capabilities of the National Water Model

    NASA Astrophysics Data System (ADS)

    McAllister, M.; Gochis, D.; Dugger, A. L.; Karsten, L. R.; McCreight, J. L.; Pan, L.; Rafieeinasab, A.; Read, L. K.; Sampson, K. M.; Yu, W.

    2017-12-01

    The community WRF-Hydro modeling system is publicly available and provides researchers and operational forecasters a flexible and extensible capability for performing multi-scale, multi-physics hydrologic modeling that can be run independently of, or fully interactively with, the WRF atmospheric model. The core WRF-Hydro physics model contains very high-resolution descriptions of terrestrial hydrologic process representations such as land-atmosphere exchanges of energy and moisture, snowpack evolution, infiltration, terrain routing, channel routing, basic reservoir representation and hydrologic data assimilation. Complementing the core physics components of WRF-Hydro are an ecosystem of pre- and post-processing tools that facilitate the preparation of terrain and meteorological input data, an open-source hydrologic model evaluation toolset (Rwrfhydro), hydrologic data assimilation capabilities with DART and advanced model visualization capabilities. The National Center for Atmospheric Research (NCAR), through collaborative support from the National Science Foundation and other funding partners, provides community support for the entire WRF-Hydro system through a variety of mechanisms. This presentation summarizes the enhanced user support capabilities that are being developed for the community WRF-Hydro modeling system. These products and services include a new website, open-source code repositories, documentation and user guides, test cases, online training materials, live hands-on training sessions, an email listserv, and individual user support via email through a new help desk ticketing system. The WRF-Hydro modeling system and supporting tools, which now include re-gridding scripts and model calibration, have recently been updated to Version 4 and are merging toward capabilities of the National Water Model.

  12. Meeting the Growing Demand for Sustainability-Focused Management Education: A Case Study of a PRME Academic Institution

    ERIC Educational Resources Information Center

    Young, Suzanne; Nagpal, Swati

    2013-01-01

    The current business landscape has created the impetus to develop management graduates with capabilities that foster responsible leadership and sustainability. Through the lens of Gitsham's 3C Model (Complexity, Context and Connection) of graduate capabilities, this paper discusses the experience of implementing the United Nations Principles for…

  13. Using Evaluation To Build Organizational Performance and Learning Capability: A Strategy and a Method.

    ERIC Educational Resources Information Center

    Brinkerhoff, Robert O.; Dressler, Dennis

    2002-01-01

    Discusses the causes of variability of training impact and problems with previous models for evaluation of training. Presents the Success Case Evaluation approach as a way to measure the impact of training and build learning capability to increase the business value of training by focusing on a small number of trainees. (Author/LRW)

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

    The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability employed is based on MPACT, a three-dimensional (3-D) whole core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate reactor core response with respect to departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at the hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from the RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating quasi-steady state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.
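
The DNBR figure of merit used above has a simple form: the ratio of predicted critical heat flux to local heat flux at each axial node, with the minimum along the channel setting the thermal margin. A minimal sketch (the values are hypothetical, not CTF/VERA-CS output):

```python
def min_dnbr(chf, q_local):
    """Minimum departure-from-nucleate-boiling ratio along a channel.

    DNBR = predicted critical heat flux / local heat flux at each
    axial node; the minimum over the channel governs the margin.
    Inputs are hypothetical illustrative values.
    """
    return min(c / q for c, q in zip(chf, q_local))

# Two axial nodes (made-up numbers): the margin is set by the second node.
margin = min_dnbr([3.0, 2.6], [1.0, 2.0])
```

A "more DNB limiting" case, in these terms, is simply one whose minimum DNBR is smaller.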

  15. Determination of the Underlying Task Scheduling Algorithm for an Ada Runtime System

    DTIC Science & Technology

    1989-12-01

    was also curious as to how well I could model the test cases with Ada programs. In particular, I wanted to see whether I could model the equal arrival...parameter relationships required to detect the execution of individual algorithms. These test cases were modeled using Ada programs. Then, the...results were analyzed to determine whether the Ada programs were capable of revealing the task scheduling algorithm used by the Ada run-time system. This

  16. Investigations of environmental effects on freeway acoustics.

    DOT National Transportation Integrated Search

    2013-05-01

    We present a generalized terrain PE (GTPE) model for sound propagation in non-uniform terrain following the work of Sack and West (1995). Results for simplified terrain cases illustrate the new model's capabilities and the effects of terrain in a n...

  17. System performance predictions for Space Station Freedom's electric power system

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Hojnicki, Jeffrey S.; Green, Robert D.; Follo, Jeffrey C.

    1993-01-01

    Space Station Freedom Electric Power System (EPS) capability to effectively deliver power to housekeeping and user loads continues to strongly influence Freedom's design and planned approaches for assembly and operations. The EPS design consists of silicon photovoltaic (PV) arrays, nickel-hydrogen batteries, and direct current power management and distribution hardware and cabling. To properly characterize the inherent EPS design capability, detailed system performance analyses must be performed for early stages as well as for the fully assembled station up to 15 years after beginning of life. Such analyses were repeatedly performed using the FORTRAN code SPACE (Station Power Analysis for Capability Evaluation) developed at the NASA Lewis Research Center over a 10-year period. SPACE combines orbital mechanics routines, station orientation/pointing routines, PV array and battery performance models, and a distribution system load-flow analysis to predict EPS performance. Time-dependent performance degradation, low-Earth-orbit environmental interactions, and EPS architecture build-up are incorporated in SPACE. Results from two typical SPACE analytical cases are presented: (1) an electric load driven case and (2) a maximum EPS capability case.

  18. Simulation and Modeling Capability for Standard Modular Hydropower Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Kevin M.; Smith, Brennan T.; Witt, Adam M.

    Grounded in the stakeholder-validated framework established in Oak Ridge National Laboratory’s SMH Exemplary Design Envelope Specification, this report on Simulation and Modeling Capability for Standard Modular Hydropower (SMH) Technology provides insight into the concepts, use cases, needs, gaps, and challenges associated with modeling and simulating SMH technologies. The SMH concept envisions a network of generation, passage, and foundation modules that achieve environmentally compatible, cost-optimized hydropower using standardization and modularity. The development of standardized modeling approaches and simulation techniques for SMH (as described in this report) will pave the way for reliable, cost-effective methods for technology evaluation, optimization, and verification.

  19. BASINS and WEPP Climate Assessment Tools (CAT): Case Study Guide to Potential Applications (External Review Draft)

    EPA Science Inventory

    This draft report supports application of two recently developed water modeling tools, the BASINS and WEPP climate assessment tools. The report presents a series of short case studies designed to illustrate the capabilities of these tools for conducting scenario based assessments...

  20. Numerical model estimating the capabilities and limitations of the fast Fourier transform technique in absolute interferometry

    NASA Astrophysics Data System (ADS)

    Talamonti, James J.; Kay, Richard B.; Krebs, Danny J.

    1996-05-01

    A numerical model was developed to emulate the capabilities of systems performing noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer. By processing computer-simulated data through our model, we project the ultimate precision for ideal data and for data containing AM-FM noise. The precision is shown to be limited by nonlinearities in the laser scan.
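
The window comparison at the heart of such a model can be illustrated with a toy DFT. This is a sketch only; the signal length, tone frequency, and probe bin below are arbitrary choices, not the paper's parameters.

```python
import cmath
import math

def hanning(N):
    """Hanning (Hann) window, one of the three window types named above."""
    return [0.5 - 0.5 * math.cos(2.0 * math.pi * n / (N - 1)) for n in range(N)]

def dft_mag(x):
    """Magnitude spectrum via a direct DFT (O(N^2), fine for a demo)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N)]

N = 64
f0 = 10.4  # tone between bins, so energy leaks into neighbouring bins
sig = [math.cos(2.0 * math.pi * f0 * n / N) for n in range(N)]

rect = dft_mag(sig)                                    # no window
hann = dft_mag([s * w for s, w in zip(sig, hanning(N))])

# Relative leakage at a bin far from the peak: the window suppresses it
# by orders of magnitude, which is what makes peak isolation possible.
rect_leak = rect[25] / max(rect)
hann_leak = hann[25] / max(hann)
```

The trade-off the paper's model quantifies is that stronger sidelobe suppression (Blackman, Gaussian) comes at the cost of a wider main lobe, which affects how precisely the peak location can be determined.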

  1. Computer-Aided System Engineering and Analysis (CASE/A) Programmer's Manual, Version 5.0

    NASA Technical Reports Server (NTRS)

    Knox, J. C.

    1996-01-01

    The Computer Aided System Engineering and Analysis (CASE/A) Version 5.0 Programmer's Manual provides the programmer and user with information regarding the internal structure of the CASE/A 5.0 software system. CASE/A 5.0 is a trade study tool that provides modeling/simulation capabilities for analyzing environmental control and life support systems and active thermal control systems. CASE/A has been successfully used in studies such as the evaluation of carbon dioxide removal in the space station. CASE/A modeling provides a graphical and command-driven interface for the user. This interface allows the user to construct a model by placing equipment components in a graphical layout of the system hardware, then connect the components via flow streams and define their operating parameters. Once the equipment is placed, the simulation time and other control parameters can be set to run the simulation based on the model constructed. After completion of the simulation, graphical plots or text files can be obtained for evaluation of the simulation results over time. Additionally, users have the capability to control the simulation and extract information at various times in the simulation (e.g., control equipment operating parameters over the simulation time or extract plot data) by using "User Operations (OPS) Code." This OPS code is written in FORTRAN with a canned set of utility subroutines for performing common tasks. CASE/A version 5.0 software runs under the VAX VMS(Trademark) environment. It utilizes the Tektronix 4014(Trademark) graphics display system and the VT100(Trademark) text manipulation/display system.

  2. Existing and Required Modeling Capabilities for Evaluating ATM Systems and Concepts

    NASA Technical Reports Server (NTRS)

    Odoni, Amedeo R.; Bowman, Jeremy; Delahaye, Daniel; Deyst, John J.; Feron, Eric; Hansman, R. John; Khan, Kashif; Kuchar, James K.; Pujet, Nicolas; Simpson, Robert W.

    1997-01-01

    ATM systems throughout the world are entering a period of major transition and change. The combination of important technological developments and of the globalization of the air transportation industry has necessitated a reexamination of some of the fundamental premises of existing Air Traffic Management (ATM) concepts. New ATM concepts have to be examined, concepts that may place more emphasis on: strategic traffic management; planning and control; partial decentralization of decision-making; and added reliance on the aircraft to carry out strategic ATM plans, with ground controllers confined primarily to a monitoring and supervisory role. 'Free Flight' is a case in point. In order to study, evaluate and validate such new concepts, the ATM community will have to rely heavily on models and computer-based tools/utilities, covering a wide range of issues and metrics related to safety, capacity and efficiency. The state of the art in such modeling support is adequate in some respects, but clearly deficient in others. It is the objective of this study to assist in: (1) assessing the strengths and weaknesses of existing fast-time models and tools for the study of ATM systems and concepts and (2) identifying and prioritizing the requirements for the development of additional modeling capabilities in the near future. A three-stage process has been followed to this purpose: 1. Through the analysis of two case studies involving future ATM system scenarios, as well as through expert assessment, modeling capabilities and supporting tools needed for testing and validating future ATM systems and concepts were identified and described. 2. Existing fast-time ATM models and support tools were reviewed and assessed with regard to the degree to which they offer the capabilities identified under Step 1. 3. The findings of Steps 1 and 2 were combined to draw conclusions about (1) the best capabilities currently existing, (2) the types of concept testing and validation that can be carried out reliably with such existing capabilities and (3) the currently unavailable modeling capabilities that should receive high priority for near-term research and development. It should be emphasized that the study is concerned only with the class of 'fast-time' analytical and simulation models. 'Real-time' models, which typically involve humans-in-the-loop, comprise another extensive class that is not addressed in this report. However, the relationship between some of the fast-time models reviewed and a few well-known real-time models is identified in several parts of this report, and the potential benefits from the combined use of these two classes of models, a very important subject, are discussed in chapters 4 and 7.

  3. An Investigation of Bomb Cyclogenesis in NCEP's CFS Model

    NASA Astrophysics Data System (ADS)

    Alvarez, F. M.; Eichler, T.; Gottschalck, J.

    2008-12-01

    With the concerns, impacts and consequences of climate change increasing, the need for climate models to simulate daily weather is very important. Given the improvements in resolution and physical parameterizations, climate models are becoming capable of resolving extreme weather events. A particular type of extreme event which has large impacts on transportation, industry and the general public is a rapidly intensifying cyclone referred to as a "bomb." In this study, bombs are investigated using the National Centers for Environmental Prediction's (NCEP) Climate Forecast System (CFS) model. We generate storm tracks based on 6-hourly sea-level pressure (SLP) from long-term climate runs of the CFS model. Investigation of this dataset has revealed that the CFS model is capable of producing bombs. We show a case study of a bomb in the CFS model and demonstrate that it has characteristics similar to those observed. Since the CFS model is capable of producing bombs, future work will focus on trends in their frequency and intensity so that the potential role of the bomb in climate change can be assessed.
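
The conventional "bomb" criterion (Sanders and Gyakum, 1980) is compact enough to state in a few lines; the abstract does not say whether the CFS study applied exactly this threshold, so treat it as the standard definition rather than the authors' method.

```python
import math

def is_bomb(dp_24h_hpa, lat_deg):
    """Sanders-Gyakum (1980) 'bergeron' test: a cyclone is a bomb if
    its central pressure falls by at least 24 hPa in 24 h, with the
    threshold scaled geostrophically by sin(latitude)/sin(60 deg).
    """
    threshold = (24.0 * math.sin(math.radians(abs(lat_deg)))
                 / math.sin(math.radians(60.0)))
    return dp_24h_hpa >= threshold
```

At 45 degrees latitude the threshold drops to roughly 19.6 hPa per 24 h, so a given deepening rate qualifies more easily at lower latitudes.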

  4. Evaluating Pillar Industry’s Transformation Capability: A Case Study of Two Chinese Steel-Based Cities

    PubMed Central

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities by building an evaluation model. Using Text Mining and the Document Explorer technique as a way of extracting text features, the 200 most frequently used words are derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques are then applied to select 53 indicators, determine their weights and establish an index system for evaluating the transformation capability of the pillar industry of China’s steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method is applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model is developed using Fuzzy Comprehensive Evaluation (FCE). The cities’ abilities to carry out industrial transformation are evaluated with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns. PMID:26422266

  5. Evaluating Pillar Industry's Transformation Capability: A Case Study of Two Chinese Steel-Based Cities.

    PubMed

    Li, Zhidong; Marinova, Dora; Guo, Xiumei; Gao, Yuan

    2015-01-01

    Many steel-based cities in China were established between the 1950s and 1960s. After more than half a century of development and boom, these cities are starting to decline and industrial transformation is urgently needed. This paper focuses on evaluating the transformation capability of resource-based cities by building an evaluation model. Using Text Mining and the Document Explorer technique as a way of extracting text features, the 200 most frequently used words are derived from 100 publications related to steel- and other resource-based cities. The Expert Evaluation Method (EEM) and Analytic Hierarchy Process (AHP) techniques are then applied to select 53 indicators, determine their weights and establish an index system for evaluating the transformation capability of the pillar industry of China's steel-based cities. Using real data and expert reviews, the improved Fuzzy Relation Matrix (FRM) method is applied to two case studies in China, namely Panzhihua and Daye, and the evaluation model is developed using Fuzzy Comprehensive Evaluation (FCE). The cities' abilities to carry out industrial transformation are evaluated with concerns expressed for the case of Daye. The findings have policy implications for the potential and required industrial transformation in the two selected cities and other resource-based towns.
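
The AHP weighting step, deriving indicator weights from a pairwise-comparison matrix via its principal eigenvector, can be sketched as follows. The toy 2x2 matrix is purely illustrative; the paper's 53-indicator matrices are not reproduced here.

```python
def ahp_weights(M, iters=100):
    """Priority weights from an AHP pairwise-comparison matrix, taken
    as the principal eigenvector found by power iteration.
    Illustrative sketch, not the paper's actual matrices.
    """
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]   # renormalise so the weights sum to 1
    return w

# Toy example: criterion A judged 3x as important as criterion B,
# so the reciprocal entry M[1][0] is 1/3.
w = ahp_weights([[1.0, 3.0], [1.0 / 3.0, 1.0]])
```

For this consistent 2x2 matrix the weights converge to approximately [0.75, 0.25]; in a full AHP study one would also compute a consistency ratio before accepting the weights.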

  6. The importance of data curation on QSAR Modeling - PHYSPROP open data as a case study. (QSAR 2016)

    EPA Science Inventory

    During the last few decades many QSAR models and tools have been developed at the US EPA, including the widely used EPISuite. During this period the arsenal of computational capabilities supporting cheminformatics has broadened dramatically with multiple software packages. These ...

  7. Technology, FID, and Afghanistan: A Model for Aviation Capacity

    DTIC Science & Technology

    2017-04-05

    Force. Through case study, it analyzes how FID definitions and goals eroded under political pressure. Following this, Afghanistan is used to show...national aviation technology capacity, where these nations are weak, and which societal strengths to leverage. Case studies demonstrate how it can be...the other way around. In the case of Afghanistan, the U.S. Air Force (USAF) attempted to cultivate advanced aviation capabilities within a low

  8. BASINS and WEPP Climate Assessment Tools (CAT): Case ...

    EPA Pesticide Factsheets

    This draft report supports application of two recently developed water modeling tools, the BASINS and WEPP climate assessment tools. The report presents a series of short case studies designed to illustrate the capabilities of these tools for conducting scenario based assessments of the potential future effects of climate change on water resources. This report presents a series of short, illustrative case studies using the BASINS and WEPP climate assessment tools.

  9. Use of advanced modeling techniques to optimize thermal packaging designs.

    PubMed

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. 
Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed during its validation. Thermal packaging is routinely used by the pharmaceutical industry to provide passive and active temperature control of their thermally sensitive products from manufacture through end use (termed the cold chain). In this study, the authors focus on passive temperature control (passive control does not require any external energy source and is entirely based on specific and/or latent heat of shipper components). As temperature-sensitive pharmaceuticals are being transported over longer distances, cold chain reliability is essential. To achieve reliability, a significant amount of time and resources must be invested in design, test, and production of optimized temperature-controlled packaging solutions. To shorten the cumbersome trial and error approach (design/test/design/test …), computer simulation (virtual prototyping and testing of thermal shippers) is a promising method. Although several companies have attempted to develop such a tool, there has been limited success to date. Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a coupled conductive/convective-based thermal shipper. A modeling technique capable of correctly capturing shipper thermal behavior can be used to develop packaging designs more quickly, reducing up-front costs while also improving shipper performance.
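
The conduction-only baseline the authors compare against can be illustrated with a 1-D explicit finite-difference wall model. The geometry, diffusivity, and fixed-temperature boundary treatment below are hypothetical simplifications for illustration, not the authors' shipper model.

```python
def wall_temps(T_in, T_out, alpha, dx, dt, nx, steps):
    """Explicit 1-D transient conduction across a shipper wall, with
    the inner face held at the payload temperature and the outer face
    at ambient (a simplistic boundary treatment for illustration).
    Stability requires r = alpha*dt/dx**2 <= 0.5.
    """
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable"
    T = [T_in] * nx
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, nx - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
        Tn[0], Tn[-1] = T_in, T_out   # fixed-temperature boundaries
        T = Tn
    return T

# 5 cm wall split into 10 cells, foam-like diffusivity (made-up values);
# run long enough to approach the steady linear profile.
profile = wall_temps(T_in=5.0, T_out=35.0, alpha=1e-7,
                     dx=0.005, dt=62.5, nx=11, steps=2000)
```

The coupled conduction-convection behavior the paper validates cannot be captured this way, which is exactly the gap (ΔT > 6 °C in the payload region) the case study demonstrates.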

  10. Insights into Broker - User interactions from the BCube Project

    NASA Astrophysics Data System (ADS)

    Santoro, M.; Nativi, S.; Pearlman, J.; Khalsa, S. J. S.; Fulweiler, R. W.

    2015-12-01

    Introducing a broad brokering capability for science interoperability and cross-disciplinary research has many challenges and perspectives. Developing a business model that is sustainable is one aspect. Engaging and supporting the science research community is a second. In working with this community, significant added value must be provided. Various facets of the broker capability, from discovery and access to data transformations and mapping, were examined and applied to science use cases. In this presentation, we look at these facets and their benefits and challenges for specific use cases in the areas of ocean, coastal and Arctic research. Specific recommendations for future implementations will be discussed.

  11. Creating Regional Futures: A Scenario-Based Inter- and Transdisciplinary Case Study as a Model for Applied Student-Centred Learning in Geography

    ERIC Educational Resources Information Center

    Fromhold-Eisebith, Martina; Freyer, Bernhard; Mose, Ingo; Muhar, Andreas; Vilsmaier, Ulli

    2009-01-01

    Human geography students face changing qualification requirements due to a shift towards new topics, educational tasks and professional options regarding issues of spatial development. This "practical turn" raises the importance of inter- and transdisciplinary work, management and capability building skills, with case study projects and…

  12. Extension of HCDstruct for Transonic Aeroservoelastic Analysis of Unconventional Aircraft Concepts

    NASA Technical Reports Server (NTRS)

    Quinlan, Jesse R.; Gern, Frank H.

    2017-01-01

    A substantial effort has been made to implement an enhanced aerodynamic modeling capability in the Higher-fidelity Conceptual Design and structural optimization tool (HCDstruct). This additional capability is needed for a rapid, physics-based method of modeling advanced aircraft concepts at risk of structural failure due to dynamic aeroelastic instabilities. To adequately predict these instabilities, in particular for transonic applications, a generalized aerodynamic matching algorithm was implemented to correct the doublet-lattice model available in Nastran using solution data from a priori computational fluid dynamics analysis. This new capability is demonstrated for two tube-and-wing aircraft configurations, including a Boeing 737-200 for implementation validation and the NASA D8 as a first use case. Results validate the current implementation of the aerodynamic matching utility and demonstrate the importance of using such a method for aircraft configurations featuring fuselage-wing aerodynamic interaction.

  13. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.
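
    The validation step described above, comparing design sensitivities against an overall finite difference, can be sketched as follows. The response function and step size here are illustrative stand-ins, not the MSC/NASTRAN grid-sensitivity formulation.

```python
# Verify an "analytic" design sensitivity against an overall central
# finite difference, mirroring the validation approach described above.
# The response function is a hypothetical scalar structural response.

def response(x):
    # Toy structural response as a function of one grid design variable x.
    return x**3 + 2.0 * x

def analytic_sensitivity(x):
    # d(response)/dx computed analytically (the design-sensitivity path).
    return 3.0 * x**2 + 2.0

def fd_sensitivity(x, h=1e-5):
    # Central finite-difference estimate used as the validation reference.
    return (response(x + h) - response(x - h)) / (2.0 * h)

x0 = 1.5
err = abs(analytic_sensitivity(x0) - fd_sensitivity(x0))
```

    A small `err` confirms the two sensitivity paths agree at the chosen design point; in practice this check is repeated for each design variable.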

  14. An Observation Capability Semantic-Associated Approach to the Selection of Remote Sensing Satellite Sensors: A Case Study of Flood Observations in the Jinsha River Basin

    PubMed Central

    Hu, Chuli; Li, Jie; Lin, Xin

    2018-01-01

    Observation schedules depend upon the accurate understanding of a single sensor's observation capability and the interrelated observation capability information on multiple sensors. The general ontologies for sensors and observations are abundant. However, few observation capability ontologies for satellite sensors are available, and no study has described the dynamic associations among the observation capabilities of multiple sensors used for integrated observational planning. This limitation results in a failure to realize effective sensor selection. This paper develops a sensor observation capability association (SOCA) ontology model that revolves around the task-sensor-observation capability (TSOC) ontology pattern. The pattern is developed considering the stimulus-sensor-observation (SSO) ontology design pattern, which focuses on facilitating sensor selection for one observation task. The core aim of the SOCA ontology model is to achieve an observation capability semantic association. A prototype system called SemOCAssociation was developed, and an experiment was conducted for flood observations in the Jinsha River basin in China. The results of this experiment verified that the SOCA ontology-based association method can help sensor planners intuitively and accurately make evidence-based sensor selection decisions for a given flood observation task, which facilitates efficient and effective observational planning for flood satellite sensors. PMID:29883425

  15. An Observation Capability Semantic-Associated Approach to the Selection of Remote Sensing Satellite Sensors: A Case Study of Flood Observations in the Jinsha River Basin.

    PubMed

    Hu, Chuli; Li, Jie; Lin, Xin; Chen, Nengcheng; Yang, Chao

    2018-05-21

    Observation schedules depend upon the accurate understanding of a single sensor's observation capability and the interrelated observation capability information on multiple sensors. The general ontologies for sensors and observations are abundant. However, few observation capability ontologies for satellite sensors are available, and no study has described the dynamic associations among the observation capabilities of multiple sensors used for integrated observational planning. This limitation results in a failure to realize effective sensor selection. This paper develops a sensor observation capability association (SOCA) ontology model that revolves around the task-sensor-observation capability (TSOC) ontology pattern. The pattern is developed considering the stimulus-sensor-observation (SSO) ontology design pattern, which focuses on facilitating sensor selection for one observation task. The core aim of the SOCA ontology model is to achieve an observation capability semantic association. A prototype system called SemOCAssociation was developed, and an experiment was conducted for flood observations in the Jinsha River basin in China. The results of this experiment verified that the SOCA ontology-based association method can help sensor planners intuitively and accurately make evidence-based sensor selection decisions for a given flood observation task, which facilitates efficient and effective observational planning for flood satellite sensors.

  16. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
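
    The core idea of a probabilistic power analysis, propagating input uncertainties through the model to get a distribution of power capability rather than a single value, can be sketched with a Monte Carlo loop. The toy solar-array model and the parameter values below are illustrative assumptions, not the actual SPACE model.

```python
import random

# Monte Carlo propagation of input uncertainty through a hypothetical
# solar-array power model: sample uncertain inputs, evaluate the model,
# and summarize the resulting power-capability distribution.

def array_power(efficiency, area_m2, irradiance_w_m2):
    # Toy deterministic power model (watts).
    return efficiency * area_m2 * irradiance_w_m2

def monte_carlo_power(n_samples=10_000, seed=42):
    rng = random.Random(seed)  # seeded for reproducibility
    samples = []
    for _ in range(n_samples):
        eff = rng.gauss(0.14, 0.01)    # uncertain cell efficiency
        irr = rng.gauss(1361.0, 20.0)  # uncertain irradiance (W/m^2)
        samples.append(array_power(eff, 100.0, irr))
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / (n_samples - 1)
    return mean, var ** 0.5

mean_kw, sd_kw = [v / 1000.0 for v in monte_carlo_power()]
```

    The standard deviation of the output is exactly the sensitivity information a single-valued deterministic run cannot provide.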

  17. Multi-Robot Search for a Moving Target: Integrating World Modeling, Task Assignment and Context

    DTIC Science & Technology

    2016-12-01

    Case Study: Our approach to coordination was initially motivated and developed in RoboCup soccer games. In fact, it has been first deployed on a team of...features a rather accurate model of the behavior and capabilities of the humanoid robot in the field. In the soccer case study, our goal is to...on experiments carried out with a team of humanoid robots in a soccer scenario and a team of mobile bases in an office environment. I. INTRODUCTION

  18. Using long-term ground-based HSRL and geostationary observations in combination with model re-analysis to help disentangle local and long-range transported aerosols in Seoul, South Korea

    NASA Astrophysics Data System (ADS)

    Phillips, C.; Holz, R.; Eloranta, E. W.; Reid, J. S.; Kim, S. W.; Kuehn, R.; Marais, W.

    2017-12-01

    The University of Wisconsin High Spectral Resolution Lidar (HSRL) has been continuously operating at Seoul National University as part of the Korea-United States Air Quality Study (KORUS-AQ). The instrument was installed in March of 2016 and continues to operate as of August 2017, providing a truly unique data set to monitor aerosol and cloud properties. With its capability to separate the molecular and particulate scattering, the HSRL is able to detect extremely thin aerosol layers with sub-molecular scattering sensitivity. The system deployed in Seoul has depolarization measurements at 532 nm as well as a near IR channel at 1064 nm providing discrimination between dust, smoke, pollution, water clouds, and ice clouds. As will be presented, these capabilities can be used to produce three channel combined RGB images that provide visualization of small changes in the aerosol properties. A primary motivation of KORUS-AQ was to determine the relative effects of transported pollution and local pollution on air quality in Seoul. We hypothesize that HSRL-based image analysis algorithms combined with satellite and model re-analysis has the potential to identify cases when remote sources of aerosols and pollution are advected into the boundary layer with impacts to the surface air quality. To facilitate this research we have developed the capability to combine ten-minute geostationary imagery from Himawari-8, nearby radiosondes, model output, surface PM measurements, and AERONET data over the HSRL site. On a case-by-case basis, it is possible to separate layers of aerosols with different scattering properties using these tools. Additionally, a preliminary year-long aerosol climatology with integrated geo-stationary retrievals and modeling data will be presented. The focus is on investigating correlations between the HSRL aerosol measurements (depolarization, color ratio, extinction, and lidar ratio) with the model output and aerosol sources. 
This analysis will use recently developed algorithms that automate the HSRL cloud and aerosol masking, providing the capability to characterize the seasonal changes in aerosol radiative properties and supplement the month-long field campaign with almost two years of continuous HSRL observations.

  19. Prediction of Acoustic Loads Generated by Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Perez, Linamaria; Allgood, Daniel C.

    2011-01-01

    NASA Stennis Space Center is one of the nation's premier facilities for conducting large-scale rocket engine testing. As liquid rocket engines vary in size, so do the acoustic loads that they produce. When these acoustic loads reach very high levels, they may cause damage both to humans and to structures surrounding the testing area. To prevent this damage, prediction tools are used to estimate the spectral content and levels of the acoustics generated by the rocket engine plumes and to model their propagation through the surrounding atmosphere. Prior to the current work, two different acoustic prediction tools were being used at Stennis Space Center, each having its own advantages and disadvantages depending on the application. Therefore, a new prediction tool was created, using the NASA SP-8072 handbook as a guide, to replicate the same prediction methods as the previous codes while eliminating the drawbacks the individual codes had. Aside from replicating the previous modeling capability in a single framework, additional modeling functions were added, thereby expanding the current modeling capability. To verify that the new code could reproduce the same predictions as the previous codes, two verification test cases were defined. These verification test cases also served as validation cases, as the predicted results were compared to actual test data.

  20. Nuclear Engine System Simulation (NESS). Version 2.0: Program user's guide

    NASA Technical Reports Server (NTRS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman

    1993-01-01

    This Program User's Guide discusses the Nuclear Thermal Propulsion (NTP) engine system design features and capabilities modeled in the Nuclear Engine System Simulation (NESS) Version 2.0 program (referred to as NESS throughout the remainder of this document), as well as its operation. NESS was upgraded to include many new modeling capabilities not available in the original version delivered to NASA LeRC in Dec. 1991. NESS's new features include the following: (1) an improved input format; (2) an advanced solid-core NERVA-type reactor system model (ENABLER 2); (3) a bleed-cycle engine system option; (4) an axial-turbopump design option; (5) an automated pump-out turbopump assembly sizing option; (6) an off-design gas generator engine cycle design option; (7) updated hydrogen properties; (8) an improved output format; and (9) personal computer operation capability. Sample design cases are presented in the user's guide that demonstrate many of the new features associated with this upgraded version of NESS, as well as design modeling features associated with the original version of NESS.

  1. Comparison of CFD simulations with experimental data for a tanker model advancing in waves

    NASA Astrophysics Data System (ADS)

    Orihara, Hideo

    2011-03-01

    In this paper, CFD simulation results for a tanker model are compared with experimental data over a range of wave conditions to verify a capability to predict the sea-keeping performance of practical hull forms. CFD simulations are conducted using WISDAM-X code which is capable of unsteady RANS calculations in arbitrary wave conditions. Comparisons are made of unsteady surface pressures, added resistance and ship motions in regular waves for cases of fully-loaded and ballast conditions of a large tanker model. It is shown that the simulation results agree fairly well with the experimental data, and that WISDAM-X code can predict sea-keeping performance of practical hull forms.

  2. Issues in knowledge representation to support maintainability: A case study in scientific data preparation

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Kandt, R. Kirk; Roden, Joseph; Burleigh, Scott; King, Todd; Joy, Steve

    1992-01-01

    Scientific data preparation is the process of extracting usable scientific data from raw instrument data. This task involves noise detection (and subsequent noise classification and flagging or removal), extraction of data from compressed forms, and construction of derivative or aggregate data (e.g., spectral densities or running averages). A software system called PIPE provides intelligent assistance to users developing scientific data preparation plans using a programming language called Master Plumber. PIPE provides this assistance capability by using a process description to create a dependency model of the scientific data preparation plan. This dependency model can then be used to verify syntactic and semantic constraints on processing steps to perform limited plan validation. PIPE also provides capabilities for using this model to assist in debugging faulty data preparation plans. In this case, the process model is used to focus the developer's attention upon those processing steps and data elements that were used in computing the faulty output values. Finally, the dependency model of a plan can be used to perform plan optimization and runtime estimation. These capabilities allow scientists to spend less time developing data preparation procedures and more time on scientific analysis tasks. Because the scientific data processing modules (called fittings) evolve to match scientists' needs, issues regarding maintainability are of prime importance in PIPE. This paper describes the PIPE system and how issues in maintainability affected the knowledge representation used in PIPE to capture knowledge about the behavior of fittings.

  3. Unified Deep Learning Architecture for Modeling Biology Sequence.

    PubMed

    Wu, Hongjie; Cao, Chengyuan; Xia, Xiaoyan; Lu, Qiang

    2017-10-09

    Prediction of the spatial structure or function of biological macromolecules based on their sequence remains an important challenge in bioinformatics. When modeling biological sequences using traditional sequencing models, characteristics, such as long-range interactions between basic units, the complicated and variable output of labeled structures, and the variable length of biological sequences, usually lead to different solutions on a case-by-case basis. This study proposed the use of bidirectional recurrent neural networks based on long short-term memory or a gated recurrent unit to capture long-range interactions by designing the optional reshape operator to adapt to the diversity of the output labels and implementing a training algorithm to support the training of sequence models capable of processing variable-length sequences. Additionally, the merge and pooling operators enhanced the ability to capture short-range interactions between basic units of biological sequences. The proposed deep-learning model and its training algorithm might be capable of solving currently known biological sequence-modeling problems through the use of a unified framework. We validated our model on one of the most difficult biological sequence-modeling problems currently known, with our results indicating the ability of the model to obtain predictions of protein residue interactions that exceeded the accuracy of current popular approaches by 10% based on multiple benchmarks.
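
    A key practical step behind training recurrent models on variable-length biological sequences is batching them with padding and a mask so padded positions can be excluded from the loss. The following minimal sketch illustrates that step only; the vocabulary and sequences are illustrative, not from the paper.

```python
# Encode variable-length sequences, pad them to a common batch length,
# and build a 0/1 mask so a recurrent model can ignore padded positions.

PAD = 0
VOCAB = {"A": 1, "C": 2, "G": 3, "T": 4}  # toy nucleotide vocabulary

def batch_sequences(seqs):
    """Return (padded, mask): integer-encoded sequences padded to the
    batch maximum, plus a mask marking real (1) vs padded (0) positions."""
    encoded = [[VOCAB[ch] for ch in s] for s in seqs]
    max_len = max(len(e) for e in encoded)
    padded = [e + [PAD] * (max_len - len(e)) for e in encoded]
    mask = [[1] * len(e) + [0] * (max_len - len(e)) for e in encoded]
    return padded, mask

padded, mask = batch_sequences(["ACGT", "GG", "TACGA"])
```

    In a real framework the mask (or an equivalent packed representation) is multiplied into the per-position loss so padding contributes nothing to the gradient.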

  4. A Game-Theoretical Model to Improve Process Plant Protection from Terrorist Attacks.

    PubMed

    Zhang, Laobing; Reniers, Genserik

    2016-12-01

    The New York City 9/11 terrorist attacks urged people from academia as well as from industry to pay more attention to operational security research. The required focus in this type of research is human intention. Unlike safety-related accidents, security-related accidents have a deliberate nature, and one has to face intelligent adversaries with characteristics that traditional probabilistic risk assessment techniques are not capable of dealing with. In recent years, the mathematical tool of game theory, capable of handling intelligent players, has been used in a variety of ways in terrorism risk assessment. In this article, we analyze the general intrusion detection system in process plants and propose a game-theoretical model for security management in such plants. Players in our model are assumed to be rational, and they play the game with complete information. Both the pure-strategy and the mixed-strategy solutions are explored and explained. We demonstrate our model with an illustrative case and find that in this case no pure-strategy but, instead, a mixed-strategy Nash equilibrium exists. © 2016 Society for Risk Analysis.
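
    The mixed-strategy solution concept can be illustrated with the indifference calculation for a toy 2x2 zero-sum attacker-defender game. The payoff numbers below are hypothetical; the paper's plant-protection game is richer, but the principle, each player randomizing so the opponent is indifferent between actions, is the same.

```python
# u[i][j]: attacker payoff when attacking target i while the defender
# patrols target j (zero-sum: defender payoff is -u[i][j]).
u = [[0.0, 5.0],
     [4.0, 1.0]]

den = u[0][0] - u[0][1] - u[1][0] + u[1][1]

# Defender patrols target 0 with probability q, chosen so the attacker
# gets the same expected payoff from either target (indifference).
q = (u[1][1] - u[0][1]) / den

# Attacker hits target 0 with probability p, making the defender
# indifferent between the two patrol choices.
p = (u[1][1] - u[1][0]) / den

# Expected attacker payoff at equilibrium (the value of the game).
value = q * u[0][0] + (1 - q) * u[0][1]
```

    With these payoffs there is no pure-strategy equilibrium: whichever target the defender always patrols, the attacker profits by switching, so both players must randomize.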

  5. Second-Moment RANS Model Verification and Validation Using the Turbulence Modeling Resource Website (Invited)

    NASA Technical Reports Server (NTRS)

    Eisfeld, Bernhard; Rumsey, Chris; Togiti, Vamshi

    2015-01-01

    The implementation of the SSG/LRR-omega differential Reynolds stress model into the NASA flow solvers CFL3D and FUN3D and the DLR flow solver TAU is verified by studying the grid convergence of the solution of three different test cases from the Turbulence Modeling Resource Website. The model's predictive capabilities are assessed based on four basic and four extended validation cases also provided on this website, involving attached and separated boundary layer flows, effects of streamline curvature and secondary flow. Simulation results are compared against experimental data and predictions by the eddy-viscosity models of Spalart-Allmaras (SA) and Menter's Shear Stress Transport (SST).

  6. Comparison of BrainTool to other UML modeling and model transformation tools

    NASA Astrophysics Data System (ADS)

    Nikiforova, Oksana; Gusarovs, Konstantins

    2017-07-01

    Over the last 30 years, numerous model-based software generation systems were offered, targeting problems with development productivity and the resulting software quality. CASE tools developed to date are advertised as having "complete code-generation capabilities". Nowadays the Object Management Group (OMG) makes similar claims regarding Unified Modeling Language (UML) models at different levels of abstraction, and it is said that software development automation using CASE tools enables a significant level of automation. Today's CASE tools usually offer a combination of several features: traditional ones start with a model editor and a model repository, while the most advanced ones add a code generator (possibly using a scripting or domain-specific language (DSL)), a transformation tool to produce new artifacts from manually created ones, and a transformation definition editor for defining new transformations. The present paper contains the results of a comparison of CASE tools (mainly UML editors) against the level of automation they offer.

  7. SHARP Multiphysics Tutorials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. Q.; Shemon, E. R.; Mahadevan, Vijay S.

    SHARP, developed under the NEAMS Reactor Product Line, is an advanced modeling and simulation toolkit for the analysis of advanced nuclear reactors. SHARP is comprised of three physics modules, currently including neutronics, thermal hydraulics, and structural mechanics. SHARP empowers designers to produce accurate results for modeling physical phenomena that have been identified as important for nuclear reactor analysis. SHARP can use existing physics codes and take advantage of existing infrastructure capabilities in the MOAB framework and the coupling driver/solver library, the Coupled Physics Environment (CouPE), which utilizes the widely used, scalable PETSc library. This report aims at identifying the coupled-physics simulation capability of SHARP by introducing the demonstration example called sahex in advance of the SHARP release expected by Mar 2016. sahex consists of 6 fuel pins with cladding, 1 control rod, sodium coolant, and an outer duct wall that encloses all the other components. This example is carefully chosen to demonstrate the proof of concept for solving more complex demonstration examples such as the EBR-II assembly and the ABTR full core. The workflow of preparing the input files, running the case, and analyzing the results is demonstrated in this report. Moreover, an extension of the sahex model called sahex_core, which adds six homogenized neighboring assemblies to the fully heterogeneous sahex model, is presented to test homogenization capabilities in both Nek5000 and PROTEUS. Some primary information on the configuration and build aspects of the SHARP toolkit, which includes the capability to auto-download dependencies and configure/install with optimal flags in an architecture-aware fashion, is also covered by this report. Step-by-step instructions are provided to help users create their own cases. Details on these processes will be provided in the SHARP user manual that will accompany the first release.

  8. Research on the tourism resource development from the perspective of network capability-Taking Wuxi Huishan Ancient Town as an example

    NASA Astrophysics Data System (ADS)

    Bao, Yanli; Hua, Hefeng

    2017-03-01

    Network capability is an enterprise's capability to set up, manage, maintain, and use a variety of relations between enterprises, and to obtain resources for improving competitiveness. Tourism in China is in a transformation period from sightseeing to leisure and vacation. Scenic spots as well as tourist enterprises can learn from other enterprises in the process of resource development, and build up their own network relations in order to get resources for their survival and development. Through the effective management of network relations, the performance of resource development will be improved. By analyzing the literature on network capability and a case analysis of Wuxi Huishan Ancient Town, the role of network capability in tourism resource development is explored and a resource development path is built from the perspective of network capability. Finally, a tourism resource development process model based on network capability is proposed. This model mainly includes setting up a network vision, resource identification, resource acquisition, resource utilization, and tourism project development. In these steps, network construction, network management, and improving network center status are key points.

  9. Large Eddy Simulation Modeling of Flashback and Flame Stabilization in Hydrogen-Rich Gas Turbines Using a Hierarchical Validation Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clemens, Noel

    This project was a combined computational and experimental effort to improve predictive capability for boundary layer flashback of premixed swirl flames relevant to gas-turbine power plants operating with high-hydrogen-content fuels. During the course of this project, significant progress in modeling was made on four major fronts: 1) use of direct numerical simulation of turbulent flames to understand the coupling between the flame and the turbulent boundary layer; 2) improved modeling capability for flame propagation in stratified pre-mixtures; 3) improved portability of computer codes using the OpenFOAM platform to facilitate transfer to industry and other researchers; and 4) application of LES to flashback in swirl combustors, and a detailed assessment of its capabilities and limitations for predictive purposes. A major component of the project was an experimental program that focused on developing a rich experimental database of boundary layer flashback in swirl flames. Both methane and high-hydrogen fuels, including effects of elevated pressure (1 to 5 atm), were explored. For this project, a new model swirl combustor was developed. Kilohertz-rate stereoscopic PIV and chemiluminescence imaging were used to investigate the flame propagation dynamics. In addition to the planar measurements, a technique capable of detecting the instantaneous, time-resolved 3D flame front topography was developed and applied successfully to investigate the flow-flame interaction. The UT measurements and legacy data were used in a hierarchical validation approach where flows with increasingly complex physics were used for validation. First, component models were validated with DNS and literature data in simplified configurations; this was followed by validation with the UT 1-atm flashback cases, and then the UT high-pressure flashback cases. The new models and portable code represent a major improvement over what was available before this project was initiated.

  10. Stakeholder approach for evaluating organizational change projects.

    PubMed

    Peltokorpi, Antti; Alho, Antti; Kujala, Jaakko; Aitamurto, Johanna; Parvinen, Petri

    2008-01-01

    This paper aims to create a model for evaluating organizational change initiatives from a stakeholder resistance viewpoint. The paper presents a model to evaluate change projects and their expected benefits. Factors affecting the challenge to implement change were defined based on stakeholder theory literature. The authors test the model's practical validity for screening change initiatives to improve operating room productivity. Change initiatives can be evaluated using six factors: the effect of the planned intervention on stakeholders' actions and position; stakeholders' capability to influence the project's implementation; motivation to participate; capability to change; change complexity; and management capability. The presented model's generalizability should be explored by filtering presented factors through a larger number of historical cases operating in different healthcare contexts. The link between stakeholders, the change challenge and the outcomes of change projects needs to be empirically tested. The proposed model can be used to prioritize change projects, manage stakeholder resistance and establish a better organizational and professional competence for managing healthcare organization change projects. New insights into existing stakeholder-related understanding of change project successes are provided.
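
    The six evaluation factors named above lend themselves to a simple screening score for prioritizing change projects. The sketch below is an illustrative aggregation under assumed equal weights and a 1-5 scale (higher = harder to implement), not a calibrated instrument from the paper.

```python
# Aggregate the six stakeholder-resistance factors into one challenge
# index for screening change initiatives. Weights and scale are
# hypothetical assumptions for illustration.

FACTORS = [
    "effect_on_stakeholders",  # effect on stakeholders' actions/position
    "influence_on_project",    # stakeholders' capability to influence
    "motivation",              # (lack of) motivation to participate
    "capability_to_change",    # (lack of) capability to change
    "complexity",              # change complexity
    "management_capability",   # (lack of) management capability
]

def change_challenge(scores, weights=None):
    """Weighted mean of 1-5 factor scores; higher means a harder change."""
    weights = weights or {f: 1.0 for f in FACTORS}
    total = sum(weights[f] * scores[f] for f in FACTORS)
    return total / sum(weights.values())

project = {"effect_on_stakeholders": 4, "influence_on_project": 3,
           "motivation": 2, "capability_to_change": 3,
           "complexity": 5, "management_capability": 2}
challenge = change_challenge(project)
```

    Ranking candidate initiatives by such an index is one way to operationalize the screening use the paper describes for operating room productivity projects.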

  11. A Learning Framework for Control-Oriented Modeling of Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubio-Herrero, Javier; Chandan, Vikas; Siegel, Charles M.

    Buildings consume a significant amount of energy worldwide. Several building optimization and control use cases require models of energy consumption which are control oriented, have high predictive capability, imposes minimal data pre-processing requirements, and have the ability to be adapted continuously to account for changing conditions as new data becomes available. Data driven modeling techniques, that have been investigated so far, while promising in the context of buildings, have been unable to simultaneously satisfy all the requirements mentioned above. In this context, deep learning techniques such as Recurrent Neural Networks (RNNs) hold promise, empowered by advanced computational capabilities and bigmore » data opportunities. In this paper, we propose a deep learning based methodology for the development of control oriented models for building energy management and test in on data from a real building. Results show that the proposed methodology outperforms other data driven modeling techniques significantly. We perform a detailed analysis of the proposed methodology along dimensions such as topology, sensitivity, and downsampling. Lastly, we conclude by envisioning a building analytics suite empowered by the proposed deep framework, that can drive several use cases related to building energy management.« less

  12. Performability modeling with continuous accomplishment sets

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1979-01-01

    A general modeling framework that permits the definition, formulation, and evaluation of performability is described. It is shown that performability relates directly to system effectiveness, and is a proper generalization of both performance and reliability. A hierarchical modeling scheme is used to formulate the capability function used to evaluate performability. The case in which performance variables take values in a continuous accomplishment set is treated explicitly.
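
    When the accomplishment set is discretized, performability reduces to an expectation of the capability function over the system's structural states. The states, probabilities, and capability values below are illustrative, not from the paper.

```python
# Minimal performability computation: expected accomplishment over
# system states, combining reliability (state probabilities) with
# performance (the capability function).

# Probability of ending the mission in each structural state
# (e.g., number of surviving processors), from a reliability model.
state_probs = {2: 0.90, 1: 0.08, 0: 0.02}

# Capability function: accomplishment level delivered in each state
# (1.0 = full performance, 0.0 = total failure).
capability = {2: 1.0, 1: 0.6, 0: 0.0}

performability = sum(state_probs[s] * capability[s] for s in state_probs)
```

    This illustrates why performability generalizes both measures: setting every nonzero capability to 1.0 recovers plain reliability, while conditioning on the fully working state recovers plain performance.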

  13. Optimization of seasonal ARIMA models using differential evolution - simulated annealing (DESA) algorithm in forecasting dengue cases in Baguio City

    NASA Astrophysics Data System (ADS)

    Addawe, Rizavel C.; Addawe, Joel M.; Magadia, Joselito C.

    2016-10-01

    Accurate forecasting of dengue cases would significantly improve epidemic prevention and control capabilities. This paper attempts to provide useful models for forecasting dengue epidemics specific to the young and adult populations of Baguio City. To capture the seasonal variations in dengue incidence, this paper develops a robust modeling approach to identify and estimate seasonal autoregressive integrated moving average (SARIMA) models in the presence of additive outliers. Since least squares estimators are not robust in the presence of outliers, we suggest a robust estimation based on winsorized and reweighted least squares estimators. A hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is used to identify and estimate the parameters of the optimal SARIMA model. The method is applied to the monthly reported dengue cases in Baguio City, Philippines.
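
    The simulated-annealing half of the DESA hybrid can be sketched as follows: propose a random move, always accept improvements, and accept worsening moves with a temperature-dependent probability that decays as the schedule cools. The objective here is a toy stand-in for a SARIMA model-fit criterion (e.g., an information criterion), not the paper's actual estimator.

```python
import math
import random

def objective(x):
    # Hypothetical model-fit criterion to minimize (stand-in for AIC).
    return (x - 3.0) ** 2 + 1.0

def simulated_annealing(x0=0.0, temp=2.0, cooling=0.95, steps=2000, seed=7):
    rng = random.Random(seed)  # seeded for reproducibility
    x, best = x0, x0
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)       # random local move
        delta = objective(cand) - objective(x)
        # Accept improvements always; accept worse moves with
        # probability exp(-delta / temp) (Metropolis criterion).
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        if objective(x) < objective(best):
            best = x
        temp *= cooling                          # cooling schedule
    return best

best_x = simulated_annealing()
```

    In DESA the candidate moves come from differential evolution's population-based recombination rather than a single random walk, but the acceptance rule is the same.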

  14. A cellular automata model for traffic flow based on kinetics theory, vehicles capabilities and driver reactions

    NASA Astrophysics Data System (ADS)

    Guzmán, H. A.; Lárraga, M. E.; Alvarez-Icaza, L.; Carvajal, J.

    2018-02-01

    In this paper, a reliable cellular automata (CA) model is presented, oriented to faithfully reproduce deceleration and acceleration according to realistic driver reactions when vehicles with different deceleration capabilities are considered. The model focuses on describing complex traffic phenomena by coding in its rules the basic mechanisms of driver behavior, vehicle capabilities, and kinetics, while preserving simplicity. In particular, vehicle kinetics is based on uniformly accelerated motion, rather than on impulsive accelerated motion as in most existing CA models. Thus, the proposed model calculates in an analytic way three safety-preserving distances to determine the best action a follower vehicle can take under a worst-case scenario. Moreover, the prediction analysis guarantees that, under the proper assumptions, collisions between vehicles cannot happen at any future time. Simulation results indicate that all interactions of heterogeneous vehicles (i.e., car-truck, truck-car, car-car, and truck-truck) are properly reproduced by the model. In addition, the model overcomes one of the major limitations of CA models for traffic modeling: the inability to perform a smooth approach to slower or stopped vehicles. Moreover, the model is also capable of reproducing most empirical findings, including the backward speed of the downstream front of a traffic jam and the different congested traffic patterns induced by a system with open boundary conditions and an on-ramp. Like most CA models, integer values are used to make the model run faster, which makes the proposed model suitable for real-time traffic simulation of large networks.
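
    The worst-case reasoning behind such safety-preserving distances can be sketched with uniformly accelerated (constant-deceleration) braking kinematics: the follower must be able to stop behind the leader even if the leader brakes at its maximum rate. The formula and the values below are an illustrative sketch, not the paper's three specific distances.

```python
# Worst-case safe following gap under constant decelerations, using
# uniformly accelerated motion (v^2 / 2b stopping distances).

def stopping_distance(v, b):
    """Distance (m) to stop from speed v (m/s) at deceleration b (m/s^2)."""
    return v * v / (2.0 * b)

def safe_gap(v_follower, b_follower, v_leader, b_leader, reaction_time=1.0):
    """Gap keeping the follower collision-free if the leader brakes fully:
    reaction-time travel plus the difference in stopping distances."""
    gap = (v_follower * reaction_time
           + stopping_distance(v_follower, b_follower)
           - stopping_distance(v_leader, b_leader))
    return max(gap, 0.0)

# Truck (weak brakes, 3 m/s^2) following a car (strong brakes, 6 m/s^2)
# at the same speed of 20 m/s: the truck needs a substantial gap.
gap = safe_gap(v_follower=20.0, b_follower=3.0, v_leader=20.0, b_leader=6.0)
```

    This asymmetry between braking capabilities is exactly why heterogeneous (car-truck) interactions need distances computed per vehicle pair rather than a single fixed headroom.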

  15. Rapid Operational Access and Maneuver Support (ROAMS) Platform for Improved Military Logistics Lines of Communication and Operational Vessel Routing

    DTIC Science & Technology

    2017-06-01

    case study in a northeastern American metropolitan area. METHODOLOGY: The ROAMS platform provides expanded analysis, model automation, and enhanced...shoals. An initial route for such operations is selected much like the military logistics case. Subsequent adjustments to routes may be done on an ad...IX-45 June 2017 8 CASE STUDY: The ROAMS platform was applied to a large, northeast American metropolitan region to demonstrate the capability of

  16. Unsteady transonic potential flow over a flexible fuselage

    NASA Technical Reports Server (NTRS)

    Gibbons, Michael D.

    1993-01-01

    A flexible fuselage capability has been developed and implemented within version 1.2 of the CAP-TSD code. The capability required adding time dependent terms to the fuselage surface boundary conditions and the fuselage surface pressure coefficient. The new capability will allow modeling the effect of a flexible fuselage on the aeroelastic stability of complex configurations. To assess the flexible fuselage capability several steady and unsteady calculations have been performed for slender fuselages with circular cross-sections. Steady surface pressures are compared with experiment at transonic flight conditions. Unsteady cross-sectional lift is compared with other analytical results at a low subsonic speed and a transonic case has been computed. The comparisons demonstrate the accuracy of the flexible fuselage modifications.

  17. GSTARS computer models and their applications, Part II: Applications

    USGS Publications Warehouse

    Simoes, F.J.M.; Yang, C.T.

    2008-01-01

    In part 1 of this two-paper series, a brief summary of the basic concepts and theories used in developing the Generalized Stream Tube model for Alluvial River Simulation (GSTARS) computer models was presented. Part 2 provides examples that illustrate some of the capabilities of the GSTARS models and how they can be applied to solve a wide range of river and reservoir sedimentation problems. Laboratory and field case studies are used, and the examples show representative applications of both the earlier and the more recent versions of GSTARS. Some of the more recent capabilities implemented in GSTARS3, one of the latest versions of the series, are also discussed here in more detail. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  18. Automated Test Case Generation for an Autopilot Requirement Prototype

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael

    2011-01-01

    Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and in the rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution, which allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component of the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.

  19. Numerical Investigation of Flapwise-Torsional Vibration Model of a Smart Section Blade with Microtab

    DOE PAGES

    Li, Nailu; Balas, Mark J.; Yang, Hua; ...

    2015-01-01

    This paper presents a method to develop an aeroelastic model of a smart section blade equipped with a microtab. The model is suitable for potential passive vibration control study of the blade section in classic flutter. Equations of the model are described by the nondimensional flapwise and torsional vibration modes coupled with the aerodynamic model based on the Theodorsen theory and aerodynamic effects of the microtab based on wind tunnel experimental data. The aeroelastic model is validated using numerical data available in the literature and then utilized to analyze the microtab control capability on the flutter instability case and the divergence instability case. The effectiveness of the microtab is investigated with the scenarios of different output controllers and actuation deployments for both instability cases. The numerical results show that the microtab can effectively suppress both vibration modes with the appropriate choice of the output feedback controller.

  20. Coupled incompressible Smoothed Particle Hydrodynamics model for continuum-based modelling sediment transport

    NASA Astrophysics Data System (ADS)

    Pahar, Gourabananda; Dhar, Anirban

    2017-04-01

    A coupled solenoidal Incompressible Smoothed Particle Hydrodynamics (ISPH) model is presented for the simulation of sediment displacement in an erodible bed. The coupled framework consists of two separate incompressible modules: (a) a granular module and (b) a fluid module. The granular module considers a friction-based rheology model to calculate deviatoric stress components from pressure. The module is validated against the Bagnold flow profile and two standardized test cases of sediment avalanching. The fluid module resolves fluid flow inside and outside the porous domain. An interaction force pair containing fluid pressure, a viscous term and a drag force acts as a bridge between the two flow modules. The coupled model is validated against three dambreak flow cases with different initial conditions of the movable bed. The simulated results are in good agreement with experimental data. A demonstrative case considering the effect of granular column failure under full/partial submergence highlights the capability of the coupled model for application in generalized scenarios.

  1. Numerical Investigation of Flapwise-Torsional Vibration Model of a Smart Section Blade with Microtab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Nailu; Balas, Mark J.; Yang, Hua

    2015-01-01

    This study presents a method to develop an aeroelastic model of a smart section blade equipped with a microtab. The model is suitable for potential passive vibration control study of the blade section in classic flutter. Equations of the model are described by the nondimensional flapwise and torsional vibration modes coupled with the aerodynamic model based on the Theodorsen theory and aerodynamic effects of the microtab based on wind tunnel experimental data. The aeroelastic model is validated using numerical data available in the literature and then utilized to analyze the microtab control capability on the flutter instability case and the divergence instability case. The effectiveness of the microtab is investigated with the scenarios of different output controllers and actuation deployments for both instability cases. The numerical results show that the microtab can effectively suppress both vibration modes with the appropriate choice of the output feedback controller.

  2. Advanced Booster Composite Case/Polybenzimidazole Nitrile Butadiene Rubber Insulation Development

    NASA Technical Reports Server (NTRS)

    Gentz, Steve; Taylor, Robert; Nettles, Mindy

    2015-01-01

    The NASA Engineering and Safety Center (NESC) was requested to examine processing sensitivities (e.g., cure temperature control/variance, debonds, density variations) of polybenzimidazole nitrile butadiene rubber (PBI-NBR) insulation, case fiber, and resin systems and to evaluate nondestructive evaluation (NDE) and damage tolerance methods/models required to support human-rated composite motor cases. The proposed use of composite motor cases in Blocks IA and II was expected to increase performance capability through optimizing operating pressure and increasing propellant mass fraction. This assessment was to support the evaluation of risk reduction for large booster component development/fabrication, NDE of low mass-to-strength ratio material structures, and solid booster propellant formulation as requested in the Space Launch System NASA Research Announcement for Advanced Booster Engineering Demonstration and/or Risk Reduction. Composite case materials and high-energy propellants represent an enabling capability in the Agency's ability to provide affordable, high-performing advanced booster concepts. The NESC team was requested to provide an assessment of co- and multiple-cure processing of composite case and PBI-NBR insulation materials and evaluation of high-energy propellant formulations.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadgu, Teklu; Appel, Gordon John

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, as documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  4. Models of Conflict, with Explicit Representation of Command and Control Capabilities and Vulnerabilities.

    DTIC Science & Technology

    1981-02-01

    converting Bu's to Bal's. Case 2: Pa,cc(t) = 0.25 + 0.075t, PaA(t) = 1.0. In this case a space (and hence time) varying representation of the attrition rate... Attrition rates can be made time (space) dependent. Note that the attrition law is assumed, for illustration only, to be in accordance with a... Systems Research Lab., Dept. of Industrial Eng., University of Michigan. Gaver, D.P. and Tonguc, K. (1979) "Modelling the influence of information on

  5. Hydrologic modeling for water resource assessment in a developing country: the Rwanda case study

    Treesearch

    Steve McNulty; Erika Cohen Mack; Ge Sun; Peter Caldwell

    2016-01-01

    Accurate water resources assessment using hydrologic models can be a challenge anywhere, but particularly for developing countries with limited financial and technical resources. Developing countries could most benefit from the water resource planning capabilities that hydrologic models can provide, but these countries are least likely to have the data needed to run ...

  6. Neural network for processing both spatial and temporal data with time based back-propagation

    NASA Technical Reports Server (NTRS)

    Villarreal, James A. (Inventor); Shelton, Robert O. (Inventor)

    1993-01-01

    Neural networks are computing systems modeled after the paradigm of the biological brain. For years, researchers using various forms of neural networks have attempted to model the brain's information processing and decision-making capabilities. Neural network algorithms have impressively demonstrated the capability of modeling spatial information. On the other hand, the application of parallel distributed models to the processing of temporal data has been severely restricted. The invention introduces a novel technique which adds the dimension of time to the well-known back-propagation neural network algorithm. In the space-time neural network disclosed herein, the synaptic weights between two artificial neurons (processing elements) are replaced with an adaptable-adjustable filter. Instead of a single synaptic weight, the invention provides a plurality of weights representing not only association, but also temporal dependencies. In this case, the synaptic weights are the coefficients of the adaptable digital filters. Novelty is believed to lie in the disclosure of a processing element, and a network of such processing elements, capable of processing temporal as well as spatial data.
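    The patent's central idea, replacing a scalar synaptic weight with a short adaptive digital filter so the connection carries temporal context, can be sketched as follows. This is a simplified stand-in trained with a plain LMS rule rather than the full space-time back-propagation, and the tap count and learning rate are arbitrary choices.

```python
import numpy as np

class FIRSynapse:
    """A synapse whose 'weight' is a short adaptive FIR filter, so the
    connection encodes temporal dependencies as well as association."""
    def __init__(self, taps=3, lr=0.01, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.w = rng.normal(scale=0.1, size=taps)   # filter coefficients = synaptic weights
        self.buf = np.zeros(taps)                   # delay line of past inputs
        self.lr = lr

    def forward(self, x):
        """Push the new input into the delay line and emit the filter output."""
        self.buf = np.roll(self.buf, 1)
        self.buf[0] = x
        return float(self.w @ self.buf)

    def adapt(self, err):
        """LMS update: nudge each tap along the gradient of the squared error."""
        self.w += self.lr * err * self.buf
```

    Driving the synapse with a signal and adapting against a delayed target recovers a temporal relationship that a single scalar weight could not represent.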

  7. Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing

    NASA Astrophysics Data System (ADS)

    Rabbitt, Christopher

    This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.
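    The Modal Assurance Criterion used for these comparisons has a standard closed form: the normalized squared inner product of two mode-shape vectors. A generic implementation is short (the thesis's axisymmetric mode-alignment step is not shown here):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors.
    Returns 1.0 for identical shapes (up to complex scaling) and 0.0
    for orthogonal ones."""
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return float(num / den)
```

    Because the MAC is scale-invariant, a measured mode and its analytically predicted counterpart score 1.0 even when their amplitudes differ.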

  8. Numerical simulation of in-situ chemical oxidation (ISCO) and biodegradation of petroleum hydrocarbons using a coupled model for bio-geochemical reactive transport

    NASA Astrophysics Data System (ADS)

    Marin, I. S.; Molson, J. W.

    2013-05-01

    Petroleum hydrocarbons (PHCs) are a major source of groundwater contamination, being a worldwide and well-known problem. Formed by a complex mixture of hundreds of organic compounds (including BTEX - benzene, toluene, ethylbenzene and xylenes), many of which are toxic and persistent in the subsurface and are capable of creating a serious risk to human health. Several remediation technologies can be used to clean-up PHC contamination. In-situ chemical oxidation (ISCO) and intrinsic bioremediation (IBR) are two promising techniques that can be applied in this case. However, the interaction of these processes with the background aquifer geochemistry and the design of an efficient treatment presents a challenge. Here we show the development and application of BIONAPL/Phreeqc, a modeling tool capable of simulating groundwater flow, contaminant transport with coupled biological and geochemical processes in porous or fractured porous media. BIONAPL/Phreeqc is based on the well-tested BIONAPL/3D model, using a powerful finite element simulation engine, capable of simulating non-aqueous phase liquid (NAPL) dissolution, density-dependent advective-dispersive transport, and solving the geochemical and kinetic processes with the library Phreeqc. To validate the model, we compared BIONAPL/Phreeqc with results from the literature for different biodegradation processes and different geometries, with good agreement. We then used the model to simulate the behavior of sodium persulfate (NaS2O8) as an oxidant for BTEX degradation, coupled with sequential biodegradation in a 2D case and to evaluate the effect of inorganic geochemistry reactions. The results show the advantages of a treatment train remediation scheme based on ISCO and IBR. The numerical performance and stability of the integrated BIONAPL/Phreeqc model was also verified.

  9. Modeling the internal combustion engine

    NASA Technical Reports Server (NTRS)

    Zeleznik, F. J.; Mcbride, B. J.

    1985-01-01

    A flexible and computationally economical model of the internal combustion engine was developed for use on large digital computer systems. It is based on a system of ordinary differential equations for cylinder-averaged properties. The computer program is capable of multicycle calculations, with some parameters varying from cycle to cycle, and has restart capabilities. It can accommodate a broad spectrum of reactants, permits changes in physical properties, and offers a wide selection of alternative modeling functions without any reprogramming. It readily adapts to the amount of information available in a particular case because the model is in fact a hierarchy of five models. The models range from a simple model requiring only thermodynamic properties to a complex model demanding full combustion kinetics, transport properties, and poppet valve flow characteristics. Among its many features the model includes heat transfer, valve timing, supercharging, motoring, finite burning rates, cycle-to-cycle variations in air-fuel ratio, humid air, residual and recirculated exhaust gas, and full combustion kinetics.
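    As a much-reduced illustration of cylinder-averaged ODE modeling, the motored (adiabatic, no-combustion) compression stroke can be integrated directly from the energy balance. The harmonic volume law, compression ratio, and gas properties below are illustrative assumptions, not values from the NASA model.

```python
import numpy as np

def simulate_compression(theta, vc=5e-5, rc=10.0, gamma=1.35, T0=300.0):
    """Integrate the cylinder-averaged energy balance for a motored
    (adiabatic) compression stroke: dT = -(gamma - 1) * T * dV / V.
    theta: crank-angle grid from BDC (pi) to TDC (2*pi), in radians."""
    V = vc * (1 + 0.5 * (rc - 1) * (1 - np.cos(theta)))  # harmonic volume law
    T = np.empty_like(theta)
    T[0] = T0
    for k in range(len(theta) - 1):
        dV = V[k + 1] - V[k]
        T[k + 1] = T[k] - (gamma - 1) * T[k] * dV / V[k]
    return V, T
```

    For an ideal gas this recovers the closed form T V^(gamma-1) = const., i.e. a temperature rise by a factor of rc^(gamma-1) over the stroke; the full hierarchy of models in the abstract adds heat transfer, valve flow, and combustion kinetics on top of this skeleton.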

  10. Collaborative testing of turbulence models

    NASA Astrophysics Data System (ADS)

    Bradshaw, P.

    1992-12-01

    This project, funded by AFOSR, ARO, NASA, and ONR, was run by the writer with Profs. Brian E. Launder, University of Manchester, England, and John L. Lumley, Cornell University. Statistical data on turbulent flows, from laboratory experiments and simulations, were circulated to modelers throughout the world. This is the first large-scale project of its kind to use simulation data. The modelers returned their predictions to Stanford, for distribution to all modelers and to additional participants ('experimenters'), over 100 in all. The object was to obtain a consensus on the capabilities of present-day turbulence models and to identify which types most deserve future support. This was not completely achieved, mainly because not enough modelers could produce results for enough test cases within the duration of the project. However, a clear picture of the capabilities of various modeling groups has emerged, and the interaction has been helpful to the modelers. The results support the view that Reynolds-stress transport models are the most accurate.

  11. Leadership Development: A Senior Leader Case Study

    DTIC Science & Technology

    2014-10-01

    LIFE model Element Investigative Question Strategy How does (development program) posture (or fail to posture) leaders to meet organizational...Management How does (development program) adequately posture (or fail to posture) officer talent capable of filling talent gaps within the...LIFE model in figure 1 stems from conceptualizing and integrating elements of leadership development in the work of Stephen Cohen, Lisa Gabel

  12. AN-CASE NET-CENTRIC modeling and simulation

    NASA Astrophysics Data System (ADS)

    Baskinger, Patricia J.; Chruscicki, Mary Carol; Turck, Kurt

    2009-05-01

    The objective of mission training exercises is to immerse trainees in an environment that enables them to train as they would fight. The integration of modeling and simulation environments that can seamlessly leverage Live systems and Virtual or Constructive models (LVC), as they are available, offers a flexible and cost-effective solution for extending the "war-gaming" environment to a realistic mission experience while evolving the development of the net-centric enterprise. From concept to full production, the impact of new capabilities on the infrastructure and concept of operations can be assessed in the context of the enterprise, while also exposing them to the warfighter. Training is extended to tomorrow's tools, processes, and Tactics, Techniques and Procedures (TTPs). This paper addresses the challenges of a net-centric modeling and simulation environment that is capable of representing a net-centric enterprise. An overview of the Air Force Research Laboratory's (AFRL) Airborne Networking Component Architecture Simulation Environment (AN-CASE) is provided, as well as a discussion of how it is being used to assess technologies for the purpose of experimenting with new infrastructure mechanisms that enhance the scalability and reliability of the distributed mission operations environment.

  13. Assessments of a Turbulence Model Based on Menter's Modification to Rotta's Two-Equation Model

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.

    2013-01-01

    The main objective of this paper is to construct a turbulence model with a more reliable second equation simulating the length scale. In the present paper, we assess the length scale equation based on Menter's modification to Rotta's two-equation model. Rotta shows that a reliable second equation can be formed in an exact transport equation from the turbulent length scale L and kinetic energy. Rotta's equation is well suited for term-by-term modeling and shows some interesting features compared to other approaches. The most important difference is that the formulation leads to a natural inclusion of higher-order velocity derivatives into the source terms of the scale equation, which has the potential to enhance the capability of Reynolds-averaged Navier-Stokes (RANS) to simulate unsteady flows. The model is implemented in the PAB3D solver with complete formulation, usage methodology, and validation examples to demonstrate its capabilities. The detailed studies include grid convergence; near-wall and shear flow cases are documented and compared with experimental and Large Eddy Simulation (LES) data. The results from this formulation are as good as or better than those of the well-known SST turbulence model and much better than k-epsilon results. Overall, the study provides useful insights into the model's capability in predicting attached and separated flows.

  14. Model-Based Engineering Design for Trade Space Exploration throughout the Design Cycle

    NASA Technical Reports Server (NTRS)

    Lamassoure, Elisabeth S.; Wall, Stephen D.; Easter, Robert W.

    2004-01-01

    This paper presents ongoing work to standardize model-based system engineering as a complement to point design development in the conceptual design phase of deep space missions. It summarizes the first two steps toward practical application of this capability within the framework of concurrent engineering design teams and their customers. The first step is standard generation of system sensitivity models as the output of concurrent engineering design sessions, representing the local trade space around a point design. A review of the chosen model development process, and the results of three case study examples, demonstrate that a simple update to the concurrent engineering design process can easily capture sensitivities to key requirements. It can serve as a valuable tool to analyze design drivers and uncover breakpoints in the design. The second step is development of rough-order-of-magnitude, broad-range-of-validity design models for rapid exploration of the trade space, before selection of a point design. At least one case study demonstrated the feasibility of generating such models in a concurrent engineering session. The experiment indicated that such a capability could yield valid system-level conclusions for a trade space composed of understood elements. Ongoing efforts are assessing the practicality of developing end-to-end system-level design models for use before even convening the first concurrent engineering session, starting with modeling an end-to-end Mars architecture.

  15. Introducing WISDEM:An Integrated System Modeling for Wind Turbines and Plant (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykes, K.; Graf, P.; Scott, G.

    2015-01-01

    The National Wind Technology Center wind energy systems engineering initiative has developed an analysis platform to leverage its research capabilities toward integrating wind energy engineering and cost models across wind plants. This Wind-Plant Integrated System Design & Engineering Model (WISDEM) platform captures the important interactions between various subsystems to achieve a better understanding of how to improve system-level performance and achieve system-level cost reductions. This work illustrates a few case studies with WISDEM that focus on the design and analysis of wind turbines and plants at different system levels.

  16. Model-based Assessment for Balancing Privacy Requirements and Operational Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knirsch, Fabian; Engel, Dominik; Frincu, Marc

    2015-02-17

    The smart grid changes the way energy is produced and distributed. In addition, both energy and information are exchanged bidirectionally among participating parties. Therefore, heterogeneous systems have to cooperate effectively in order to achieve a common high-level use case, such as smart metering for billing or demand response for load curtailment. Furthermore, a substantial amount of personal data is often needed to achieve that goal. Capturing and processing personal data in the smart grid increases customer concerns about privacy, and in addition, certain statutory and operational requirements regarding privacy-aware data processing and storage have to be met. An increase of privacy constraints, however, often limits the operational capabilities of the system. In this paper, we present an approach that automates the process of finding an optimal balance between privacy requirements and operational requirements in a smart grid use case and application scenario. This is achieved by formally describing use cases in an abstract model and by finding an algorithm that determines the optimum balance by forward mapping privacy and operational impacts. For this optimal balancing algorithm, both a numeric approximation and, if feasible, an analytic assessment are presented and investigated. The system is evaluated by applying the tool to a real-world use case from the University of Southern California (USC) microgrid.
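    The balancing idea, comparing a numeric approximation against an analytic optimum of a scalarized objective (operational utility minus a weighted privacy impact), can be illustrated with a toy example. The functional forms and the weight lambda below are invented for illustration; the paper's actual forward maps are use-case specific.

```python
import numpy as np

def operational_utility(x):
    """Toy forward map: operational utility of data granularity x."""
    return np.log1p(x)

def privacy_impact(x):
    """Toy forward map: privacy impact grows quadratically with granularity."""
    return x ** 2

def numeric_balance(lam, grid=np.linspace(0.0, 2.0, 20001)):
    """Numeric approximation: scan the scalarized objective on a grid."""
    score = operational_utility(grid) - lam * privacy_impact(grid)
    return float(grid[np.argmax(score)])

def analytic_balance(lam):
    """Analytic optimum of log1p(x) - lam*x^2: solve 1/(1+x) = 2*lam*x,
    i.e. the positive root of 2*lam*x^2 + 2*lam*x - 1 = 0."""
    return float((-2 * lam + np.sqrt(4 * lam ** 2 + 8 * lam)) / (4 * lam))
```

    Agreement between the two estimates is a useful sanity check whenever a closed-form optimum exists; for the irregular forward maps of a real use case, only the numeric scan remains available.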

  17. Improving Coastal Ocean Color Validation Capabilities through Application of Inherent Optical Properties (IOPs)

    NASA Technical Reports Server (NTRS)

    Mannino, Antonio

    2008-01-01

    Understanding how the different components of seawater alter the path of incident sunlight through scattering and absorption is essential to using remotely sensed ocean color observations effectively. This is particularly apropos in coastal waters, where the different optically significant components (phytoplankton, detrital material, inorganic minerals, etc.) vary widely in concentration, often independently from one another. Inherent Optical Properties (IOPs) form the link between these biogeochemical constituents and the Apparent Optical Properties (AOPs); understanding this interrelationship is at the heart of successfully carrying out inversions of satellite-measured radiance to biogeochemical properties. While sufficient covariation of seawater constituents in case I waters typically allows empirical algorithms connecting AOPs and biogeochemical parameters to behave well, these empirical algorithms normally do not hold for case II regimes (Carder et al. 2003). Validation, in the context of ocean color remote sensing, refers to in-situ measurements used to verify or characterize algorithm products or any assumption used as input to an algorithm. In this project, validation capabilities are considered those measurement capabilities, techniques, methods, models, etc. that allow effective validation. Enhancing current validation capabilities by incorporating state-of-the-art IOP measurements and optical models is the purpose of this work. Involved in this pursuit is improving core IOP measurement capabilities (spectral, angular, spatio-temporal resolutions), improving our understanding of the behavior of analytical AOP-IOP approximations in complex coastal waters, and improving the spatial and temporal resolution of biogeochemical data for validation by applying biogeochemical-IOP inversion models so that these parameters can be computed from real-time IOP sensors with high sampling rates.
    Research cruises supported by this project provide for the collection and processing of seawater samples for biogeochemical (pigments, DOC and POC) and optical (CDOM and POM absorption coefficients) analyses, to enhance our understanding of the linkages between in-water optical measurements (IOPs and AOPs) and biogeochemical constituents and to provide a more comprehensive suite of validation products.

  18. Case study applications of the BASINS climate assessment tool (CAT)

    EPA Science Inventory

    This EPA report will illustrate the application of different climate assessment capabilities within EPA’s BASINS modeling system for assessing a range of potential questions about the effects of climate change on streamflow and water quality in different watershed settings and us...

  19. Consequence and Resilience Modeling for Chemical Supply Chains

    NASA Technical Reports Server (NTRS)

    Stamber, Kevin L.; Vugrin, Eric D.; Ehlen, Mark A.; Sun, Amy C.; Warren, Drake E.; Welk, Margaret E.

    2011-01-01

    The U.S. chemical sector produces more than 70,000 chemicals that are essential material inputs to critical infrastructure systems, such as the energy, public health, and food and agriculture sectors. Disruptions to the chemical sector can potentially cascade to other dependent sectors, resulting in serious national consequences. To address this concern, the U.S. Department of Homeland Security (DHS) tasked Sandia National Laboratories to develop a predictive consequence modeling and simulation capability for global chemical supply chains. This paper describes that capability, which includes a dynamic supply chain simulation platform called N_ABLE(tm). The paper also presents results from a case study that simulates the consequences of a Gulf Coast hurricane on selected segments of the U.S. chemical sector. The case study identified consequences that include impacted chemical facilities, cascading impacts to other parts of the chemical sector, and estimates of the lengths of chemical shortages and recovery. Overall, these simulation results can help DHS prepare for and respond to actual disruptions.

  20. Enteric disease episodes and the risk of acquiring a future sexually transmitted infection: a prediction model in Montreal residents.

    PubMed

    Caron, Melissa; Allard, Robert; Bédard, Lucie; Latreille, Jérôme; Buckeridge, David L

    2016-11-01

The sexual transmission of enteric diseases poses an important public health challenge. We aimed to build a prediction model capable of identifying individuals with a reported enteric disease who could be at risk of acquiring future sexually transmitted infections (STIs). Passive surveillance data on Montreal residents with at least 1 enteric disease report were used to construct the prediction model. Cases were defined as all subjects with at least 1 STI report following their initial enteric disease episode. A final logistic regression prediction model was chosen using forward stepwise selection. The prediction model with the greatest validity included age, sex, residential location, number of STI episodes experienced prior to the first enteric disease episode, type of enteric disease acquired, and an interaction term between age and male sex. This model had an area under the curve of 0.77 and acceptable calibration. A coordinated public health response to the sexual transmission of enteric diseases requires distinguishing cases of enteric diseases transmitted through sexual activity from those transmitted through contaminated food or water. A prediction model can aid public health officials in identifying individuals who may have a higher risk of sexually acquiring a reportable disease. Once identified, these individuals could receive specialized intervention to prevent future infection. The information produced from a prediction model capable of identifying higher risk individuals can be used to guide efforts in investigating and controlling reported cases of enteric diseases and STIs.
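The combination used above, forward stepwise selection of a logistic regression model scored by area under the ROC curve, can be sketched in a few lines. This is a minimal numpy-only sketch under stated assumptions: the feature names and the synthetic data are illustrative, not the Montreal surveillance data, and the fitting routine is plain gradient ascent rather than the authors' statistical software.

```python
import numpy as np

def auc(y_true, scores):
    # ROC AUC via the rank-sum (Mann-Whitney U) statistic.
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fit_logistic(X, y, iters=500, lr=0.1):
    # Plain gradient-ascent fit of a logistic model with an intercept.
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def forward_stepwise(X, y, names, tol=1e-3):
    # Greedy forward selection: at each step add the candidate feature that
    # most improves in-sample AUC; stop when the gain drops below tol.
    chosen, remaining, best_auc = [], list(range(X.shape[1])), 0.5
    while remaining:
        trials = []
        for j in remaining:
            cols = chosen + [j]
            w = fit_logistic(X[:, cols], y)
            Xb = np.hstack([np.ones((len(X), 1)), X[:, cols]])
            trials.append((auc(y, Xb @ w), j))
        a, j = max(trials)
        if a <= best_auc + tol:
            break
        best_auc, chosen = a, chosen + [j]
        remaining.remove(j)
    return [names[j] for j in chosen], best_auc

# Hypothetical predictors standing in for the paper's covariates:
# only "age" is informative here, so it should be selected first.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=400) > 0).astype(float)
feats, best = forward_stepwise(X, y, ["age", "sex", "noise"])
```

In practice the stopping rule would use a held-out set or an information criterion rather than the in-sample AUC gain used in this toy.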

  1. Reexamining Computational Support for Intelligence Analysis: A Functional Design for a Future Capability

    DTIC Science & Technology

    2016-07-14

applicability of the sensor model in the context under consideration. A similar information flow can be considered for obtaining direct reliability of an... Modeling, Bex Concepts Human Intelligence Simulation USE CASES Army: Opns in Megacities, Syrian Civil War Navy: Piracy (NATO, Book), Autonomous ISR...(2007) 6 [25] Bex, F. and Verheij, B., Story Schemes for Argumentation about the Facts of a Crime, Computational Models of Narrative: Papers from the

  2. Evaluation of the 29-km Eta Model for Weather Support to the United States Space Program

    NASA Technical Reports Server (NTRS)

    Manobianco, John; Nutter, Paul

    1997-01-01

The Applied Meteorology Unit (AMU) conducted a year-long evaluation of NCEP's 29-km mesoscale Eta (meso-eta) weather prediction model in order to identify added value to forecast operations in support of the United States space program. The evaluation was stratified over warm and cool seasons and considered both objective and subjective verification methodologies. Objective verification results generally indicate that meso-eta model point forecasts at selected stations exhibit minimal error growth in terms of RMS errors and are reasonably unbiased. Conversely, results from the subjective verification demonstrate that model forecasts of developing weather events such as thunderstorms, sea breezes, and cold fronts are not always as accurate as implied by the seasonal error statistics. Sea-breeze case studies reveal that the model generates a dynamically-consistent thermally direct circulation over the Florida peninsula, although at a larger scale than observed. Thunderstorm verification reveals that the meso-eta model is capable of predicting areas of organized convection, particularly during the late afternoon hours, but is not capable of forecasting individual thunderstorms. Verification of cold fronts during the cool season reveals that the model is capable of forecasting a majority of cold frontal passages through east central Florida to within ±1 h of observed frontal passage.
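The objective verification described above rests on two standard point-forecast statistics, bias (mean error) and RMS error. A minimal sketch of their computation; the numbers in the example are illustrative, not the AMU's data:

```python
import numpy as np

def verify(forecast, observed):
    # Point-forecast verification statistics: the mean error (bias) and
    # the root-mean-square error of forecast minus observation.
    err = np.asarray(forecast, dtype=float) - np.asarray(observed, dtype=float)
    return {"bias": float(err.mean()), "rmse": float(np.sqrt((err ** 2).mean()))}

# Toy example: a forecast that runs 1 unit warm at every station
# has bias = 1.0 and rmse = 1.0.
stats = verify([2.0, 4.0, 6.0], [1.0, 3.0, 5.0])
```

A forecast can be unbiased yet still poor (errors of +3 and -3 average to zero), which is why both statistics, and the subjective event-based verification, are reported together.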

  3. Analysis of Ground Penetrating Radar’s Capability for Detecting Underground Cavities: A Case Study in Japan Cave of Taman Hutan Raya, Bandung

    NASA Astrophysics Data System (ADS)

    Azimmah, Azizatun; Widodo

    2017-04-01

Underground cavity or void detection is essential, especially for building construction. Knowing that a void lies underground, one can assess whether subsidence is likely to be preventable. Ground penetrating radar (GPR) is a high-frequency electromagnetic sounding technique developed to investigate the shallow subsurface using contrasts in dielectric properties. This geophysical method is well suited to detecting and locating voids beneath the surface, especially those at shallow depth. This research focused on how GPR could be implemented as a void detector using model simulation, or forward modelling. The models applied in the forward modelling process were made as similar as possible to the real conditions at the case study location, the Tahura Japan Cave in Bandung, Indonesia. The forward modelling was carried out so that, in the future, the modelling results can serve as references when acquiring real GPR data at the location. We used three models that we considered fairly representative to demonstrate that GPR is capable of detecting and locating voids underneath the ground. The simulations produced a region of distinct amplitude within an otherwise fairly homogeneous background; this region is characterized by an arc shape and is interpreted as air, the key component of voids.
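The dielectric contrast that makes an air-filled void stand out in a radargram can be quantified with two textbook relations: the normal-incidence reflection coefficient between two media and the two-way travel time to a reflector. A minimal sketch; the permittivity and depth values are illustrative assumptions, not measurements from the cave:

```python
import math

C = 0.299792458  # free-space EM wave speed, m/ns

def velocity(eps_r):
    # Wave velocity in a low-loss medium from its relative permittivity.
    return C / math.sqrt(eps_r)

def two_way_time(depth_m, eps_r):
    # Two-way travel time (ns) to a reflector at depth_m in a host medium.
    return 2.0 * depth_m / velocity(eps_r)

def reflection_coefficient(eps1, eps2):
    # Normal-incidence amplitude reflection at a dielectric interface.
    return (math.sqrt(eps1) - math.sqrt(eps2)) / (math.sqrt(eps1) + math.sqrt(eps2))

# A host rock of eps_r ~ 9 over an air cavity (eps_r = 1) gives
# R = (3 - 1) / (3 + 1) = 0.5: a strong reflection, which is why the
# void appears as a high-amplitude arc in the simulated sections.
r_void = reflection_coefficient(9.0, 1.0)
t_void = two_way_time(2.0, 9.0)  # arrival time for a cavity roof at ~2 m
```

The arc (hyperbolic) shape itself arises because the antenna records the slant-range travel time to the cavity from many surface positions, not the dielectric contrast alone.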

  4. Model Validation for Propulsion - On the TFNS and LES Subgrid Models for a Bluff Body Stabilized Flame

    NASA Technical Reports Server (NTRS)

    Wey, Thomas

    2017-01-01

With advances in computational power and the availability of distributed computers, the use of even the most complex turbulent chemical interaction models in combustors, and coupled analysis of combustors and turbines, is now possible and increasingly affordable for realistic geometries. Recent, more stringent emission standards have prompted the development of more fuel-efficient and low-emission combustion systems for aircraft gas turbine applications. NOx emissions tend to increase dramatically with increasing flame temperature, and the major difficulty in modeling the turbulence-chemistry interaction lies in the high non-linearity of the reaction rate expressed in terms of the temperature and species mass fractions. The transport filtered density function (FDF) model and the linear eddy model (LEM), which both use local instantaneous values of the temperature and mass fractions, have been shown to often provide more accurate results for turbulent combustion. In the present work, the time-filtered Navier-Stokes (TFNS) approach, capable of capturing unsteady flow structures important for turbulent mixing in the combustion chamber, and two different subgrid models, LEM-like and EUPDF-like, capable of emulating the major processes occurring in the turbulence-chemistry interaction, are used to perform reacting flow simulations of a selected test case. The selected test case, from the Volvo Validation Rig, was documented by Sjunnesson.

  5. Developments in Coastal Ocean Modeling

    NASA Astrophysics Data System (ADS)

    Allen, J. S.

    2001-12-01

    Capabilities in modeling continental shelf flow fields have improved markedly in the last several years. Progress is being made toward the long term scientific goal of utilizing numerical circulation models to interpolate, or extrapolate, necessarily limited field measurements to provide additional full-field information describing the behavior of, and providing dynamical rationalizations for, complex observed coastal flow. The improvement in modeling capabilities has been due to several factors including an increase in computer power and, importantly, an increase in experience of modelers in formulating relevant numerical experiments and in analyzing model results. We demonstrate present modeling capabilities and limitations by discussion of results from recent studies of shelf circulation off Oregon and northern California (joint work with Newberger, Gan, Oke, Pullen, and Wijesekera). Strong interactions between wind-forced coastal currents and continental shelf topography characterize the flow regimes in these cases. Favorable comparisons of model and measured alongshore currents and other variables provide confidence in the model-produced fields. The dependence of the mesoscale circulation, including upwelling and downwelling fronts and flow instabilities, on the submodel used to parameterize the effects of small scale turbulence, is discussed. Analyses of model results to provide explanations for the observed, but previously unexplained, alongshore variability in the intensity of coastal upwelling, which typically results in colder surface water south of capes, and the observed development in some locations of northward currents near the coast in response to the relaxation of southward winds, are presented.

  6. PathCase-SB architecture and database design

    PubMed Central

    2011-01-01

Background Integration of metabolic pathway resources and regulatory metabolic network models, and deployment of new tools on the integrated platform, can help make systems biology research on regulation in metabolic networks more effective and more efficient. Therefore, the tasks of (a) integrating regulatory metabolic networks and existing models under a single database environment, and (b) building tools to help with modeling and analysis, are desirable and intellectually challenging computational tasks. Description PathCase Systems Biology (PathCase-SB) has been built and released. The PathCase-SB database provides data and an API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools for facilitating the development of kinetic models of biological systems. PathCase-SB aims to integrate data from selected biological data sources on the web (currently, the BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions The PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB is already being used by researchers across the world. PMID:22070889

  7. Leveraging Modeling Approaches: Reaction Networks and Rules

    PubMed Central

    Blinov, Michael L.; Moraru, Ion I.

    2012-01-01

We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high resolution and/or high throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatio-temporal distribution of molecules and complexes, their interaction kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks – the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks. PMID:22161349

  8. Leveraging modeling approaches: reaction networks and rules.

    PubMed

    Blinov, Michael L; Moraru, Ion I

    2012-01-01

We have witnessed an explosive growth in research involving mathematical models and computer simulations of intracellular molecular interactions, ranging from metabolic pathways to signaling and gene regulatory networks. Many software tools have been developed to aid in the study of such biological systems, some of which have a wealth of features for model building and visualization, and powerful capabilities for simulation and data analysis. Novel high-resolution and/or high-throughput experimental techniques have led to an abundance of qualitative and quantitative data related to the spatiotemporal distribution of molecules and complexes, their interaction kinetics, and functional modifications. Based on this information, computational biology researchers are attempting to build larger and more detailed models. However, this has proved to be a major challenge. Traditionally, modeling tools require the explicit specification of all molecular species and interactions in a model, which can quickly become a major limitation in the case of complex networks - the number of ways biomolecules can combine to form multimolecular complexes can be combinatorially large. Recently, a new breed of software tools has been created to address the problems faced when building models marked by combinatorial complexity. These have a different approach for model specification, using reaction rules and species patterns. Here we compare the traditional modeling approach with the new rule-based methods. We make a case for combining the capabilities of conventional simulation software with the unique features and flexibility of a rule-based approach in a single software platform for building models of molecular interaction networks.
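The combinatorial blow-up that motivates rule-based specification can be seen with a toy count: a protein with n independent modification sites has 2**n distinct species under explicit enumeration, while a single site-level rule covers them all. A minimal sketch; the species naming and the rule syntax shown in the comment are illustrative, loosely echoing BioNetGen-style patterns rather than any specific tool's grammar:

```python
from itertools import product

def explicit_species(n_sites):
    # Traditional specification: enumerate every modification state of a
    # protein with n independent sites ("u" = unmodified, "p" = modified),
    # yielding 2**n distinct species, each needing its own reactions.
    return ["".join(state) for state in product("up", repeat=n_sites)]

# Rule-based specification: one pattern-rule such as
#   Kinase + Protein(s~u) -> Kinase + Protein(s~p)
# applies to every species in which some site is still unmodified,
# so the model text stays the same size as n grows.
n_four_sites = len(explicit_species(4))  # 16 species vs. a single rule
```

The gap widens further once binding is included, since complexes multiply the state count again; this is exactly the regime where explicit enumeration becomes impractical.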

  9. Consistency of QSAR models: Correct split of training and test sets, ranking of models and performance parameters.

    PubMed

    Rácz, A; Bajusz, D; Héberger, K

    2015-01-01

    Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
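The sum of ranking differences (SRD) procedure used above can be illustrated in a few lines: rank the models by each candidate performance parameter, rank them by a reference (e.g., the row-average merit), and sum the absolute rank differences; smaller sums indicate parameters more consistent with the reference. A minimal sketch with illustrative numbers, not the paper's QSAR data:

```python
import numpy as np

def srd(values, reference):
    # Sum of Ranking Differences: compare the ranking induced by one
    # performance vector against the ranking of a reference vector.
    rank_v = np.argsort(np.argsort(values))
    rank_r = np.argsort(np.argsort(reference))
    return int(np.abs(rank_v - rank_r).sum())

reference = [0.70, 0.80, 0.90]              # e.g. row-average merit of 3 models
agreeing = srd([0.71, 0.82, 0.88], reference)   # same ordering -> 0
reversed_ = srd([0.95, 0.80, 0.60], reference)  # fully reversed ordering -> 4
```

In the full method, SRD values are additionally validated against the distribution obtained from random rankings, so that "small" has a statistical meaning rather than an ad hoc one.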

  10. Lithium and age of pre-main sequence stars: the case of Parenago 1802

    NASA Astrophysics Data System (ADS)

    Giarrusso, M.; Tognelli, E.; Catanzaro, G.; Degl'Innocenti, S.; Dell'Omodarme, M.; Lamia, L.; Leone, F.; Pizzone, R. G.; Prada Moroni, P. G.; Romano, S.; Spitaleri, C.

    2016-04-01

With the aim of testing the present capability of the stellar surface lithium abundance to provide an age estimate for PMS stars, we analyze the case of the detached, double-lined, eclipsing binary system PAR 1802. For this system, the lithium age has been compared with the theoretical one, estimated by applying a Bayesian analysis method to a large grid of stellar evolutionary models. The models have been computed for several values of chemical composition and mixing length, by means of the FRANEC code updated with the Trojan Horse reaction rates involved in lithium burning.

  11. On the Effect of an Anisotropy-Resolving Subgrid-Scale Model on Turbulent Vortex Motions

    DTIC Science & Technology

    2014-09-19

sense, the model by Abe (2013) can be named the ”stabilized mixed model” (SMM, hereafter). Furthermore, considering the basic concept of the mixed model...with SMM. Further investigations of this extended anisotropic SGS model will be necessary in future studies. 3 Computational Conditions Although the...basic capability of the SMM was validated by application to some test cases (Abe, 2013; Abe 2014), there still remain several points to be fur

  12. Modeling of Aerosols in Post-Combustor Flow Path and Sampling System

    NASA Technical Reports Server (NTRS)

    Wey, Thomas; Liu, Nan-Suey

    2006-01-01

    The development and application of a multi-dimensional capability for modeling and simulation of aviation-sourced particle emissions and their precursors are elucidated. Current focus is on the role of the flow and thermal environments. The cases investigated include a film cooled turbine blade, the first-stage of a high-pressure turbine, the sampling probes, the sampling lines, and a pressure reduction chamber.

  13. On the Capabilities of Using AIRSAR Data in Surface Energy/Water Balance Studies

    NASA Technical Reports Server (NTRS)

    Moreno, Jose F.; Saatchi, Susan S.

    1996-01-01

The capabilities of using remote sensing data, and in particular multifrequency/multipolarization SAR data like AIRSAR, for the retrieval of surface parameters depend considerably on the specificity of each application. The potential and limitations of SAR data in ecological investigations are well known. Because chemistry is a major component in such studies, and because chemical information is almost entirely lacking at the wavelengths of SAR data, the capabilities of using SAR-derived information in such studies are considerably limited. However, in the case of surface energy/water balance studies, the determination of the amount of water content, both in the soil and in the plants, is a major component of all modeling approaches. Since information about water content is present in the SAR signal, the role of SAR data in studies where water content is to be determined becomes clearly predominant. Another situation where the role of SAR data becomes dominant over other remote sensing systems is the case of dense canopies. Because the penetration capability of microwave data is far greater than that of optical data, information about the canopy as a whole, and even the underlying soil, is contained in the SAR data, whereas only the top of the canopy contributes information in the case of optical data. For relatively dense canopies, as demonstrated in this study, such different penetration capabilities yield very different results in terms of, for instance, the derived total canopy water content. However, although these capabilities are well known, so are the limitations. Apart from calibration-related aspects (not considered in this study), and apart from other intrinsic problems (such as image noise and topographic corrections) that also significantly affect the derived results, we concentrate on the problem of extracting information from the data.
Even at this level, methods are still not fully established, especially over vegetation-covered areas. In this paper, an algorithm is described that allows derivation of three fundamental parameters from SAR data: soil moisture, soil roughness, and canopy water content, accounting for the effects of vegetation cover by using optical (Landsat) data as an auxiliary source. Capabilities and limitations of the data and algorithms are discussed, as well as possibilities for using these data in energy/water balance modeling studies. All the data used in this study were acquired as part of the Intensive Observation Period in June-July 1991 (European Multisensor Aircraft Campaign-91), as part of the European Field Experiment in a Desertification-threatened Area (EFEDA), a European contribution to the global-change research sponsored by the IGBP program (Bolle et al., 1993).

  14. ALGE3D: A Three-Dimensional Transport Model

    NASA Astrophysics Data System (ADS)

    Maze, G. M.

    2017-12-01

Of the top 10 most populated US cities from a 2015 US Census Bureau estimate, 7 are situated near the ocean, a bay, or one of the Great Lakes. Contamination of the waterways in the United States could be devastating to the economy (through tourism and industries such as fishing), to public health (from direct contact or contaminated drinking water), and in some cases even to infrastructure (water treatment plants). Emergency response agencies currently employ well-developed national response models for simulating the effects of hazardous contaminants in riverine systems that are primarily driven by one-dimensional flows; however, in more complex systems such as tidal estuaries, bays, or lakes, a more complex model is needed. While many models exist, none are capable of quick deployment in emergency situations that could involve a variety of release scenarios, including a mixture of both particulate and dissolved chemicals in a complex flow area. ALGE3D, developed at the Department of Energy's (DOE) Savannah River National Laboratory (SRNL), is a three-dimensional hydrodynamic code that solves the momentum, mass, and energy conservation equations to predict the movement and dissipation of thermal or dissolved chemical plumes discharged into cooling lakes, rivers, and estuaries. ALGE3D is capable of modeling very complex flows, including areas with tidal flows that involve wetting and drying of land. Recent upgrades have extended its capabilities to include the transport of particulate tracers, allowing more complete modeling of pollutant transport. In addition, the model can couple with a one-dimensional riverine transport model or a two-dimensional atmospheric deposition model in the event that a contamination event occurs upstream or upwind of the water body.
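The conservation equations such a code solves reduce, in one dimension and for a passive dissolved tracer, to the advection-diffusion equation. A minimal explicit finite-difference sketch with periodic boundaries, purely illustrative of the physics and not ALGE3D's actual numerics:

```python
import numpy as np

def advect_diffuse(c, u, D, dx, dt, steps):
    # Explicit upwind advection plus central diffusion of a tracer field c:
    # dc/dt = -u dc/dx + D d2c/dx2, a 1-D caricature of the transport a
    # code like ALGE3D solves in 3-D. Assumes u >= 0 and a stable dt
    # (u*dt/dx <= 1 and D*dt/dx**2 <= 0.5); periodic boundaries via roll.
    c = np.asarray(c, dtype=float).copy()
    for _ in range(steps):
        adv = -u * (c - np.roll(c, 1)) / dx                       # upwind
        dif = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
        c += dt * (adv + dif)
    return c

# A unit spill released at one cell is carried downstream and spreads
# out; the total tracer mass is conserved exactly by this scheme.
c0 = np.zeros(50)
c0[10] = 1.0
c = advect_diffuse(c0, u=1.0, D=0.1, dx=1.0, dt=0.2, steps=20)
```

Real estuarine transport adds the pieces this toy omits: 3-D velocity fields from the momentum equations, settling for particulates, and wetting/drying of cells.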

  15. Anisotropic adaptive mesh generation in two dimensions for CFD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borouchaki, H.; Castro-Diaz, M.J.; George, P.L.

This paper describes the extension of the classical Delaunay method to the case where anisotropic meshes are required, such as in CFD when the modeled physics is strongly directional. The way in which such a mesh generation method can be incorporated in an adaptive CFD loop, as well as the case of multi-criteria adaptation, is discussed. Several concrete application examples are provided to illustrate the capabilities of the proposed method.
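Anisotropic adaptation of this kind measures edge lengths in a Riemannian metric supplied by the error estimator rather than in the Euclidean one; the mesher then aims for unit-length edges in that metric, which produces elements stretched along the direction of least solution variation. A minimal sketch of the length computation; the metric values are illustrative:

```python
import math

def metric_length(p, q, m11, m12, m22):
    # Length of the edge pq measured in the symmetric positive-definite
    # metric M = [[m11, m12], [m12, m22]]: sqrt(e^T M e) with e = q - p.
    ex, ey = q[0] - p[0], q[1] - p[1]
    return math.sqrt(m11 * ex * ex + 2.0 * m12 * ex * ey + m22 * ey * ey)

# The identity metric reproduces Euclidean length; a metric with m11 = 4
# makes x-aligned edges count double, so the mesher refines twice as
# finely across, say, a boundary layer or shock aligned with the y-axis.
euclid = metric_length((0.0, 0.0), (3.0, 4.0), 1.0, 0.0, 1.0)   # 5.0
stretched = metric_length((0.0, 0.0), (1.0, 0.0), 4.0, 0.0, 1.0)  # 2.0
```

In an adaptive loop, the metric field is rebuilt from the current solution (typically from its Hessian) after each solve, and the mesh is regenerated until mesh and solution are mutually consistent.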

  16. Facilitating Employees' and Students' Process towards Nascent Entrepreneurship

    ERIC Educational Resources Information Center

    Hietanen, Lenita

    2015-01-01

    Purpose: The purpose of this paper is to investigate a model for facilitating employees' and full-time, non-business students' entrepreneurial capabilities during their optional entrepreneurship studies at one Finnish Open University. Design/methodology/approach: The case study investigates the course in which transitions from employees or…

  17. Evaluation of Used Fuel Disposition in Clay-Bearing Rock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jové Colón, Carlos F.; Weck, Philippe F.; Sassani, David H.

    2014-08-01

Radioactive waste disposal in shale/argillite rock formations has been widely considered given its desirable isolation properties (low permeability), geochemically reduced conditions, anomalous groundwater pressures, and widespread geologic occurrence. Clay/shale rock formations are characterized by their high content of clay minerals such as smectites and illites, in which diffusive transport and chemisorption phenomena predominate. These attributes, in addition to low permeability, are key to shale's ability to impede radionuclide mobility. Shale host media have been comprehensively studied in international nuclear waste repository programs as part of underground research laboratory (URL) programs in Switzerland, France, Belgium, and Japan. These investigations, in some cases a decade or more long, have produced a large but fundamental body of information spanning from site characterization data (geological, hydrogeological, geochemical, geomechanical) to controlled experiments on the engineered barrier system (EBS) (barrier clay and seal materials). Evaluation of nuclear waste disposal in shale formations in the USA was conducted in the late 1970s and mid-1980s. Most of these studies evaluated the potential for shale to host a nuclear waste repository, but not at the programmatic level of URLs in international repository programs. This report covers various R&D work and capabilities relevant to disposal of heat-generating nuclear waste in shale/argillite media. Integration and cross-fertilization of these capabilities will be utilized in the development and implementation of the shale/argillite reference case planned for FY15. Disposal R&D activities under the UFDC in the past few years have produced state-of-the-art modeling capabilities for coupled Thermal-Hydrological-Mechanical-Chemical (THMC) processes, used fuel degradation (source term), and thermodynamic modeling and database development to evaluate generic disposal concepts.
The THMC models have been developed for a shale repository, leveraging in large part the information garnered in URLs and laboratory data to test and demonstrate model prediction capability and to accurately represent the behavior of the EBS and the natural (barrier) system (NS). In addition, experimental work to improve our understanding of clay barrier interactions and TM couplings at high temperatures is key to evaluating thermal effects resulting from relatively high heat loads from waste, and the extent of sacrificial zones in the EBS. To assess the latter, experiments and modeling approaches have provided important information on the stability and fate of barrier materials under high heat loads. This information is central to the assessment of thermal limits and the implementation of the reference case when constraining EBS properties and the repository layout (e.g., waste package and drift spacing). This report comprises various parts, each describing R&D activities applicable to shale/argillite media: for example, progress made on modeling and experimental approaches to analyze physical and chemical interactions affecting clay in the EBS, NS, and used nuclear fuel (source term) in support of R&D objectives. It also describes the development of a reference case for shale/argillite media. The accomplishments of these activities are summarized as follows: development of a reference case for shale/argillite; investigation of reactive transport and coupled THM processes in the EBS for FY14; an update on experimental activities on buffer/backfill interactions at elevated pressure and temperature; thermodynamic database development (evaluation strategy, modeling tools, first-principles modeling of clay, and sorption database assessment); and the ANL mixed potential model for used fuel degradation, applied to argillite and crystalline rock environments.

  18. A cooperative strategy for parameter estimation in large scale systems biology models.

    PubMed

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases.
The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
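The cooperation mechanism described above can be caricatured in a few lines: several searchers explore in parallel, and periodically the best solution found is broadcast and reseeds all of them. This is a toy numpy sketch under stated assumptions: a plain stochastic hill climber stands in for the real eSS metaheuristic, threads are simulated sequentially, and the test function and parameters are illustrative, not a systems biology cost function.

```python
import numpy as np

def cooperative_search(f, dim, n_threads=4, rounds=10, iters=50, seed=0):
    # Toy analogue of CeSS: each "thread" runs its own stochastic search;
    # after every round, the best solution found so far is shared and
    # perturbed copies of it reseed all threads (the cooperation step).
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(n_threads, dim))
    best_x, best_f = None, np.inf
    for _ in range(rounds):
        for t in range(n_threads):
            x = pop[t]
            for _ in range(iters):                    # simple hill climb
                cand = x + rng.normal(scale=0.3, size=dim)
                if f(cand) < f(x):
                    x = cand
            pop[t] = x
            if f(x) < best_f:
                best_x, best_f = x.copy(), f(x)
        # Cooperation: every thread restarts near the shared incumbent.
        pop[:] = best_x + rng.normal(scale=0.1, size=pop.shape)
    return best_x, best_f

# Minimize a 3-D sphere function, whose global minimum is 0 at the origin.
best_x, best_f = cooperative_search(lambda x: float((x ** 2).sum()), dim=3)
```

In the real CeSS, what is shared is richer than a single point (reference sets and search-direction information), and the threads genuinely run in parallel, which is where the wall-clock speed-up comes from.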

  19. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases.
The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112
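The cooperative scheme described above — parallel search threads that periodically share their best solutions — can be sketched in a few lines. This is only an illustration of the cooperation idea, not the eSS algorithm itself; the cost function, step rule, and thread counts below are invented placeholders:

```python
import random

def cost(x):
    # Rosenbrock test function: global minimum 0 at (1, 1)
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def local_step(x, step, rng):
    # Random perturbation, a crude stand-in for a thread's local search phase
    return [xi + rng.uniform(-step, step) for xi in x]

def cooperative_search(n_threads=4, n_outer=40, n_inner=25, seed=0):
    rng = random.Random(seed)
    # Each "thread" keeps its own incumbent solution
    incumbents = [[rng.uniform(-2, 2), rng.uniform(-2, 2)] for _ in range(n_threads)]
    for _ in range(n_outer):
        for t in range(n_threads):
            x = incumbents[t]
            for _ in range(n_inner):
                cand = local_step(x, step=0.1, rng=rng)
                if cost(cand) < cost(x):  # accept only improvements
                    x = cand
            incumbents[t] = x
        # Cooperation: broadcast the best incumbent to all threads
        best = min(incumbents, key=cost)
        incumbents = [list(best) for _ in incumbents]
    return min(incumbents, key=cost)

best = cooperative_search()
```

In the real CeSS strategy each thread runs a full eSS instance on its own processor and the exchange is asynchronous; here the "threads" are simulated sequentially to keep the sketch self-contained.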

  20. Adaptive neural network motion control for aircraft under uncertainty conditions

    NASA Astrophysics Data System (ADS)

    Efremov, A. V.; Tiaglik, M. S.; Tiumentsev, Yu V.

    2018-02-01

Motion control of modern and advanced aircraft must be provided under diverse uncertainty conditions. This problem can be solved by using adaptive control laws. We analyze the capabilities of these laws for adaptive systems such as MRAC (Model Reference Adaptive Control) and MPC (Model Predictive Control). In the case of a nonlinear control object, the most efficient solution to the adaptive control problem is the use of neural network technologies, which are suitable for developing both a model of the control object and a control law for it. The approximate nature of the ANN model is taken into account by introducing additional compensating feedback into the control system. The capabilities of adaptive control laws under uncertainty in the source data are considered, and simulations are conducted to assess the contribution of adaptivity to the behavior of the system.
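As a toy illustration of the model-reference idea (not the neural-network controller of the paper), the sketch below adapts the gains of a first-order plant with parameters unknown to the controller, driving it toward a first-order reference model with the Lyapunov-rule update. All plant numbers are arbitrary:

```python
import math

def simulate_mrac(a=-1.0, b=2.0, am=-4.0, bm=4.0,
                  gamma=0.5, dt=0.01, t_end=40.0):
    """First-order plant x' = a*x + b*u with (a, b) unknown to the
    controller; reference model xm' = am*xm + bm*r. The control law
    u = th1*r - th2*x is adapted with the Lyapunov rule
    th1' = -gamma*e*r, th2' = gamma*e*x, where e = x - xm."""
    x = xm = th1 = th2 = 0.0
    errors = []
    for k in range(int(t_end / dt)):
        t = k * dt
        r = 1.0 if math.sin(0.5 * t) >= 0.0 else -1.0  # square-wave command
        u = th1 * r - th2 * x
        e = x - xm                                     # tracking error
        th1 -= gamma * e * r * dt                      # gain adaptation
        th2 += gamma * e * x * dt
        x += (a * x + b * u) * dt                      # explicit Euler step
        xm += (am * xm + bm * r) * dt
        errors.append(abs(e))
    return errors, th1, th2

errors, th1, th2 = simulate_mrac()
```

For this plant the ideal gains are th1 = bm/b and th2 = (a - am)/b; the Lyapunov rule guarantees the tracking error decays, so the average error over the last quarter of the run is smaller than over the first.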

  1. Propeller aircraft interior noise model. II - Scale-model and flight-test comparisons

    NASA Technical Reports Server (NTRS)

    Willis, C. M.; Mayes, W. H.

    1987-01-01

A program for predicting the sound levels inside propeller-driven aircraft arising from sidewall transmission of airborne exterior noise is validated through comparisons of predictions with both scale-model test results and measurements obtained in flight tests on a turboprop aircraft. The program produced unbiased predictions for the case of the scale-model tests, with a standard deviation of errors of about 4 dB. For the case of the flight tests, the predictions revealed a bias of 2.62-4.28 dB (depending upon whether or not the data for the fourth harmonic were included), and the standard deviation of the errors ranged between 2.43 and 4.12 dB. The analytical model is shown to be capable of taking changes in the flight environment into account.

  2. Implementation of Flow Tripping Capability in the USM3D Unstructured Flow Solver

    NASA Technical Reports Server (NTRS)

Pandya, Mohagna J.; Abdol-Hamid, Khaled S.; Campbell, Richard L.; Frink, Neal T.

    2006-01-01

A flow tripping capability is added to an established NASA tetrahedral unstructured parallel Navier-Stokes flow solver, USM3D. The capability is based on prescribing an appropriate profile of turbulence model variables to energize the boundary layer in a plane normal to a specified trip region on the body surface. We demonstrate this approach using the k-epsilon two-equation turbulence model of USM3D. Modification to the solution procedure primarily consists of developing a data structure to identify all unstructured tetrahedral grid cells located in the plane normal to a specified surface trip region and computing a function based on the mean flow solution to specify the modified profile of the turbulence model variables. We leverage this data structure and also show an adjunct approach that is based on enforcing a laminar flow condition on the otherwise fully turbulent flow solution in a user-specified region. The latter approach is applied for the solutions obtained using other one- and two-equation turbulence models of USM3D. A key ingredient of the present capability is the use of a graphical user-interface tool PREDISC to define a trip region on the body surface in an existing grid. Verification of the present modifications is demonstrated on three cases, namely, a flat plate, the RAE2822 airfoil, and the DLR F6 wing-fuselage configuration.

  4. SU-E-T-760: Tolerance Design for Site-Specific Range in Proton Patient QA Process Using the Six Sigma Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lah, J; Shin, D; Kim, G

Purpose: To show how tolerance design and tolerancing approaches can be used to predict and improve the site-specific range in the proton patient QA process when implementing Six Sigma. Methods: In this study, patient QA plans were selected according to 6 site-treatment groups: head & neck (94 cases), spine (76 cases), lung (89 cases), liver (53 cases), pancreas (55 cases), and prostate (121 cases), treated between 2007 and 2013. We evaluated a Six Sigma model that determines allowable deviations in design parameters and process variables in patient-specific QA; where possible, the tolerance may be loosened and then customized, if necessary, to meet the functional requirements. The Six Sigma problem-solving methodology is known as DMAIC, whose phases stand for: Define a problem or improvement opportunity, Measure process performance, Analyze the process to determine the root causes of poor performance, Improve the process by fixing root causes, and Control the improved process to hold the gains. Results: The process capability for patient-specific range QA is 0.65 with a tolerance criterion of only ±1 mm. Our results suggested a tolerance level of ±2–3 mm for prostate and liver cases and ±5 mm for lung cases. We found that customizing the tolerance between calculated and measured range reduced patient QA plan failures, and almost all sites had failure rates of less than 1%. The average QA time also improved from 2 hr to less than 1 hr overall, including the planning and converting process, depth-dose measurement, and evaluation. Conclusion: The objective of tolerance design is to achieve optimization beyond that obtained through QA process improvement and statistical analysis, detailing the functions needed to implement a Six Sigma-capable design.
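For reference, the process capability indices used in studies like this can be computed from measured deviations and the tolerance limits: Cp measures the potential capability of a centered process, while Cpk penalizes off-center processes. The sample deviations below are made-up numbers, not the paper's data:

```python
import statistics

def process_capability(samples, lsl, usl):
    """Process capability indices for a two-sided specification:
    Cp  = (USL - LSL) / (6*sigma)              -- potential capability
    Cpk = min(USL - mu, mu - LSL) / (3*sigma)  -- accounts for centering"""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical range deviations (mm) checked against a +/-1 mm tolerance
deviations = [0.1, -0.3, 0.4, 0.2, -0.1, 0.5, -0.2, 0.0, 0.3, -0.4]
cp, cpk = process_capability(deviations, lsl=-1.0, usl=1.0)
```

Loosening the tolerance band (widening LSL/USL), as the paper proposes for lung cases, directly raises both indices for the same measured spread.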

  5. Including resonances in the multiperipheral model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinsky, S.S.; Snider, D.R.; Thomas, G.H.

    1973-10-01

A simple generalization of the multiperipheral model (MPM) and the Mueller-Regge model (MRM) is given which has improved phenomenological capabilities, explicitly incorporating resonance phenomena while remaining simple enough to serve as an important theoretical laboratory. The model is discussed both with and without charge. In addition, the one-, two-, three-, and N-channel cases are explicitly treated. Particular attention is paid to the constraints of charge conservation and positivity in the MRM. The recently proven equivalence between the MRM and the MPM is extended to this model and is used extensively.

  6. Analytical methods for the development of Reynolds stress closures in turbulence

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.

    1990-01-01

Analytical methods for the development of Reynolds stress models in turbulence are reviewed in detail. Zero-, one-, and two-equation models are discussed along with second-order closures. A strong case is made for the superior predictive capabilities of second-order closure models in comparison to the simpler models. The central points are illustrated by examples from both homogeneous and inhomogeneous turbulence. A discussion of the author's views concerning the progress made in Reynolds stress modeling is also provided along with a brief history of the subject.

  7. Transitioning Enhanced Land Surface Initialization and Model Verification Capabilities to the Kenya Meteorological Department (KMD)

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.

    2016-01-01

    Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to represent accurately land surface state variables (green vegetation fraction, soil moisture, and soil temperature) in Numerical Weather Prediction (NWP) models. The NASA SERVIR and the Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land-surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. 
Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated into the KMD-WRF runs, using the product generated by NOAA/NESDIS. Model verification capabilities are also being transitioned to KMD using NCAR's Model Evaluation Tools (MET; Brown et al. 2009) software in conjunction with a SPoRT-developed scripting package, in order to quantify and compare errors in simulated temperature, moisture and precipitation in the experimental WRF model simulations. This extended abstract and accompanying presentation summarize the efforts and training done to date to support this unique regional modeling initiative at KMD. To honor the memory of Dr. Peter J. Lamb and his extensive efforts in bolstering weather and climate science and capacity-building in Africa, we offer this contribution to the special Peter J. Lamb symposium. The remainder of this extended abstract is organized as follows. The collaborating international organizations involved in the project are presented in Section 2. Background information on the unique land surface input datasets is presented in Section 3. The hands-on training sessions from March 2014 and June 2015 are described in Section 4. Sample experimental WRF output and verification from the June 2015 training are given in Section 5. A summary is given in Section 6, followed by Acknowledgements and References. (Corresponding author address: Jonathan Case, ENSCO, Inc., 320 Sparkman Dr., Room 3008, Huntsville, AL 35805; email: Jonathan.Case-1@nasa.gov)

  8. A preliminary study of air-pollution measurement by active remote-sensing techniques

    NASA Technical Reports Server (NTRS)

    Wright, M. L.; Proctor, E. K.; Gasiorek, L. S.; Liston, E. M.

    1975-01-01

    Air pollutants are identified, and the needs for their measurement from satellites and aircraft are discussed. An assessment is made of the properties of these pollutants and of the normal atmosphere, including interactions with light of various wavelengths and the resulting effects on transmission and scattering of optical signals. The possible methods for active remote measurement are described; the relative performance capabilities of double-ended and single-ended systems are compared qualitatively; and the capabilities of the several single-ended or backscattering techniques are compared quantitatively. The differential-absorption lidar (DIAL) technique is shown to be superior to the other backscattering techniques. The lidar system parameters and their relationships to the environmental factors and the properties of pollutants are examined in detail. A computer program that models both the atmosphere (including pollutants) and the lidar system is described. The performance capabilities of present and future lidar components are assessed, and projections are made of prospective measurement capabilities for future lidar systems. Following a discussion of some important operational factors that affect both the design and measurement capabilities of airborne and satellite-based lidar systems, the extensive analytical results obtained through more than 1000 individual cases analyzed with the aid of the computer program are summarized and discussed. The conclusions are presented. Recommendations are also made for additional studies to investigate cases that could not be explored adequately during this study.
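The DIAL technique highlighted above recovers a gas number density from the ratio of backscatter returns at an absorbed ("on") and a nearby reference ("off") wavelength, measured at two ranges; the range-squared and system terms cancel in the four-way ratio. A minimal sketch with made-up cross sections and synthetic returns (arbitrary units, not values from the study):

```python
import math

def dial_density(p_on_r1, p_on_r2, p_off_r1, p_off_r2, d_sigma, d_range):
    """Two-range DIAL estimate of number density:
    n = ln( (P_on(R1)*P_off(R2)) / (P_on(R2)*P_off(R1)) )
        / (2 * d_sigma * d_range)"""
    ratio = (p_on_r1 * p_off_r2) / (p_on_r2 * p_off_r1)
    return math.log(ratio) / (2.0 * d_sigma * d_range)

def ret(sigma, n, r, amp=1.0):
    # Synthetic lidar return P(R) = A/R^2 * exp(-2*sigma*n*R)
    return amp / r**2 * math.exp(-2.0 * sigma * n * r)

sigma_on, sigma_off, n_true = 1.2, 0.2, 0.5   # arbitrary units
r1, r2 = 1.0, 2.0
n_est = dial_density(ret(sigma_on, n_true, r1), ret(sigma_on, n_true, r2),
                     ret(sigma_off, n_true, r1), ret(sigma_off, n_true, r2),
                     d_sigma=sigma_on - sigma_off, d_range=r2 - r1)
```

Because the geometric 1/R^2 factor and the system constant appear in both numerator and denominator of the ratio, the estimate depends only on the differential absorption, which is what makes DIAL superior to single-wavelength backscatter techniques.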

  9. Multidisciplinary model-based-engineering for laser weapon systems: recent progress

    NASA Astrophysics Data System (ADS)

    Coy, Steve; Panthaki, Malcolm

    2013-09-01

We are working to develop a comprehensive, integrated software framework and toolset to support model-based engineering (MBE) of laser weapons systems. MBE has been identified by the Office of the Director, Defense Science and Engineering as one of four potentially "game-changing" technologies that could bring about revolutionary advances across the entire DoD research and development and procurement cycle. To be effective, however, MBE requires robust underlying modeling and simulation technologies capable of modeling all the pertinent systems, subsystems, components, effects, and interactions at any level of fidelity that may be required in order to support crucial design decisions at any point in the system development lifecycle. Very often the greatest technical challenges are posed by systems involving interactions that cut across two or more distinct scientific or engineering domains; even in cases where there are excellent tools available for modeling each individual domain, generally none of these domain-specific tools can be used to model the cross-domain interactions. In the case of laser weapons systems R&D these tools need to be able to support modeling of systems involving combined interactions among structures, thermal, and optical effects, including both ray optics and wave optics, controls, atmospheric effects, target interaction, computational fluid dynamics, and spatiotemporal interactions between lasing light and the laser gain medium. To address this problem we are working to extend Comet™ to add the additional modeling and simulation capabilities required for this particular application area. In this paper we will describe our progress to date.

  10. Advanced Atmospheric Modeling for Emergency Response.

    NASA Astrophysics Data System (ADS)

    Fast, Jerome D.; O'Steen, B. Lance; Addis, Robert P.

    1995-03-01

    Atmospheric transport and diffusion models are an important part of emergency response systems for industrial facilities that have the potential to release significant quantities of toxic or radioactive material into the atmosphere. An advanced atmospheric transport and diffusion modeling system for emergency response and environmental applications, based upon a three-dimensional mesoscale model, has been developed for the U.S. Department of Energy's Savannah River Site so that complex, time-dependent flow fields not explicitly measured can be routinely simulated. To overcome some of the current computational demands of mesoscale models, two operational procedures for the advanced atmospheric transport and diffusion modeling system are described including 1) a semiprognostic calculation to produce high-resolution wind fields for local pollutant transport in the vicinity of the Savannah River Site and 2) a fully prognostic calculation to produce a regional wind field encompassing the southeastern United States for larger-scale pollutant problems. Local and regional observations and large-scale model output are used by the mesoscale model for the initial conditions, lateral boundary conditions, and four-dimensional data assimilation procedure. This paper describes the current status of the modeling system and presents two case studies demonstrating the capabilities of both modes of operation. While the results from the case studies shown in this paper are preliminary and certainly not definitive, they do suggest that the mesoscale model has the potential for improving the prognostic capabilities of atmospheric modeling for emergency response at the Savannah River Site. Long-term model evaluation will be required to determine under what conditions significant forecast errors exist.

  11. Discrete Element Modelling of Floating Debris

    NASA Astrophysics Data System (ADS)

    Mahaffey, Samantha; Liang, Qiuhua; Parkin, Geoff; Large, Andy; Rouainia, Mohamed

    2016-04-01

Flash flooding is characterised by high-velocity flows which impact vulnerable catchments with little warning time and, as such, result in complex flow dynamics which are difficult to replicate through modelling. The impacts of flash flooding can be made yet more severe by the transport of both natural and anthropogenic debris, ranging from tree trunks to vehicles, wheelie bins and even storage containers, the effects of which have been clearly evident during recent UK flooding. This cargo of debris can have wide-reaching effects and result in actual flood impacts which diverge from those predicted. A build-up of debris may lead to partial channel blockage and potential flow rerouting through urban centres. Build-up at bridges and river structures also leads to increased hydraulic loading which may result in damage and possible structural failure. Predicting the impacts of debris transport, however, is difficult, as conventional hydrodynamic modelling schemes do not intrinsically include floating debris within their calculations. Subsequently a new tool has been developed using an emerging approach, which incorporates debris transport through the coupling of two existing modelling techniques. A 1D hydrodynamic modelling scheme has here been coupled with a 2D discrete element scheme to form a new modelling tool which predicts the motion and flow-interaction of floating debris. Hydraulic forces arising from flow around the object are applied to instigate its motion. Likewise, an equivalent opposing force is applied to fluid cells, enabling backwater effects to be simulated. Shock capturing capabilities make the tool applicable to predicting the complex flow dynamics associated with flash flooding. The modelling scheme has been applied to experimental case studies where cylindrical wooden dowels are transported by a dam-break wave. 
These case studies enable validation of the tool's shock capturing capabilities and the coupling technique applied between the two numerical schemes. The results show that the tool is able to adequately replicate water depth and depth-averaged velocity of a dam-break wave, as well as velocity and displacement of floating cylindrical elements, thus validating its shock capturing capabilities and the coupling technique applied for this simple test case. Future development of the tool will incorporate a 2D hydrodynamic scheme and a 3D discrete element scheme in order to model the more complex processes associated with debris transport.
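The fluid-debris coupling described above — a hydraulic force that moves the floating element, with an equal and opposite reaction applied back to the fluid — can be illustrated with a quadratic drag law. The coefficients and the uniform-flow setup below are illustrative stand-ins, not the coupled 1D/2D scheme itself:

```python
def drag_force(rho, cd, area, u_fluid, u_body):
    """Quadratic drag on a floating element from the local flow; in the
    coupled scheme the negative of this force is applied to the fluid
    cell, producing the backwater effect."""
    rel = u_fluid - u_body
    return 0.5 * rho * cd * area * abs(rel) * rel

def advect_debris(u_fluid=2.0, mass=5.0, rho=1000.0, cd=1.1, area=0.05,
                  dt=0.001, t_end=20.0):
    """Explicit time stepping of one debris element released at rest in a
    uniform flow; it accelerates toward the fluid velocity."""
    u_body, x = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        f = drag_force(rho, cd, area, u_fluid, u_body)
        u_body += f / mass * dt   # Newton's second law
        x += u_body * dt          # advect the element
    return u_body, x

u_body, x = advect_debris()
```

Because the drag vanishes as the relative velocity closes, the element asymptotically approaches the fluid velocity without overshooting, which is the behaviour a coupled debris-transport solver must reproduce for a passively floating body.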

  12. A study on the predictability of acute lymphoblastic leukaemia response to treatment using a hybrid oncosimulator.

    PubMed

    Ouzounoglou, Eleftherios; Kolokotroni, Eleni; Stanulla, Martin; Stamatakos, Georgios S

    2018-02-06

Efficient use of Virtual Physiological Human (VPH)-type models for personalized treatment response prediction purposes requires a precise model parameterization. In the case where the available personalized data are not sufficient to fully determine the parameter values, an appropriate prediction task may be followed. In this study, a hybrid combination of computational optimization and machine learning methods with an already developed mechanistic model, the acute lymphoblastic leukaemia (ALL) Oncosimulator, which simulates ALL progression and treatment response, is presented. These methods are used in order for the parameters of the model to be estimated for retrospective cases and to be predicted for prospective ones. The parameter value prediction is based on a regression model trained on retrospective cases. The proposed Hybrid ALL Oncosimulator system has been evaluated when predicting the pre-phase treatment outcome in ALL. This has been correctly achieved for a significant percentage of patient cases tested (approx. 70% of patients). Moreover, the system is capable of denying the classification of cases for which the results are not trustworthy enough. In this way, potentially misleading predictions for a number of patients are avoided, while the classification accuracy for the remaining patient cases further increases. The results obtained are particularly encouraging regarding the soundness of the proposed methodologies and their relevance to the process of achieving clinical applicability of the proposed Hybrid ALL Oncosimulator system and VPH models in general.

  13. Validation of a Three-Dimensional Ablation and Thermal Response Simulation Code

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Milos, Frank S.; Gokcen, Tahir

    2010-01-01

    The 3dFIAT code simulates pyrolysis, ablation, and shape change of thermal protection materials and systems in three dimensions. The governing equations, which include energy conservation, a three-component decomposition model, and a surface energy balance, are solved with a moving grid system to simulate the shape change due to surface recession. This work is the first part of a code validation study for new capabilities that were added to 3dFIAT. These expanded capabilities include a multi-block moving grid system and an orthotropic thermal conductivity model. This paper focuses on conditions with minimal shape change in which the fluid/solid coupling is not necessary. Two groups of test cases of 3dFIAT analyses of Phenolic Impregnated Carbon Ablator in an arc-jet are presented. In the first group, axisymmetric iso-q shaped models are studied to check the accuracy of three-dimensional multi-block grid system. In the second group, similar models with various through-the-thickness conductivity directions are examined. In this group, the material thermal response is three-dimensional, because of the carbon fiber orientation. Predictions from 3dFIAT are presented and compared with arcjet test data. The 3dFIAT predictions agree very well with thermocouple data for both groups of test cases.

  14. Applying PCI in Combination Swivel Head Wrench

    NASA Astrophysics Data System (ADS)

    Chen, Tsang-Chiang; Yang, Chun-Ming; Hsu, Chang-Hsien; Hung, Hsiang-Wen

    2017-09-01

Taiwan’s traditional industries face competition from globalization and environmental change, placing them under economic pressure; to remain sustainable and stabilize their market share, they must continually improve production efficiency and the quality of their technology. This study uses process capability indices to monitor the quality of a combination swivel head (dual-use ratchet) wrench: the key functional dimensions are identified, actual measurement data are collected, the process capability index Cpk is analyzed, and a Process Capability Analysis Chart model is drawn. Finally, the study examines the current situation of this case, identifies shortcomings, and proposes improvement methods to raise overall quality and thereby strengthen the industry as a whole.

  15. A Reduced Order Model for Whole-Chip Thermal Analysis of Microfluidic Lab-on-a-Chip Systems

    PubMed Central

    Wang, Yi; Song, Hongjun; Pant, Kapil

    2013-01-01

This paper presents a Krylov subspace projection-based Reduced Order Model (ROM) for whole microfluidic chip thermal analysis, including conjugate heat transfer. Two key steps in the reduced order modeling procedure are described in detail, including (1) the acquisition of a 3D full-scale computational model in the state-space form to capture the dynamic thermal behavior of the entire microfluidic chip; and (2) the model order reduction using the Block Arnoldi algorithm to markedly lower the dimension of the full-scale model. Case studies using a practically relevant thermal microfluidic chip are undertaken to establish the capability and to evaluate the computational performance of the reduced order modeling technique. The ROM is compared against the full-scale model and exhibits good agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) and over three orders-of-magnitude acceleration in computational speed. The salient model reusability and real-time simulation capability render it amenable for operational optimization and in-line thermal control and management of microfluidic systems and devices. PMID:24443647
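The projection step can be sketched as follows: an Arnoldi process builds an orthonormal Krylov basis V, and the full state-space matrices are Galerkin-projected onto it. The toy 1D heat-conduction chain below stands in for the full-scale thermal model, and this shows the single-vector Arnoldi iteration rather than the Block Arnoldi variant used in the paper:

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis of the Krylov subspace span{b, Ab, ..., A^(m-1)b},
    built with modified Gram-Schmidt orthogonalization."""
    n = b.shape[0]
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

# Toy full-scale model x' = A x + B u: a 1D heat-conduction chain
n, m = 200, 10
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
B = np.zeros(n)
B[0] = 1.0   # heat input at one end

V = arnoldi(A, B, m)
Ar, Br = V.T @ A @ V, V.T @ B   # Galerkin-projected system of dimension m
```

The reduced pair (Ar, Br) has dimension m instead of n, and by construction reproduces the leading Markov parameters of the full model, which is the moment-matching property that Krylov ROMs rely on.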

16. Are metastases from metastases clinically relevant? Computer modelling of cancer spread in a case of hepatocellular carcinoma.

    PubMed

    Bethge, Anja; Schumacher, Udo; Wree, Andreas; Wedemann, Gero

    2012-01-01

Metastasis formation remains an enigmatic process, and one of the main questions recently asked is whether metastases are able to generate further metastases. Different models have been proposed to answer this question; however, their clinical significance remains unclear. Therefore a computer model was developed that permits quantitative comparison of the different models with clinical data and that additionally predicts the outcome of treatment interventions. The computer model is based on a discrete event simulation approach. On the basis of a case from an untreated patient with hepatocellular carcinoma and its multiple metastases in the liver, it was evaluated whether metastases are able to metastasise and, in particular, whether late disseminated tumour cells are still capable of forming metastases. Additionally, the resection of the primary tumour was simulated. The simulation results were compared with clinical data. The simulation results reveal that the number of metastases varies significantly between scenarios where metastases metastasise and scenarios where they do not. In contrast, the total tumour mass is nearly unaffected by the two different modes of metastasis formation. Furthermore, the results provide evidence that metastasis formation is an early event and that late disseminated tumour cells are still capable of forming metastases. Simulations also allow one to estimate how the resection of the primary tumour delays the patient's death. The simulation results indicate that for this particular case of a hepatocellular carcinoma late metastases, i.e., metastases from metastases, are irrelevant in terms of total tumour mass. Hence metastases seeded from metastases are clinically irrelevant in our model system. Only the first metastases seeded from the primary tumour contribute significantly to the tumour burden and thus cause the patient's death.
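The question posed above — does letting metastases seed further metastases change the clinical picture? — can be caricatured with a deterministic toy model: each tumour grows exponentially and accrues seeding "credit" proportional to its size, spawning a new lesion whenever the credit reaches one. All rates, sizes, and times below are invented for illustration and have no clinical meaning:

```python
def simulate(cascade, t_end=15.0, dt=0.01, growth=0.3, seed_rate=0.01):
    """Toy deterministic model of metastatic spread. With cascade=True
    every tumour seeds; with cascade=False only the primary does."""
    tumours = [1.0]   # lesion sizes; index 0 is the primary tumour
    pending = 0.0     # fractional 'credit' toward the next seeded lesion
    for _ in range(int(t_end / dt)):
        for i in range(len(tumours)):
            if cascade or i == 0:
                pending += seed_rate * tumours[i] * dt
            tumours[i] *= 1.0 + growth * dt   # exponential growth
        while pending >= 1.0:                 # spawn new metastases
            tumours.append(1.0)
            pending -= 1.0
    return len(tumours) - 1, sum(tumours)     # (metastasis count, total mass)

n_casc, mass_casc = simulate(cascade=True)
n_prim, mass_prim = simulate(cascade=False)
```

Even this crude sketch reproduces the qualitative finding of the abstract: allowing cascading seeding can only increase the metastasis count, while late-seeded lesions are so small that the total tumour mass barely changes.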

  17. The Dynamical Core Model Intercomparison Project (DCMIP-2016): Results of the Supercell Test Case

    NASA Astrophysics Data System (ADS)

    Zarzycki, C. M.; Reed, K. A.; Jablonowski, C.; Ullrich, P. A.; Kent, J.; Lauritzen, P. H.; Nair, R. D.

    2016-12-01

    The 2016 Dynamical Core Model Intercomparison Project (DCMIP-2016) assesses the modeling techniques for global climate and weather models and was recently held at the National Center for Atmospheric Research (NCAR) in conjunction with a two-week summer school. Over 12 different international modeling groups participated in DCMIP-2016 and focused on the evaluation of the newest non-hydrostatic dynamical core designs for future high-resolution weather and climate models. The paper highlights the results of the third DCMIP-2016 test case, which is an idealized supercell storm on a reduced-radius Earth. The supercell storm test permits the study of a non-hydrostatic moist flow field with strong vertical velocities and associated precipitation. This test assesses the behavior of global modeling systems at extremely high spatial resolution and is used in the development of next-generation numerical weather prediction capabilities. In this regime the effective grid spacing is very similar to the horizontal scale of convective plumes, emphasizing resolved non-hydrostatic dynamics. The supercell test case sheds light on the physics-dynamics interplay and highlights the impact of diffusion on model solutions.

  18. Identifying strengths and weaknesses of Quality Management Unit University of Sumatera Utara software using SCAMPI C

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Amalia, A.; Rahmat, R. F.; Muchtar, M. A.; Siregar, I.

    2018-02-01

Identification of the software maturity level is a technique to determine the quality of software. By identifying the software maturity level, the weaknesses of the software can be observed. As a result, the recommendations might be a reference for future software maintenance and development. This paper discusses the software Capability Level (CL) with a case study of the Quality Management Unit (Unit Manajemen Mutu) of University of Sumatera Utara (UMM-USU). This research utilized the Standard CMMI Appraisal Method for Process Improvement class C (SCAMPI C) model with continuous representation. This model focuses on activities for developing quality products and services. The observation is done in three process areas: Project Planning (PP), Project Monitoring and Control (PMC), and Requirements Management (REQM). According to the measurement of the software capability level for the UMM-USU software, it turns out that the capability level for the observed process areas is in the range of CL1 to CL2. Project Planning (PP) is the only process area that reaches capability level 2; meanwhile, PMC and REQM are still at CL1, i.e., the performed level. This research reveals several weaknesses of the existing UMM-USU software. Therefore, this study proposes several recommendations for UMM-USU to improve the capability level for the observed process areas.

  19. The use of near-infrared photography to image fired bullets and cartridge cases.

    PubMed

    Stein, Darrell; Yu, Jorn Chi Chung

    2013-09-01

    An imaging technique that is capable of reducing glare, reflection, and shadows can greatly assist the process of toolmark comparison. In this work, a camera with near-infrared (near-IR) photographic capabilities was fitted with an IR filter, mounted to a stereomicroscope, and used to capture images of toolmarks on fired bullets and cartridge cases. Fluorescent, white light-emitting diode (LED), and halogen light sources were compared for use with the camera. Test-fired bullets and cartridge cases from different makes and models of firearms were photographed under either near-IR or visible light. In visual comparisons, near-IR images and visible-light images were comparable. The use of near-IR photography did not reveal more detail and could not effectively eliminate the reflections and glare associated with visible-light photography. Near-IR photography showed little advantage in the manual examination of fired evidence when compared with visible-light (regular) photography. © 2013 American Academy of Forensic Sciences.

  20. BASINs and WEPP Climate Assessment Tools (CAT): Case ...

    EPA Pesticide Factsheets

    EPA announced the release of the final report, BASINs and WEPP Climate Assessment Tools (CAT): Case Study Guide to Potential Applications. This report supports application of two recently developed water modeling tools, the Better Assessment Science Integrating point & Non-point Sources (BASINS) and the Water Erosion Prediction Project Climate Assessment Tool (WEPPCAT). The report presents a series of short, illustrative case studies designed to demonstrate the capabilities of these tools for conducting scenario-based assessments of the potential effects of climate change on streamflow and water quality.

  1. Software sensors for biomass concentration in a SSC process using artificial neural networks and support vector machine.

    PubMed

    Acuña, Gonzalo; Ramirez, Cristian; Curilem, Millaray

    2014-01-01

    The lack of sensors for some relevant state variables in fermentation processes can be addressed by developing appropriate software sensors. In this work, NARX-ANN, NARMAX-ANN, NARX-SVM and NARMAX-SVM models are compared when acting as software sensors of biomass concentration for a solid substrate cultivation (SSC) process. Results show that NARMAX-SVM outperforms the other models, with an SMAPE index under 9 for a 20% amplitude noise. In addition, NARMAX models perform better than NARX models under the same noise conditions because of their better predictive capabilities, as they include prediction errors as inputs. In the case of perturbation of the initial conditions of the autoregressive variable, NARX models exhibited better convergence capabilities. This work also confirms that a difficult-to-measure variable, like biomass concentration, can be estimated on-line from easy-to-measure variables like CO₂ and O₂ using an adequate software sensor based on computational intelligence techniques.
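    A minimal sketch of such a software sensor, assuming entirely synthetic signals and using scikit-learn's SVR as a stand-in for the paper's NARX-SVM (one lag each of CO₂, O₂, and the biomass estimate as regressors):

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic SSC-like signals: a CO2 evolution peak drives a toy "true"
# biomass trajectory. None of these data come from the paper.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
co2 = np.exp(-((t - 5.0) ** 2) / 4.0) + 0.01 * rng.standard_normal(t.size)
o2 = 1.0 - 0.5 * co2 + 0.01 * rng.standard_normal(t.size)
biomass = np.cumsum(co2) * (t[1] - t[0])  # hard-to-measure state to estimate

# NARX-style regressors: lagged measurable signals plus the lagged state.
X = np.column_stack([co2[:-1], o2[:-1], biomass[:-1]])
y = biomass[1:]

sensor = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
estimate = sensor.predict(X)
rmse = float(np.sqrt(np.mean((estimate - y) ** 2)))
print(f"training RMSE: {rmse:.3f}")
```

    A NARMAX variant would additionally feed back past prediction errors as inputs, which is what the abstract credits for the better noise robustness.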

  2. Structural Dynamics Modeling of HIRENASD in Support of the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Wieseman, Carol; Chwalowski, Pawel; Heeg, Jennifer; Boucke, Alexander; Castro, Jack

    2013-01-01

    An Aeroelastic Prediction Workshop (AePW) was held in April 2012 using three aeroelasticity case study wind tunnel tests for assessing the capabilities of various codes in making aeroelasticity predictions. One of these case studies was known as the HIRENASD model, which was tested in the European Transonic Wind Tunnel (ETW). This paper summarizes the development of a standardized enhanced analytical HIRENASD structural model for use in the AePW effort. The modifications to the HIRENASD finite element model were validated by comparing modal frequencies, evaluating modal assurance criteria, comparing the leading edge, trailing edge and twist of the wing with experiment, and by performing steady and unsteady CFD analyses for one of the test conditions on the same grid, with identical processing of the results.

  3. Ground-based telescope pointing and tracking optimization using a neural controller.

    PubMed

    Mancini, D; Brescia, M; Schipani, P

    2003-01-01

    Neural network (NN) models have emerged as important components in applications of adaptive control theory. Their generalization capability based on acquired knowledge, together with execution speed and the ability to correlate input stimuli, make NNs an extremely powerful tool for on-line control of complex systems. From a control-system point of view, not only accuracy and speed but also, in some cases, a high level of adaptive capability is required in order to match all working phases of the whole system during its lifetime. This is particularly relevant for a new-generation ground-based telescope control system: strong changes in system speed and instantaneous position-error tolerance are necessary, especially in the case of trajectory disturbances induced by wind shake. The classical control scheme adopted in such systems is based on the proportional integral (PI) filter, already applied and implemented on a large number of new-generation telescopes and considered a standard in this technological environment. In this paper we introduce a new approach, the neural variable structure proportional integral (NVSPI), based on the implementation of a standard multilayer perceptron network in new-generation ground-based Alt-Az telescope control systems. Its main purpose is to improve the adaptive capability of the variable structure proportional integral (VSPI) model, an innovative control scheme recently introduced by the authors [Proc SPIE (1997)] as a modified version of the classical PI control model, in terms of flexibility and accuracy of the dynamic response, also in the presence of wind noise effects. 
The realization of a powerful, well-tested, and validated telescope model simulation system made it possible to directly compare the performance of the two control schemes on simulated tracking trajectories, revealing extremely encouraging results in terms of NVSPI control robustness and reliability.

  4. Application fields for the new Object Management Group (OMG) Standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN) in the perioperative field.

    PubMed

    Wiemuth, M; Junger, D; Leitritz, M A; Neumann, J; Neumuth, T; Burgert, O

    2017-08-01

    Medical processes can be modeled using different methods and notations. Currently used modeling systems like Business Process Model and Notation (BPMN) are not capable of describing highly flexible and variable medical processes in sufficient detail. We combined two modeling approaches, Business Process Management (BPM) and Adaptive Case Management (ACM), to be able to model non-deterministic medical processes, using the new standards Case Management Model and Notation (CMMN) and Decision Management Notation (DMN). First, we explain how CMMN, DMN and BPMN can be used to model non-deterministic medical processes. We then applied this methodology to model 79 cataract operations provided by University Hospital Leipzig, Germany, and four cataract operations provided by University Eye Hospital Tuebingen, Germany. Our model consists of 85 tasks and about 20 decisions in BPMN, and we were able to extend it to cover more complex situations that might appear during an intervention. Effective modeling of the cataract intervention is possible using the combination of BPM and ACM, which makes it possible to depict complex processes with complex decisions and offers a significant advantage for modeling perioperative processes.

  5. Risk Factors Analysis and Death Prediction in Some Life-Threatening Ailments Using Chi-Square Case-Based Reasoning (χ2 CBR) Model.

    PubMed

    Adeniyi, D A; Wei, Z; Yang, Y

    2018-01-30

    A wealth of data is available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model that suitably combines the Chi-square distance measure with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death predictor for some life-threatening ailments using the Chi-square case-based reasoning (χ2 CBR) model. The proposed predictive engine reduces runtime and speeds up the execution process through the use of a critical χ2 distribution value. This work also showcases the development of a novel feature selection method, referred to as the frequent item based rule (FIBR) method, which is used to select the best features for the proposed χ2 CBR model at the preprocessing stage of the predictive procedure. The proposed risk calculator is implemented as an in-house developed PHP program hosted on a XAMPP/Apache HTTP server, with data acquisition and case-base development handled in MySQL. Performance comparisons between our system and the NBY, ED-KNN, ANN, SVM, Random Forest and traditional CBR techniques show that the quality of predictions produced by our system outperforms the baseline methods studied. The results of our experiments show that the precision rate and predictive quality of our system are in most cases equal to or greater than 70%, and that the proposed system executes faster than the baseline methods studied. Therefore, the proposed risk calculator can provide useful, consistent, fast, accurate and efficient risk-level prediction to both patients and physicians at any time, online and on a real-time basis.
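    The retrieval core of a Chi-square CBR system can be sketched as follows. The feature vectors, case base, and risk labels are hypothetical, and the distance is the standard symmetric chi-square form, not necessarily the exact variant used in the paper:

```python
import numpy as np

# Sketch of the retrieval step in a Chi-square case-based reasoner:
# find the stored case nearest to the query and reuse its outcome.
def chi_square_distance(x, y, eps=1e-12):
    # symmetric chi-square distance between two non-negative feature vectors
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

case_base = np.array([
    [120.0,  80.0, 5.5],   # case 0: bp_sys, bp_dia, glucose (hypothetical)
    [180.0, 110.0, 9.0],   # case 1
    [140.0,  90.0, 6.5],   # case 2
])
outcomes = ["low", "high", "medium"]   # hypothetical risk labels

query = np.array([150.0, 95.0, 7.0])
d = [chi_square_distance(query, c) for c in case_base]
best = int(np.argmin(d))
print(outcomes[best])  # → "medium": case 2 is nearest to the query
```

    The paper's critical-χ2-value trick would additionally prune cases whose distance exceeds a chi-square threshold before this nearest-case search, which is where the runtime savings come from.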

  6. Evaluation of the Emergency Response Dose Assessment System(ERDAS)

    NASA Technical Reports Server (NTRS)

    Evans, Randolph J.; Lambert, Winifred C.; Manobianco, John T.; Taylor, Gregory E.; Wheeler, Mark M.; Yersavich, Ann M.

    1996-01-01

    The Emergency Response Dose Assessment System (ERDAS) is a prototype software and hardware system configured to produce routine mesoscale meteorological forecasts and enhanced dispersion estimates on an operational basis for the Kennedy Space Center (KSC)/Cape Canaveral Air Station (CCAS) region. ERDAS provides emergency response guidance to operations at KSC/CCAS in the case of an accidental hazardous material release or an aborted vehicle launch. This report describes the evaluation of ERDAS, including: evaluation of sea breeze predictions, comparison of launch plume location and concentration predictions, a case study of a toxic release, evaluation of model sensitivity to varying input parameters, evaluation of the user interface, assessment of ERDAS's operational capabilities, and a comparison of the ERDAS models to the Ocean Breeze/Dry Gulch diffusion model.

  7. On the consistency of Reynolds stress turbulence closures with hydrodynamic stability theory

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Abid, Ridha; Blaisdell, Gregory A.

    1995-01-01

    The consistency of second-order closure models with results from hydrodynamic stability theory is analyzed for the simplified case of homogeneous turbulence. In a recent study, Speziale, Gatski, and MacGiolla Mhuiris showed that second-order closures are capable of yielding results that are consistent with hydrodynamic stability theory for the case of homogeneous shear flow in a rotating frame. It is demonstrated in this paper that this success is due to the fact that the stability boundaries for rotating homogeneous shear flow are not dependent on the details of the spatial structure of the disturbances. For those instances where they are -- such as in the case of elliptical flows where the instability mechanism is more subtle -- the results are not so favorable. The origins and extent of this modeling problem are examined in detail along with a possible resolution based on rapid distortion theory (RDT) and its implications for turbulence modeling.

  8. 75 FR 68693 - Airworthiness Directives; Airbus Model A380-800 Series Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-09

    ... may lead to a degraded leak detection capability have been reported. In case of hot air leakage, the... inspection in production and on in-service aircraft, a number of OverHeat Detection System (OHDS... could allow undetected leakage of bleed air from the hot engine/auxiliary power unit causing damage to...

  9. Ontological Relations and the Capability Maturity Model Applied in Academia

    ERIC Educational Resources Information Center

    de Oliveira, Jerônimo Moreira; Campoy, Laura Gómez; Vilarino, Lilian

    2015-01-01

    This work presents a new approach to the discovery, identification and connection of ontological elements within the domain of characterization in learning organizations. In particular, the study can be applied to contexts where organizations require planning, logic, balance, and cognition in knowledge creation scenarios, which is the case for the…

  10. A Conceptual Design Model for CBT Development: A NATO Case Study

    ERIC Educational Resources Information Center

    Kok, Ayse

    2014-01-01

    CBT (computer-based training) can benefit from modern multimedia tools combined with network capabilities to overcome the limitations of traditional education. The objective of this paper is focused on CBT development to improve strategic decision-making with regard to the air command and control system for NATO staff in a virtual environment. A conceptual design for…

  11. Eddy Current Influences on the Dynamic Behaviour of Magnetic Suspension Systems

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.; Bloodgood, Dale V.

    1998-01-01

    This report will summarize some results from a multi-year research effort at NASA Langley Research Center aimed at the development of an improved capability for practical modelling of eddy current effects in magnetic suspension systems. Particular attention is paid to large-gap systems, although generic results applicable to both large-gap and small-gap systems are presented. It is shown that eddy currents can significantly affect the dynamic behavior of magnetic suspension systems, but that these effects can be amenable to modelling and measurement. Theoretical frameworks are presented, together with comparisons of computed and experimental data particularly related to the Large Angle Magnetic Suspension Test Fixture at NASA Langley Research Center, and the Annular Suspension and Pointing System at Old Dominion University. In both cases, practical computations are capable of providing reasonable estimates of important performance-related parameters. The most difficult case is seen to be that of eddy currents in highly permeable material, due to the low skin depths. Problems associated with specification of material properties and areas for future research are discussed.

  12. Ball Bearing Analysis with the ORBIS Tool

    NASA Technical Reports Server (NTRS)

    Halpin, Jacob D.

    2016-01-01

    Ball bearing design is critical to the success of aerospace mechanisms. Key bearing performance parameters, such as load capability, stiffness, torque, and life, all depend on accurate determination of the internal load distribution. Hence, a good analytical bearing tool that provides both comprehensive capabilities and reliable results becomes a significant asset to the engineer. This paper introduces the ORBIS bearing tool. A discussion of key modeling assumptions and a technical overview is provided. Numerous validation studies and case studies using the ORBIS tool are presented. All results suggest that the ORBIS code correlates closely with established predictions of bearing internal load distributions, stiffness, deflection and stresses.

  13. Active Piezoelectric Structures for Tip Clearance Management Assessed

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Managing blade tip clearance in turbomachinery stages is critical to developing advanced subsonic propulsion systems. Active casing structures with embedded piezoelectric actuators appear to be a promising solution. They can control static and dynamic tip clearance, compensate for uneven deflections, and accomplish electromechanical coupling at the material level. In addition, they have a compact design. To assess the feasibility of this concept and assist the development of these novel structures, the NASA Lewis Research Center developed in-house computational capabilities for composite structures with piezoelectric actuators and sensors, and subsequently used them to simulate candidate active casing structures. The simulations indicated the potential of active casings to modify the blade tip clearance enough to improve stage efficiency. They also provided valuable design information, such as preliminary actuator configurations (number and location) and the corresponding voltage patterns required to compensate for uneven casing deformations. An active ovalization of a casing with four discrete piezoceramic actuators attached on the outer surface is shown. The center figure shows the predicted radial displacements along the hoop direction that are induced when electrostatic voltage is applied at the piezoceramic actuators. This work, which has demonstrated the capabilities of in-house computational models to analyze and design active casing structures, is expected to contribute toward the development of advanced subsonic engines.

  14. A Methodological Approach for Conducting a Business Case Analysis (BCA) of Zephyr Joint Capability Technology Demonstration (JCTD)

    DTIC Science & Technology

    2008-12-01

    …on Investment (ROI) of the Zephyr system. This is achieved by (1) Developing a model to carry out Business Case Analysis (BCA) of JCTDs, including

  15. Elemental representation and configural mappings: combining elemental and configural theories of associative learning.

    PubMed

    McLaren, I P L; Forrest, C L; McLaren, R P

    2012-09-01

    In this article, we present our first attempt at combining an elemental theory designed to model representation development in an associative system (based on McLaren, Kaye, & Mackintosh, 1989) with a configural theory that models associative learning and memory (McLaren, 1993). After considering the possible advantages of such a combination (and some possible pitfalls), we offer a hybrid model that allows both components to produce the phenomena that they are capable of without introducing unwanted interactions. We then successfully apply the model to a range of phenomena, including latent inhibition, perceptual learning, the Espinet effect, and first- and second-order retrospective revaluation. In some cases, we present new data for comparison with our model's predictions. In all cases, the model replicates the pattern observed in our experimental results. We conclude that this line of development is a promising one for arriving at general theories of associative learning and memory.

  16. Wind-US Code Physical Modeling Improvements to Complement Hypersonic Testing and Evaluation

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Yoder, Dennis A.; Towne, Charles S.; Engblom, William A.; Bhagwandin, Vishal A.; Power, Greg D.; Lankford, Dennis W.; Nelson, Christopher C.

    2009-01-01

    This report gives an overview of physical modeling enhancements to the Wind-US flow solver which were made to improve the capabilities for simulation of hypersonic flows and the reliability of computations to complement hypersonic testing. The improvements include advanced turbulence models, a bypass transition model, a conjugate (or closely coupled to vehicle structure) conduction-convection heat transfer capability, and an upgraded high-speed combustion solver. A Mach 5 shock-wave boundary layer interaction problem is used to investigate the benefits of k-ε and k-ω based explicit algebraic stress turbulence models relative to linear two-equation models. The bypass transition model is validated using data from experiments for incompressible boundary layers and a Mach 7.9 cone flow. The conjugate heat transfer method is validated for a test case involving reacting H2-O2 rocket exhaust over cooled calorimeter panels. A dual-mode scramjet configuration is investigated using both a simplified 1-step kinetics mechanism and an 8-step mechanism. Additionally, variations in the turbulent Prandtl and Schmidt numbers are considered for this scramjet configuration.

  17. Evaluation research on jiangsu green economy development capability: a case study of Xuzhou

    NASA Astrophysics Data System (ADS)

    Sun, Fuhua; Liu, Haiyu; Wang, Zhaoxia

    2018-02-01

    As a national leading province in economic development and a demonstration area for ecological civilization construction, Jiangsu needs a scientific and rational evaluation of its green economy development capability, built on an index system and model; such an evaluation is significant for better grasping its green development condition, implementing the "green" development concept, and promoting a new Jiangsu with "a good economy, a prosperous public, a favourable environment and a civilized society". The paper constructs an evaluation system of green economic development capability based on the factor analysis method, adjusts indexes at all levels through factor analysis, calculates the factor scores, determines the main influencing factors, analyzes the influencing-factor scores, and puts forward corresponding policies according to the practical situation of Jiangsu Province.

  18. Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.

    1992-01-01

    The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
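    The matrix-based step can be sketched in a few lines. The detection probabilities below are illustrative, not from the paper:

```python
import numpy as np

# Sketch of a probabilistic matrix-based evaluation: entry P[i, j] is the
# (assumed) probability that on-line check i detects a fault in component j;
# 0 means the check does not cover that component.
P = np.array([
    [0.9, 0.8, 0.00],
    [0.0, 0.7, 0.95],
])

# Probability that at least one check detects a fault in component j,
# assuming checks fail independently of one another.
detect = 1.0 - np.prod(1.0 - P, axis=0)
print(detect.round(4))
```

    Worst-case deterministic analyses would instead treat each nonzero entry as certain detection, which is why they tend to overstate (or understate) the true fault-location capability that this probabilistic estimate captures.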

  19. Control of Turing patterns and their usage as sensors, memory arrays, and logic gates

    NASA Astrophysics Data System (ADS)

    Muzika, František; Schreiber, Igor

    2013-10-01

    We study a model system of three diffusively coupled reaction cells arranged in a linear array that display Turing patterns with special focus on the case of equal coupling strength for all components. As a suitable model reaction we consider a two-variable core model of glycolysis. Using numerical continuation and bifurcation techniques we analyze the dependence of the system's steady states on varying rate coefficient of the recycling step while the coupling coefficients of the inhibitor and activator are fixed and set at the ratios 100:1, 1:1, and 4:5. We show that stable Turing patterns occur at all three ratios but, as expected, spontaneous transition from the spatially uniform steady state to the spatially nonuniform Turing patterns occurs only in the first case. The other two cases possess multiple Turing patterns, which are stabilized by secondary bifurcations and coexist with stable uniform periodic oscillations. For the 1:1 ratio we examine modular spatiotemporal perturbations, which allow for controllable switching between the uniform oscillations and various Turing patterns. Such modular perturbations are then used to construct chemical computing devices utilizing the multiple Turing patterns. By classifying various responses we propose: (a) a single-input resettable sensor capable of reading certain value of concentration, (b) two-input and three-input memory arrays capable of storing logic information, (c) three-input, three-output logic gates performing combinations of logical functions OR, XOR, AND, and NAND.
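    A minimal numerical sketch of three diffusively coupled two-variable cells follows. Schnakenberg kinetics stand in for the paper's glycolysis core model, and all parameters, including the 100:1 inhibitor-to-activator coupling ratio, are illustrative:

```python
import numpy as np

# Three cells in a linear array; u = activator, v = inhibitor.
# Coupling ratio D_v : D_u = 100 : 1, the regime in which spontaneous
# Turing-type symmetry breaking is expected.
k1, k2 = 0.1, 0.9
Du, Dv = 0.01, 1.0
dt, steps = 0.01, 20000

rng = np.random.default_rng(3)
u = (k1 + k2) * np.ones(3) + 0.05 * rng.standard_normal(3)
v = (k2 / (k1 + k2) ** 2) * np.ones(3) + 0.05 * rng.standard_normal(3)

# Graph Laplacian of a 3-cell open chain: the middle cell talks to both ends.
L = np.array([[-1.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -1.0]])

for _ in range(steps):  # explicit Euler integration
    fu = k1 - u + u ** 2 * v
    fv = k2 - u ** 2 * v
    u = u + dt * (fu + Du * (L @ u))
    v = v + dt * (fv + Dv * (L @ v))

print(u.round(3), v.round(3))
```

    The paper's analysis replaces this brute-force integration with numerical continuation and bifurcation tracking, which is what reveals the coexisting multiple Turing patterns at the 1:1 and 4:5 ratios.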

  20. Development of a GIS-based spill management information system.

    PubMed

    Martin, Paul H; LeBoeuf, Eugene J; Daniel, Edsel B; Dobbins, James P; Abkowitz, Mark D

    2004-08-30

    Spill Management Information System (SMIS) is a geographic information system (GIS)-based decision support system designed to effectively manage the risks associated with accidental or intentional releases of a hazardous material into an inland waterway. SMIS provides critical planning and impact information to emergency responders in anticipation of, or following such an incident. SMIS couples GIS and database management systems (DBMS) with the 2-D surface water model CE-QUAL-W2 Version 3.1 and the air contaminant model Computer-Aided Management of Emergency Operations (CAMEO) while retaining full GIS risk analysis and interpretive capabilities. Live 'real-time' data links are established within the spill management software to utilize current meteorological information and flowrates within the waterway. Capabilities include rapid modification of modeling conditions to allow for immediate scenario analysis and evaluation of 'what-if' scenarios. The functionality of the model is illustrated through a case study of the Cheatham Reach of the Cumberland River near Nashville, TN.

  1. Initial Implementation of Transient VERA-CS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerlach, Andrew; Kochunas, Brendan; Salko, Robert

    In this milestone the capabilities of both CTF and MPACT were extended to perform coupled transient calculations. This required several small changes in MPACT to set up the problems correctly, perform the edits correctly, and call the appropriate CTF interfaces in the right order. For CTF, revisions and corrections to the transient timestepping algorithm were made, as well as the addition of a new interface subroutine to allow MPACT to drive CTF at each timestep. With the modifications completed, the initial coupled capability was demonstrated on some problems used for code verification, a hypothetical small mini-core, and a Watts Bar demonstration problem. For each of these cases the results showed good agreement with the previous MPACT internal TH feedback model that relied on a simplified fuel heat conduction model and simplified coolant treatment. After the pulse the results are notably different, as expected, where the effects of convection of heat to the coolant can be observed. Areas for future work were discussed, including assessment and development of the CTF dynamic fuel deformation and gap conductance models, addition of suitable transient boiling and CHF models for the rapid heating and cooling rates seen in RIAs, additional validation and demonstration work, and areas for improvement to the code input and output capabilities.

  2. High Fidelity Modeling of Turbulent Mixing and Chemical Kinetics Interactions in a Post-Detonation Flow Field

    NASA Astrophysics Data System (ADS)

    Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael

    2015-06-01

    Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first principle models and solving large system of equations on highly-resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time-scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.

  3. Unclassified Computing Capability: User Responses to a Multiprogrammatic and Institutional Computing Questionnaire

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, M; Kissel, L

    2002-01-29

    We are experimenting with a new computing model to be applied to a new computer dedicated to that model. Several LLNL science teams now have computational requirements, evidenced by the mature scientific applications that have been developed over the past five-plus years, that far exceed the capability of the institution's computing resources. Thus, there is increased demand for dedicated, powerful parallel computational systems. Computation can, in the coming year, potentially field a capability system that is low cost because it will be based on a model that employs open source software and because it will use PC (IA32-P4) hardware. This incurs significant computer science risk regarding stability and system features but also presents great opportunity. We believe the risks can be managed, but the existence of risk cannot be ignored. In order to justify the budget for this system, we need to make the case that it serves science and, through serving science, serves the institution. That is the point of the meeting and the White Paper that we are proposing to prepare. The questions are listed and the responses received are in this report.

  4. Influenza forecasting with Google Flu Trends.

    PubMed

    Dugas, Andrea Freyer; Jalalpour, Mehdi; Gel, Yulia; Levin, Scott; Torcaso, Fred; Igusa, Takeru; Rothman, Richard E

    2013-01-01

    We developed a practical influenza forecast model based on real-time, geographically focused, and easy-to-access data, designed to provide individual medical centers with advance warning of the expected number of influenza cases, thus allowing sufficient time to implement interventions. We also evaluated the effects of incorporating a real-time influenza surveillance system, Google Flu Trends, and meteorological and temporal information on forecast accuracy. Forecast models designed to predict one week in advance were developed from weekly counts of confirmed influenza cases over seven seasons (2004-2011) divided into seven training and out-of-sample verification sets. Forecasting procedures using classical Box-Jenkins, generalized linear model (GLM), and generalized linear autoregressive moving average (GARMA) methods were employed to develop the final model and assess the relative contribution of external variables such as Google Flu Trends, meteorological data, and temporal information. A GARMA(3,0) forecast model with a Negative Binomial distribution integrating Google Flu Trends information provided the most accurate influenza case predictions. On average, the model predicts weekly influenza cases during 7 out-of-sample outbreaks within 7 cases for 83% of estimates. Google Flu Trends data was the only source of external information to provide statistically significant forecast improvements over the base model in four of the seven out-of-sample verification sets. Overall, the p-value of adding this external information to the model is 0.0005. The other exogenous variables did not yield a statistically significant improvement in any of the verification sets. Integer-valued autoregression of influenza cases provides a strong base forecast model, which is enhanced by the addition of Google Flu Trends, confirming the predictive capabilities of search-query-based syndromic surveillance. This accessible and flexible forecast model can be used by individual medical centers to provide advance warning of future influenza cases.
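    As a rough illustration of the integer-valued autoregressive base model described above (not the authors' GARMA(3,0) implementation), a third-order autoregression of log-transformed weekly counts can be fit by ordinary least squares. The log1p transform and the lag order of three are the only details carried over from the abstract; everything else is a simplifying assumption.

```python
import numpy as np

def fit_ar3(counts):
    """Fit a simple AR(3) model to a weekly count series by least squares.

    Illustrative stand-in for a GARMA(3,0) fit: we regress log(1 + count)
    on its three previous values (a hypothetical simplification that drops
    the Negative Binomial likelihood and moving-average terms).
    """
    y = np.log1p(np.asarray(counts, dtype=float))
    # Design matrix: intercept plus lags 1, 2, 3 of the log-counts.
    X = np.column_stack([np.ones(len(y) - 3), y[2:-1], y[1:-2], y[:-3]])
    coef, *_ = np.linalg.lstsq(X, y[3:], rcond=None)
    return coef

def forecast_next(counts, coef):
    """One-week-ahead forecast from the three most recent observations."""
    y = np.log1p(np.asarray(counts[-3:], dtype=float))
    log_pred = coef[0] + coef[1] * y[2] + coef[2] * y[1] + coef[3] * y[0]
    return np.expm1(log_pred)
```

    A full GARMA model would add a count-distribution likelihood and moving-average terms; this sketch only shows the autoregressive backbone.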

  5. Optimized production planning model for a multi-plant cultivation system under uncertainty

    NASA Astrophysics Data System (ADS)

    Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng

    2015-02-01

    An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.

  6. Functional Risk Modeling for Lunar Surface Systems

    NASA Technical Reports Server (NTRS)

    Thomson, Fraser; Mathias, Donovan; Go, Susie; Nejad, Hamed

    2010-01-01

    We introduce an approach to risk modeling that we call functional modeling, which we have developed to estimate the capabilities of a lunar base. The functional model tracks the availability of functions provided by systems, in addition to the operational state of those systems' constituent strings. By tracking functions, we are able to identify cases where identical functions are provided by elements (rovers, habitats, etc.) that are connected together on the lunar surface. We credit functional diversity in those cases, and in doing so compute more realistic estimates of operational mode availabilities. The functional modeling approach yields more realistic estimates of the availability of the various operational modes provided to astronauts by the ensemble of surface elements included in a lunar base architecture. By tracking functional availability, the effects of diverse backup, which often exists when two or more independent elements are connected together, are properly accounted for.

  7. Effect of casing yield stress on bomb blast impulse

    NASA Astrophysics Data System (ADS)

    Hutchinson, M. D.

    2012-08-01

    An equation to predict blast effects from cased charges was first proposed by U. Fano in 1944 and revised by E.M. Fisher in 1953 [1]. Fisher's revision provides much better matches to available blast impulse data, but still requires empirical parameter adjustments. A new derivation [2], based on the work of R.W. Gurney [3] and G.I. Taylor [4], has resulted in an equation which nearly matches experimental data. This new analytical model is also capable of being extended, through the incorporation of additional physics, such as the effects of early case fracture, finite casing thickness, casing metal strain energy dissipation, explosive gas escape through casing fractures and the comparative dynamics of blast wave and metal fragment impacts. This paper will focus on the choice of relevant case fracture strain criterion, as it will be shown that this allows the explicit inclusion of the dynamic properties of the explosive and casing metal. It will include a review and critique of the most significant earlier work on this topic, contained in a paper by Hoggatt and Recht [5]. Using this extended analytical model, good matches can readily be made to available free-field blast impulse data, without any empirical adjustments being needed. Further work will be required to apply this model to aluminised and other highly oxygen-deficient explosives.
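    The Gurney analysis [3] the derivation builds on has a standard closed form for the terminal casing velocity of a cylindrical cased charge. The sketch below simply evaluates that textbook formula; the Gurney constant and mass ratio are user-supplied illustrative inputs, and none of the paper's casing-fracture extensions are reproduced here.

```python
import math

def gurney_velocity_cylinder(sqrt_2e, m_over_c):
    """Gurney's formula for the terminal casing velocity of a
    cylindrical charge:

        v = sqrt(2E) * (M/C + 1/2)^(-1/2)

    sqrt_2e is the explosive's Gurney constant sqrt(2E) (m/s) and
    m_over_c the casing-to-charge mass ratio M/C. Values are
    illustrative, not taken from the paper.
    """
    return sqrt_2e / math.sqrt(m_over_c + 0.5)
```

    As expected physically, a heavier casing (larger M/C) yields a lower terminal velocity, and the velocity is bounded above by the Gurney constant itself.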

  8. Army Sustainment Capabilities in Forced Entry Operations: The Impact of Private Contracting on Army Sustainment’s Capabilities to Sustain Forces in Forced Entry Operations

    DTIC Science & Technology

    2012-06-08

    contractors and U.S. Army sustainment capabilities. These two cases suggest a need to maintain the correct balance of military sustainment capabilities with maneuver forces in the U.S. Army. Not achieving this...a renewed focus to down size the U.S. Army. This monograph seeks to warn Army leaders that finding a correct balance between readiness to respond to

  9. Parallel Unsteady Turbopump Simulations for Liquid Rocket Engines

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan; Chan, William

    2000-01-01

    This paper reports the progress being made towards complete turbo-pump simulation capability for liquid rocket engines. Space Shuttle Main Engine (SSME) turbo-pump impeller is used as a test case for the performance evaluation of the MPI and hybrid MPI/Open-MP versions of the INS3D code. Then, a computational model of a turbo-pump has been developed for the shuttle upgrade program. Relative motion of the grid system for rotor-stator interaction was obtained by employing overset grid techniques. Time-accuracy of the scheme has been evaluated by using simple test cases. Unsteady computations for SSME turbo-pump, which contains 136 zones with 35 Million grid points, are currently underway on Origin 2000 systems at NASA Ames Research Center. Results from time-accurate simulations with moving boundary capability, and the performance of the parallel versions of the code will be presented in the final paper.

  10. Technical and economic assessment of processes for the production of butanol and acetone

    NASA Technical Reports Server (NTRS)

    1982-01-01

    This report represents a preliminary technical and economic evaluation of a process which produces mixed solvents (butanol/acetone/ethanol) via fermentation of sugars derived from renewable biomass resources. The objective is to assess the technology of producing butanol/acetone from biomass and select a viable process capable of serving as a base case model for technical and economic analysis. It is anticipated that the base case process developed herein can then be used as the basis for subsequent studies concerning biomass conversion processes capable of producing a wide range of chemicals. The general criteria utilized in determining the design basis for the process are profit potential and non-renewable energy displacement potential. The feedstock chosen, aspen wood, was selected from a number of potential renewable biomass resources as the most readily available in the United States and for its relatively large potential for producing reducing sugars.

  11. Visualizing and modelling complex rockfall slopes using game-engine hosted models

    NASA Astrophysics Data System (ADS)

    Ondercin, Matthew; Hutchinson, D. Jean; Harrap, Rob

    2015-04-01

    Innovations in computing in the past few decades have resulted in entirely new ways to collect and visualize 3D geological data. For example, new tools and techniques relying on high performance computing capabilities have become widely available, allowing us to model rockfalls with more attention to the complexity of the rock slope geometry and rockfall path, with significantly higher quality base data, and with more analytical options. Model results are used to design mitigation solutions, considering the potential paths of the rockfall events and the energy they impart on impacted structures. Such models are currently implemented as general-purpose GIS tools and in specialized programs. These tools are used to inspect geometrical and geomechanical data, model rockfalls, and communicate results to researchers and the larger community. The research reported here explores the notion that 3D game engines provide a high speed, widely accessible platform on which to build rockfall modelling workflows and to provide a new and accessible outreach method. Taking advantage of the in-built physics capability of 3D game engines and their ability to handle large terrains, these models are rapidly deployed and generate realistic visualizations of rockfall trajectories. Their utility in this area is as yet unproven, but preliminary research shows that they are capable of producing results that are comparable to existing approaches. Furthermore, modelling of case histories shows that the output matches the behaviour that is observed in the field. The key advantage of game-engine hosted models is their accessibility to the general public and to people with little to no knowledge of rockfall hazards. With much of the younger generation being very familiar with 3D environments such as Minecraft, the idea of a game-like simulation is intuitive and thus offers new ways to communicate to the general public.
    We present results from using the Unity game engine to develop 3D voxel worlds and terrain models from detailed LiDAR and photogrammetric data obtained at a complex slope above a railway corridor in British Columbia, Canada. The data was collected with sufficient frequency that single-event rockfall paths were detectable, permitting the impact points and the final resting spots to be determined using LiDAR change detection methods. These specific case histories, including the high resolution, detailed slope geometry from the LiDAR data sets, were modelled using game engines as well as conventional GIS-based and specialized rockfall modelling packages. The game engine results compare favourably with, and in some cases outperform, conventional tools in terms of rockfall trajectory and slope accuracy, physical realism, data handling capacity, and performance.

  12. The Effect of Teachers' Shared Leadership Perception on Academic Optimism and Organizational Citizenship Behaviour: A Turkish Case

    ERIC Educational Resources Information Center

    Akin Kösterelioglu, Meltem

    2017-01-01

    Purpose: The present study investigates the capability of high school teachers' shared leadership perception to predict academic optimism and organizational citizenship levels. Research methods: The population of the current descriptive study, which was conducted via a screening model, consists of 321 high school teachers working for Amasya…

  13. Motivating Students to Offer Their Best: Evidence Based Effective Course Design

    ERIC Educational Resources Information Center

    Stearns, Susan A.

    2013-01-01

    Sometimes we question whether students are capable or incapable and/or willing or unwilling with regard to their academics. This study determined where students lie with regard to these concepts and showed one example of motivating students to do their best via course design, in this particular case by the use of a writing process model.

  14. Common world model for unmanned systems: Phase 2

    NASA Astrophysics Data System (ADS)

    Dean, Robert M. S.; Oh, Jean; Vinokurov, Jerry

    2014-06-01

    The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state of the art by representing the world using semantic and symbolic as well as metric information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric algorithms, symbolic cognitive algorithms, and new computational nodes formed by the combination of these disciplines to address Symbol Grounding and Uncertainty. The Common World Model must understand how these objects relate to each other. It includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and their histories, we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model also includes models of how entities in the environment behave, which enable prediction of future world states. To manage complexity, we have adopted a phased implementation approach. Phase 1, published in these proceedings in 2013 [1], presented the approach for linking metric with symbolic information and interfaces for traditional planners and cognitive reasoning. Here we discuss the design of "Phase 2" of this world model, which extends the Phase 1 design API and data structures, and reviews the use of the Common World Model as part of a semantic navigation use case.

  15. Making the Case for Reusable Booster Systems: The Operations Perspective

    NASA Technical Reports Server (NTRS)

    Zapata, Edgar

    2012-01-01

    Presentation to the Aeronautics Space Engineering Board National Research Council Reusable Booster System: Review and Assessment Committee. Addresses: the criteria and assumptions used in the formulation of current RBS plans; the methodologies used in the current cost estimates for RBS; the modeling methodology used to frame the business case for an RBS capability including: the data used in the analysis, the models' robustness if new data become available, and the impact of unclassified government data that was previously unavailable and which will be supplied by the USAF; the technical maturity of key elements critical to RBS implementation and the ability of current technology development plans to meet technical readiness milestones.

  16. A new approach to predict soil temperature under vegetated surfaces.

    PubMed

    Dolschak, Klaus; Gartner, Karl; Berger, Torsten W

    2015-12-01

    In this article, the setup and application of an empirical model, based on Newton's law of cooling and capable of predicting daily mean soil temperature (T_soil) under vegetated surfaces, is described. The only input variable necessary to run the model is a time series of daily mean air temperature. The simulator employs 9 empirical parameters, which were estimated by inverse modeling. The model, which primarily addresses forested sites, incorporates the effect of snow cover and soil freezing on soil temperature. The model was applied to several temperate forest sites spanning Central Europe (Austria) and the United States (Harvard Forest, Massachusetts; Hubbard Brook, New Hampshire), aiming to cover a broad range of site characteristics. Investigated stands differ fundamentally in stand composition, elevation, exposition, annual mean temperature, precipitation regime, as well as in the duration of winter snow cover. Finally, to explore the limits of the formulation, the simulator was applied to non-forest sites (Illinois), where soil temperature was recorded under short cut grass. The model was parameterized specifically to site and measurement depth. After calibration of the model, an evaluation was performed using ~50 % of the available data. In each case, the simulator was capable of delivering a feasible prediction of soil temperature in the validation time interval. To evaluate the practical suitability of the simulator, the minimum number of soil temperature point measurements necessary to yield expedient model performance was determined. In the investigated case, 13-20 point observations, uniformly distributed within an 11-year timeframe, proved sufficient to yield sound model performance (root mean square error <0.9 °C, Nash-Sutcliffe efficiency >0.97). This makes the model suitable for application at sites where the information on soil temperature is discontinuous or scarce.
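    The core of such a model is the Newton's-law-of-cooling recursion, in which soil temperature relaxes toward air temperature at some rate. The sketch below shows only that one-parameter core; the coupling rate k is a hypothetical placeholder, and the paper's full formulation (9 empirical parameters plus snow-cover and soil-freezing terms) is not reproduced.

```python
def simulate_soil_temperature(t_air, k=0.1, t0=None):
    """Minimal Newton's-law-of-cooling sketch of daily mean soil
    temperature driven by daily mean air temperature:

        t_soil[i+1] = t_soil[i] + k * (t_air[i+1] - t_soil[i])

    k is a hypothetical coupling rate; the published model additionally
    handles snow cover and soil freezing, which are omitted here.
    """
    t_soil = [t_air[0] if t0 is None else t0]
    for ta in t_air[1:]:
        t_soil.append(t_soil[-1] + k * (ta - t_soil[-1]))
    return t_soil
```

    With constant air temperature the simulated soil temperature relaxes exponentially toward it, which is the qualitative behaviour the empirical parameters are tuned to reproduce at each site and depth.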

  17. Turbofan Duct Propagation Model

    NASA Technical Reports Server (NTRS)

    Lan, Justin H.; Posey, Joe W. (Technical Monitor)

    2001-01-01

    The CDUCT code utilizes a parabolic approximation to the convected Helmholtz equation in order to efficiently model acoustic propagation in acoustically treated, complex shaped ducts. The parabolic approximation solves one-way wave propagation with a marching method which neglects backward reflected waves. The derivation of the parabolic approximation is presented. Several code validation cases are given. An acoustic lining design process for an example aft fan duct is discussed. It is noted that the method can efficiently model realistic three-dimensional effects, acoustic lining, and flow within the computational capabilities of a typical computer workstation.

  18. Validating agent oriented methodology (AOM) for netlogo modelling and simulation

    NASA Astrophysics Data System (ADS)

    WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan

    2017-10-01

    AOM (Agent Oriented Modeling) is a comprehensive and unified agent methodology for agent oriented software development. The AOM methodology was proposed to aid developers with techniques, terminology, notation and guidelines during agent system development. Although the AOM methodology is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and the adoption of AOM is still in its infancy. Among the reasons is that there are not many case studies or success stories for AOM. This paper presents two case studies on the adoption of AOM for individual based modelling and simulation. It demonstrates how AOM is useful for epidemiology and ecological studies, and hence further validates AOM in a qualitative manner.

  19. 76 FR 71341 - BASINS and WEPP Climate Assessment Tools: Case Study Guide to Potential Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-17

    ... change impacts on water. This report presents a series of short case studies using the BASINS and WEPP climate assessment tools. The case studies are designed to illustrate the capabilities of these tools for...

  20. Reflexion on linear regression trip production modelling method for ensuring good model quality

    NASA Astrophysics Data System (ADS)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model still has to be used, for which having a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be capable of representing the population characteristics and of producing an acceptable error at a certain confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. Therefore, it is necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method for ensuring model quality. The results are as follows. Statistics provides a method to calculate the span of a predicted value at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that sample composition can significantly change the model. Hence, a good R2 value does not always mean good model quality. This leads to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
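    The confidence interval of a predicted value for simple linear regression can be sketched with the textbook prediction-interval formula. This is not code from the paper; the interval form below (with the leading 1 inside the square root, covering a new individual observation) is one standard convention.

```python
import numpy as np
from scipy import stats

def prediction_interval(x, y, x_new, alpha=0.05):
    """Prediction interval for a new observation at x_new under simple
    linear regression, at confidence level 1 - alpha.

    Returns (y_hat, lower, upper). Textbook formula, shown for
    illustration of the "confidence interval of the predicted value".
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)            # slope, intercept
    resid = y - (b0 + b1 * x)
    s = np.sqrt(resid @ resid / (n - 2))    # residual standard error
    sxx = ((x - x.mean()) ** 2).sum()
    # Standard error of a new single observation at x_new.
    se = s * np.sqrt(1.0 + 1.0 / n + (x_new - x.mean()) ** 2 / sxx)
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    y_hat = b0 + b1 * x_new
    return y_hat, y_hat - t * se, y_hat + t * se
```

    The interval widens as x_new moves away from the sample mean and as the sample shrinks, which is exactly why a high R2 from a small or unrepresentative sample can mask poor predictive quality.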

  1. New single-aircraft integrated atmospheric observation capabilities

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2011-12-01

    Improving current weather and climate model capabilities requires a better understanding of many atmospheric processes. Thus, advancing atmospheric observation capabilities has been regarded as one of the highest imperatives for advancing atmospheric science in the 21st century. Under NSF CAREER support, we focus on developing new airborne observation capabilities through the development of new instrumentation and the single-aircraft integration of multiple remote sensors with in situ probes. Two compact Wyoming cloud lidars were built to work together with a 183 GHz microwave radiometer, a multi-beam Wyoming cloud radar, and in situ probes for cloud studies. The synergy of these remote sensor measurements allows us to better resolve the vertical structure of cloud microphysical properties and cloud scale dynamics. Together with detailed in situ data for aerosol, cloud, water vapor and dynamics, we developed the most advanced observational capability to study cloud-scale properties and processes from a single aircraft (Fig. 1). A compact Raman lidar was also built to work together with in situ sampling to characterize boundary layer aerosol and water vapor distributions for studies of many important atmospheric processes, such as air-sea interaction and convective initialization. Case studies will be presented to illustrate these new observation capabilities.

  2. Decompression models: review, relevance and validation capabilities.

    PubMed

    Hugon, J

    2014-01-01

    For more than a century, several types of mathematical models have been proposed to describe tissue desaturation mechanisms in order to limit decompression sickness. These models are statistically assessed against DCS cases and, over time, have gradually incorporated the biophysics of bubble formation. This paper proposes to review this evolution and discuss its limitations. The review is organized around the comparison of decompression model biophysical criteria and theoretical foundations. Then, the DCS-predictive capability was analyzed to assess whether it could be improved by combining different approaches. Most of the operational decompression models have a neo-Haldanian form. Nevertheless, bubble modeling has been gaining popularity, and the circulating bubble amount has become a major output. By merging both views, it seems possible to build a relevant global decompression model that intends to simulate bubble production while predicting DCS risks for all types of exposures and decompression profiles. A statistical approach combining both DCS and bubble detection databases has to be developed to calibrate a global decompression model. Doppler ultrasound and DCS data are essential: i. to make correlation and validation phases reliable; ii. to adjust biophysical criteria to best fit the observed bubble kinetics; and iii. to build a relevant risk function.
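    The neo-Haldanian form mentioned above classically describes inert-gas loading of each tissue compartment as an exponential relaxation toward ambient pressure. A minimal sketch of that equation follows; the pressures and compartment half-time used are illustrative, not values from the review.

```python
import math

def tissue_pressure(p0, p_amb, half_time_min, t_min):
    """Classic (neo-)Haldanian tissue inert-gas loading:

        P(t) = P_amb + (P0 - P_amb) * exp(-k t),   k = ln 2 / half-time

    p0 and p_amb are the initial tissue and ambient inert-gas pressures
    (e.g. in bar); half_time_min is the compartment half-time in minutes.
    """
    k = math.log(2) / half_time_min
    return p_amb + (p0 - p_amb) * math.exp(-k * t_min)
```

    Operational models track several such compartments with different half-times and compare each against an allowed supersaturation limit; bubble models replace or augment this with explicit bubble dynamics, as the review discusses.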

  3. STAGS Example Problems Manual

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Rankin, Charles C.

    2006-01-01

    This document summarizes the STructural Analysis of General Shells (STAGS) development effort, STAGS performance for selected demonstration problems, and STAGS application problems illustrating selected advanced features available in STAGS Version 5.0. Each problem is discussed, including selected background information and reference solutions when available. The modeling and solution approach for each problem is described and illustrated. Numerical results are presented and compared with reference solutions, test data, and/or results obtained from mesh refinement studies. These solutions provide an indication of the overall capabilities of the STAGS nonlinear finite element analysis tool and provide users with representative cases, including input files, to explore capabilities that may then be tailored to other applications.

  4. ALEX neutral beam probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pourrezaei, K.

    1982-01-01

    A neutral beam probe capable of measuring plasma space potential in a fully 3-dimensional magnetic field geometry has been developed. This neutral beam was successfully used to measure an arc target plasma contained within the ALEX baseball magnetic coil. A computer simulation of the experiment was performed to refine the experimental design and to develop a numerical model for scaling the ALEX neutral beam probe to other cases of fully 3-dimensional magnetic fields. Based on this scaling, a 30 to 50 keV neutral cesium beam probe capable of measuring space potential in the thermal barrier region of TMX Upgrade was designed.

  5. Layerwise Finite Elements for Smart Piezoceramic Composite Plates in Thermal Environments

    NASA Technical Reports Server (NTRS)

    Saravanos, Dimitris A.; Lee, Ho-Jun

    1996-01-01

    Analytical formulations are presented which account for the coupled mechanical, electrical, and thermal response of piezoelectric composite laminates and plate structures. A layerwise theory is formulated with the inherent capability to explicitly model the active and sensory response of piezoelectric composite plates having arbitrary laminate configurations in thermal environments. Finite element equations are derived and implemented for a bilinear 4-noded plate element. Application cases demonstrate the capability to manage thermally induced bending and twisting deformations in symmetric and antisymmetric composite plates with piezoelectric actuators, and show the corresponding electrical response of distributed piezoelectric sensors. Finally, the resultant stresses in the thermal piezoelectric composite laminates are investigated.

  6. A case study of development and application of a streamlined control and response modeling system for PM2.5 attainment assessment in China.

    PubMed

    Long, Shicheng; Zhu, Yun; Jang, Carey; Lin, Che-Jen; Wang, Shuxiao; Zhao, Bin; Gao, Jian; Deng, Shuang; Xie, Junping; Qiu, Xuezhen

    2016-03-01

    This article describes the development and application of a streamlined air control and response modeling system with a novel response surface modeling-linear coupled fitting method and a new module to provide streamlined model data for PM2.5 attainment assessment in China. This method is capable of significantly reducing the dimensions required to establish a response surface model, as well as capturing a more realistic response of PM2.5 to emission changes with a limited number of model simulations. The newly developed module establishes a data link between the system and the Software for Model Attainment Test-Community Edition (SMAT-CE), and has the ability to rapidly provide model responses to emission control scenarios for SMAT-CE using a simple interface. The performance of this streamlined system is demonstrated through a case study of the Yangtze River Delta (YRD) in China. Our results show that this system is capable of reproducing the Community Multi-Scale Air Quality (CMAQ) model simulation results with a maximum mean normalized error < 3.5%. It is also demonstrated that primary emissions make a major contribution to ambient levels of PM2.5 in January and August (e.g., more than 50% contributed by primary emissions in Shanghai), and that Shanghai needs regional emission control both locally and in its neighboring provinces to meet China's annual PM2.5 National Ambient Air Quality Standard. The streamlined system provides a real-time control/response assessment to identify the contributions of major emission sources to ambient PM2.5 (and potentially O3 as well) and streamlines air quality data for SMAT-CE to perform attainment assessments.

  7. Enhanced TCAS 2/CDTI traffic Sensor digital simulation model and program description

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1984-01-01

    Digital simulation models of enhanced TCAS 2/CDTI traffic sensors are developed, based on actual or projected operational and performance characteristics. Two enhanced Traffic (or Threat) Alert and Collision Avoidance Systems are considered. A digital simulation program is developed in FORTRAN. The program contains an executive with a semireal time batch processing capability. The simulation program can be interfaced with other modules with minimal requirements. Both the traffic sensor and CAS logic modules are validated by means of extensive simulation runs. Selected validation cases are discussed in detail, and capabilities and limitations of the actual and simulated systems are noted. The TCAS systems are not specifically intended for Cockpit Display of Traffic Information (CDTI) applications. These systems are sufficiently general to allow implementation of CDTI functions within the real systems' constraints.

  8. Software Architecture: Managing Design for Achieving Warfighter Capability

    DTIC Science & Technology

    2007-04-30

    The Government’s requirements and specifications for a new weapon...at the Preliminary Design Review (PDR) is likely to have a much higher probability of meeting the warfighters’ need for capability. Test-case...inventories of test cases are developed from the user-defined scenarios so that there is at least one test case for every scenario. The test cases will

  9. The point-spread function measure of resolution for the 3-D electrical resistivity experiment

    NASA Astrophysics Data System (ADS)

    Oldenborger, Greg A.; Routh, Partha S.

    2009-02-01

    The solution appraisal component of the inverse problem involves investigation of the relationship between our estimated model and the actual model. However, full appraisal is difficult for large 3-D problems such as electrical resistivity tomography (ERT). We tackle the appraisal problem for 3-D ERT via the point-spread functions (PSFs) of the linearized resolution matrix. The PSFs represent the impulse response of the inverse solution and quantify our parameter-specific resolving capability. We implement an iterative least-squares solution of the PSF for the ERT experiment, using on-the-fly calculation of the sensitivity via an adjoint integral equation with stored Green's functions and subgrid reduction. For a synthetic example, analysis of individual PSFs demonstrates the truly 3-D character of the resolution. The PSFs for the ERT experiment are Gaussian-like in shape, with directional asymmetry and significant off-diagonal features. Computation of attributes representative of the blurring and localization of the PSF reveals significant spatial dependence of the resolution, with some correlation to the electrode infrastructure. Application to a time-lapse ground-water monitoring experiment demonstrates the utility of the PSF for assessing feature discrimination, predicting artefacts and identifying model dependence of resolution. For a judicious selection of model parameters, we analyse the PSFs and their attributes to quantify the case-specific localized resolving capability and its variability over regions of interest. We observe approximate interborehole resolving capability of less than 1-1.5 m in the vertical direction and less than 1-2.5 m in the horizontal direction. Resolving capability deteriorates significantly outside the electrode infrastructure.
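    For a damped linear inverse problem, a PSF is simply a column of the model resolution matrix. A small dense sketch of that idea follows; the paper instead uses an iterative, matrix-free solution with adjoint-computed sensitivities, so the sensitivity matrix G and damping lam here are hypothetical stand-ins.

```python
import numpy as np

def point_spread_function(G, lam, j):
    """Column j of the linearized model resolution matrix for a damped
    least-squares inverse:

        R = (G^T G + lam I)^{-1} G^T G

    A perfectly resolved parameter j gives a unit impulse e_j; blurring
    and off-diagonal leakage in the returned column indicate limited,
    parameter-specific resolving capability.
    """
    GtG = G.T @ G
    R = np.linalg.solve(GtG + lam * np.eye(GtG.shape[0]), GtG)
    return R[:, j]
```

    Attributes such as the spread of this column around entry j, or the size of its off-diagonal mass, are the kind of blurring and localization measures the abstract describes.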

  10. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent Base Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper rely on computational algorithms and procedures implemented in Matlab to simulate agent based models, run in parallel on computing clusters that provide high performance computation. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
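    Since the abstract defines a model as structures plus processes assumed to underlie network behavior, a toy synchronous agent update on a network can make that definition concrete. The Python sketch below is purely illustrative: the paper's Matlab rules for inventory networks are not given in the abstract, so the local averaging rule here is invented for the example.

```python
def step(states, adjacency):
    """One synchronous agent-based update on a network: each agent moves
    its state halfway toward the mean of its neighbours' states.

    states: list of agent state values; adjacency: dict mapping agent
    index to a list of neighbour indices. The 0.5 relaxation factor and
    the rule itself are hypothetical choices for illustration.
    """
    new = []
    for i, s in enumerate(states):
        nbrs = [states[j] for j in adjacency[i]]
        new.append(s + 0.5 * (sum(nbrs) / len(nbrs) - s))
    return new
```

    Iterating such a step drives the network toward consensus; in an inventory setting the state and rule would instead encode stock levels and reorder behaviour.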

  11. a Study on Satellite Diagnostic Expert Systems Using Case-Based Approach

    NASA Astrophysics Data System (ADS)

    Park, Young-Tack; Kim, Jae-Hoon; Park, Hyun-Soo

    1997-06-01

    As the complexity and number of satellites increase, much research is under way to monitor and diagnose their diverse malfunctions. Currently, much of this monitoring and diagnosis is carried out by human experts, but there is a need to automate their routine work, thereby allowing the experts to devote their expertise to the more critical and important areas of monitoring and diagnosis. In this paper, we employ artificial intelligence techniques to model human experts' knowledge and to reason over the constructed knowledge base. In particular, case-based approaches are used to construct a knowledge base that models human expert capabilities using previous typical exemplars. We have designed and implemented a prototype case-based system for diagnosing satellite malfunctions using cases. Our system remembers typical failure cases and diagnoses a current malfunction by indexing into the case base. Diverse methods are used to build a user-friendly interface that allows human experts to build the knowledge base easily.

  12. Model to predict hyperbilirubinemia in healthy term and near-term newborns with exclusive breast feeding.

    PubMed

    Huang, Hsin-Chung; Yang, Hwai-I; Chang, Yu-Hsun; Chang, Rui-Jane; Chen, Mei-Huei; Chen, Chien-Yi; Chou, Hung-Chieh; Hsieh, Wu-Shiun; Tsao, Po-Nien

    2012-12-01

    The aim of this study was to identify high-risk newborns who will subsequently develop significant hyperbilirubinemia during Days 4 to 10 of life, using clinical data from the first three days of life. We retrospectively collected exclusively breastfed healthy term and near-term newborns born in our nursery between May 1, 2002, and June 30, 2005. Clinical data, including serum bilirubin levels, were collected and the significant predictors were identified. A bilirubin level ≥15 mg/dL during Days 4 to 10 of life was defined as significant hyperbilirubinemia. A prediction model for subsequent hyperbilirubinemia was established. This model was externally validated in another group of newborns, enrolled by the same criteria, to test its discrimination capability. In total, 1979 neonates were collected and 1208 cases were excluded by our exclusion criteria. Finally, 771 newborns were enrolled, of whom 182 (23.6%) developed significant hyperbilirubinemia during Days 4 to 10 of life. In the logistic regression analysis, gestational age, maximal body weight loss percentage, and peak bilirubin level during the first 72 hours of life were significantly associated with subsequent hyperbilirubinemia. A prediction model was derived with an area under the receiver operating characteristic (AUROC) curve of 0.788. Model validation in the separate cohort (N = 209) showed similar discrimination capability (AUROC = 0.8340). Gestational age, maximal body weight loss percentage, and peak serum bilirubin level during the first 3 days of life have the highest predictive value for subsequent significant hyperbilirubinemia. We provide a good model to predict the risk of subsequent significant hyperbilirubinemia. Copyright © 2012. Published by Elsevier B.V.
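    The shape of such a prediction model can be sketched as a logistic regression over the three predictors the study identified. The coefficients below are hypothetical placeholders for illustration only, not the study's fitted values:

```python
import math

def risk_of_hyperbilirubinemia(gest_age_wk, max_wt_loss_pct, peak_bili_mgdl,
                               b0=-20.0, b_ga=0.2, b_wl=0.3, b_tb=1.0):
    """Logistic model sketch: P(hyperbilirubinemia) = sigmoid(linear score).

    Predictors follow the abstract (gestational age, maximal weight-loss
    percentage, peak bilirubin in the first 72 h); coefficients are
    illustrative placeholders, not the published model.
    """
    z = b0 + b_ga * gest_age_wk + b_wl * max_wt_loss_pct + b_tb * peak_bili_mgdl
    return 1.0 / (1.0 + math.exp(-z))

# Risk rises with peak bilirubin, holding the other predictors fixed.
low = risk_of_hyperbilirubinemia(38.0, 8.0, 8.0)
high = risk_of_hyperbilirubinemia(38.0, 8.0, 12.0)
```

    In practice the fitted coefficients and a probability threshold chosen from the ROC curve would determine which newborns are flagged for follow-up.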

  13. Novel Formulation of Adaptive MPC as EKF Using ANN Model: Multiproduct Semibatch Polymerization Reactor Case Study.

    PubMed

    Kamesh, Reddi; Rani, Kalipatnapu Yamuna

    2017-12-01

    In this paper, a novel formulation for nonlinear model predictive control (MPC) has been proposed incorporating the extended Kalman filter (EKF) control concept using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules focusing on online parameter estimation based on past measurements and control estimation over control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon. An industrial case study for temperature control of a multiproduct semibatch polymerization reactor posed as a challenge problem has been considered as a test bed to apply the proposed ANN-EKFMPC strategy at supervisory level as a cascade control configuration along with proportional integral controller [ANN-EKFMPC with PI (ANN-EKFMPC-PI)]. The proposed approach is formulated incorporating all aspects of MPC including move suppression factor for control effort minimization and constraint-handling capability including terminal constraints. The nominal stability analysis and offset-free tracking capabilities of the proposed controller are proved. Its performance is evaluated by comparison with a standard MPC-based cascade control approach using the same adaptive ANN model. The ANN-EKFMPC-PI control configuration has shown better controller performance in terms of temperature tracking, smoother input profiles, as well as constraint-handling ability compared with the ANN-MPC with PI approach for two products in summer and winter. The proposed scheme is found to be versatile although it is based on a purely data-driven model with online parameter estimation.
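    The move-suppression idea mentioned above can be shown with a deliberately tiny sketch. This is not the paper's ANN-EKF-MPC formulation: it is a one-step MPC for a hypothetical scalar linear model y = a*y_prev + b*u, where the weight w penalizes control moves and so smooths the input profile:

```python
def mpc_move(y_prev, u_prev, setpoint, a=0.9, b=0.5, w=0.1):
    """One-step MPC with move suppression (illustrative scalar case).

    Minimizes (a*y_prev + b*u - setpoint)**2 + w*(u - u_prev)**2 over u.
    Setting the derivative to zero gives the closed-form optimum below.
    """
    return (b * (setpoint - a * y_prev) + w * u_prev) / (b * b + w)
```

    With w = 0 the controller inverts the model for exact one-step tracking; increasing w keeps u near u_prev at the cost of slower tracking, which is the trade-off behind the "smoother input profiles" reported in the abstract.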

  14. Model Based Mission Assurance: Emerging Opportunities for Robotic Systems

    NASA Technical Reports Server (NTRS)

    Evans, John W.; DiVenti, Tony

    2016-01-01

    The emergence of Model Based Systems Engineering (MBSE) within a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiency across the assurance functions. The MBSE environment supports not only system architecture development, but also Systems Safety, Reliability, and Risk Analysis concurrently in the same framework. Linking to detailed design will further improve assurance capabilities to support failure avoidance and mitigation in flight systems. This is also leading to new assurance functions, including model assurance and management of uncertainty in the modeling environment. Further, assurance cases, structured hierarchical arguments or models, are emerging as a basis for a comprehensive viewpoint from which to support Model Based Mission Assurance (MBMA).

  15. Simulation of dual carbon-bromine stable isotope fractionation during 1,2-dibromoethane degradation.

    PubMed

    Jin, Biao; Nijenhuis, Ivonne; Rolle, Massimo

    2018-06-01

    We performed a model-based investigation to simultaneously predict the evolution of concentration, as well as stable carbon and bromine isotope fractionation, during 1,2-dibromoethane (EDB, ethylene dibromide) transformation in a closed system. The modelling approach considers bond-cleavage mechanisms during different reactions and allows evaluation of dual carbon-bromine isotopic signals for chemical and biotic reactions, including aerobic and anaerobic biological transformation, dibromoelimination by Zn(0) and alkaline hydrolysis. The proposed model allowed us to accurately simulate the evolution of concentrations and isotope data observed in a previous laboratory study and to successfully identify different reaction pathways. Furthermore, we illustrated the model capabilities in degradation scenarios involving complex reaction systems. Specifically, we examined (i) the case of sequential multistep transformation of EDB and the isotopic evolution of the parent compound, the intermediate and the reaction product and (ii) the case of parallel competing abiotic pathways of EDB transformation in alkaline solution.

  16. Tomographic imaging of non-local media based on space-fractional diffusion models

    NASA Astrophysics Data System (ADS)

    Buonocore, Salvatore; Semperlotti, Fabio

    2018-06-01

    We investigate a generalized tomographic imaging framework applicable to a class of inhomogeneous media characterized by non-local diffusive energy transport. Under these conditions, the transport mechanism is well described by fractional-order continuum models capable of capturing anomalous diffusion that would otherwise remain undetected when using traditional integer-order models. Although the underlying idea of the proposed framework is applicable to any transport mechanism, the case of fractional heat conduction is presented as a specific example to illustrate the methodology. Using numerical simulations, we show how complex inhomogeneous media involving non-local transport can be successfully imaged if fractional-order models are used. In particular, the results show that properly recognizing and accounting for the fractional character of the host medium not only achieves increased resolution but, in cases of strong and spatially distributed non-locality, represents the only viable approach to a successful reconstruction.

  17. Ultrasensitivity in signaling cascades revisited: Linking local and global ultrasensitivity estimations.

    PubMed

    Altszyler, Edgar; Ventura, Alejandra C; Colman-Lerner, Alejandro; Chernomoretz, Ariel

    2017-01-01

    Ultrasensitive response motifs, capable of converting graded stimuli into binary responses, are well-conserved in signal transduction networks. Although it has been shown that a cascade arrangement of multiple ultrasensitive modules can enhance the system's ultrasensitivity, how a given combination of layers affects a cascade's ultrasensitivity remains an open question for the general case. Here, we introduce a methodology that allows us to determine the presence of sequestration effects and to quantify the relative contribution of each module to the overall cascade's ultrasensitivity. The proposed analysis framework provides a natural link between global and local ultrasensitivity descriptors and it is particularly well-suited to characterize and understand mathematical models used to study real biological systems. As a case study, we have considered three mathematical models introduced by O'Shaughnessy et al. to study a tunable synthetic MAPK cascade, and we show how our methodology can help modelers better understand alternative models.
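    A standard global ultrasensitivity descriptor in this literature is the effective Hill coefficient, estimated from the inputs that give 10% and 90% of maximal output. The sketch below shows only this generic estimator, not the paper's full local/global framework:

```python
import math

def hill_coefficient(ec10, ec90):
    """Effective (global) Hill coefficient: n_H = ln(81) / ln(EC90 / EC10)."""
    return math.log(81.0) / math.log(ec90 / ec10)

def ec_for_hill(n, fraction, k=1.0):
    """Input giving the requested output fraction for y = x**n / (k**n + x**n)."""
    return k * (fraction / (1.0 - fraction)) ** (1.0 / n)
```

    For a true Hill function with exponent n, EC90/EC10 = 81**(1/n), so the estimator recovers n exactly; cascading ultrasensitive layers can push the effective n_H of the whole system above that of any single layer.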

  18. Ultrasensitivity in signaling cascades revisited: Linking local and global ultrasensitivity estimations

    PubMed Central

    Altszyler, Edgar; Ventura, Alejandra C.; Colman-Lerner, Alejandro; Chernomoretz, Ariel

    2017-01-01

    Ultrasensitive response motifs, capable of converting graded stimuli into binary responses, are well-conserved in signal transduction networks. Although it has been shown that a cascade arrangement of multiple ultrasensitive modules can enhance the system’s ultrasensitivity, how a given combination of layers affects a cascade’s ultrasensitivity remains an open question for the general case. Here, we introduce a methodology that allows us to determine the presence of sequestration effects and to quantify the relative contribution of each module to the overall cascade’s ultrasensitivity. The proposed analysis framework provides a natural link between global and local ultrasensitivity descriptors and it is particularly well-suited to characterize and understand mathematical models used to study real biological systems. As a case study, we have considered three mathematical models introduced by O’Shaughnessy et al. to study a tunable synthetic MAPK cascade, and we show how our methodology can help modelers better understand alternative models. PMID:28662096

  19. Modeling AWSoM CMEs with EEGGL: A New Approach for Space Weather Forecasting

    NASA Astrophysics Data System (ADS)

    Jin, M.; Manchester, W.; van der Holst, B.; Sokolov, I.; Toth, G.; Vourlidas, A.; de Koning, C. A.; Gombosi, T. I.

    2015-12-01

    The major source of destructive space weather is coronal mass ejections (CMEs). However, our understanding of CMEs and their propagation in the heliosphere is limited by insufficient observations. Therefore, the development of first-principles numerical models plays a vital role both in theoretical investigation and in providing space weather forecasts. Here, we present results of simulating CME propagation from the Sun to 1 AU by combining the analytical Gibson & Low (GL) flux rope model with the state-of-the-art solar wind model AWSoM. We also provide an approach for transferring this research model to a space weather forecasting tool by demonstrating how the free parameters of the GL flux rope can be prescribed based on remote observations via the new Eruptive Event Generator by Gibson-Low (EEGGL) toolkit. This capability allows us to predict the long-term evolution of the CME in interplanetary space. We perform proof-of-concept case studies to show the capability of the model to capture the physical processes that determine CME evolution while also reproducing many observed features both in the corona and at 1 AU. We discuss the potential and limitations of this model as a future space weather forecasting tool.

  20. Marking Machinima: A Case Study in Assessing Student Use of a Web 2.0 Technology

    ERIC Educational Resources Information Center

    Barwell, Graham; Moore, Chris; Walker, Ruth

    2011-01-01

    The model of learning best suited to the future may be one which sees learning as the process of managing the different kinds of participation an individual might have in complex social systems. Learning capability and engagement is thus dependent on the relationship between an individual identity and social systems. We report on the incorporation…

  1. Students with Autism in Regular Classes: A Long-Term Follow-Up Study of a Satellite Class Transition Model

    ERIC Educational Resources Information Center

    Keane, Elaine; Aldridge, Fiona Jane; Costley, Debra; Clark, Trevor

    2012-01-01

    Students with autism spectrum disorders (ASDs) are increasingly being educated within mainstream schools. While there is often an assumption that students with ASD who are academically capable will succeed in an inclusive educational placement, previous research has indicated that this is not always the case. Indeed, it seems that students with…

  2. Temperature-Dependent Short-Circuit Capability of Silicon Carbide Power MOSFETs

    DOE PAGES

    Wang, Zhiqiang; Shi, Xiaojie; Tolbert, Leon M.; ...

    2016-02-01

    Our paper presents a comprehensive short-circuit ruggedness evaluation and numerical investigation of up-to-date commercial silicon carbide (SiC) MOSFETs. The short-circuit capability of three types of commercial 1200-V SiC MOSFETs is tested under various conditions, with case temperatures from 25 to 200 degrees C and dc bus voltages from 400 to 750 V. It is found that the commercial SiC MOSFETs can withstand short-circuit current for only several microseconds with a dc bus voltage of 750 V and case temperature of 200 degrees C. Moreover, the experimental short-circuit behaviors are compared, and analyzed through numerical thermal dynamic simulation. Specifically, an electrothermal model is built to estimate the device internal temperature distribution, considering the temperature-dependent thermal properties of SiC material. Based on the temperature information, a leakage current model is derived to calculate the main leakage current components (i.e., thermal, diffusion, and avalanche generation currents). Finally, numerical results show that the short-circuit failure mechanisms of SiC MOSFETs can be thermal generation current induced thermal runaway or high-temperature-related gate oxide damage.

  3. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    NASA Technical Reports Server (NTRS)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimation of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular, this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities, while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  4. Multisource data assimilation in a Richards equation-based integrated hydrological model: a real-world application to an experimental hillslope

    NASA Astrophysics Data System (ADS)

    Camporese, M.; Botto, A.

    2017-12-01

    Data assimilation is becoming increasingly popular in hydrological and earth system modeling, as it allows for direct integration of multisource observation data into model predictions and for uncertainty reduction. For this reason, data assimilation has recently also become the focus of much attention for integrated surface-subsurface hydrological models, whereby multiple terrestrial compartments (e.g., snow cover, surface water, groundwater) are solved simultaneously in an attempt to tackle environmental problems holistically. Recent examples include the joint assimilation of water table, soil moisture, and river discharge measurements in catchment models of coupled surface-subsurface flow using the ensemble Kalman filter (EnKF). Although the EnKF has been specifically developed to deal with nonlinear models, integrated hydrological models based on the Richards equation still represent a challenge, due to strong nonlinearities that may significantly affect the filter performance. Thus, more studies are needed to investigate the capabilities of the EnKF to correct the system state and identify parameters in cases where the unsaturated zone dynamics are dominant. Here, the model CATHY (CATchment HYdrology) is applied to reproduce the hydrological dynamics observed in an experimental hillslope, equipped with tensiometers, water content reflectometer probes, and tipping bucket flow gages to monitor the hillslope response to a series of artificial rainfall events. We assimilate pressure head, soil moisture, and subsurface outflow with the EnKF in a number of assimilation scenarios and discuss the challenges, issues, and tradeoffs arising from the assimilation of multisource data in a real-world test case, with particular focus on the capability of data assimilation to update the subsurface parameters.
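    The EnKF analysis step at the heart of such studies can be sketched generically. This is not the CATHY implementation; the observation operator H, observation variance, and toy ensemble below are illustrative, but the update rule (perturbed observations, ensemble covariance, Kalman gain) is the standard stochastic EnKF:

```python
import numpy as np

def enkf_update(ensemble, H, y, obs_var, rng):
    """Stochastic EnKF analysis: nudge each member toward the observation.

    ensemble: (n_state, n_ens) matrix of states (and, when augmented,
    parameters); H: linear observation operator; y: observation vector.
    """
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_ens - 1)                    # ensemble covariance
    S = H @ P @ H.T + obs_var * np.eye(len(y))   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    y_pert = y[:, None] + rng.normal(0.0, obs_var ** 0.5, (len(y), n_ens))
    return ensemble + K @ (y_pert - H @ ensemble)

# Toy example: 2-state ensemble, one accurate observation of state 0.
rng = np.random.default_rng(0)
ens = rng.normal(0.0, 1.0, size=(2, 50))
H = np.array([[1.0, 0.0]])
y = np.array([5.0])
updated = enkf_update(ens, H, y, obs_var=0.04, rng=rng)
```

    Parameter identification, as discussed in the abstract, is commonly done by appending the parameters to the state vector so the same gain updates them through their ensemble correlation with the observed states.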

  5. Worst case analysis: Earth sensor assembly for the tropical rainfall measuring mission observatory

    NASA Technical Reports Server (NTRS)

    Conley, Michael P.

    1993-01-01

    This worst case analysis verifies that the TRMM ESA electronic design is capable of maintaining performance requirements when subjected to worst case circuit conditions. The TRMM ESA design is a proven heritage design, capable of withstanding the most adverse circuit conditions. Changes made to the baseline DMSP design are relatively minor and do not adversely affect the worst case analysis of the TRMM ESA electrical design.

  6. Evaluation of Liquid Fuel Spray Models for Hybrid RANS/LES and DLES Prediction of Turbulent Reactive Flows

    NASA Astrophysics Data System (ADS)

    Afshar, Ali

    An evaluation of Lagrangian-based, discrete-phase models for multi-component liquid sprays encountered in the combustors of gas turbine engines is considered. In particular, the spray modeling capabilities of the commercial software ANSYS Fluent were evaluated. Spray modeling was performed for various cold flow validation cases, including a liquid jet in a cross-flow, an airblast atomizer, and a high shear fuel nozzle. Droplet properties including velocity and diameter were investigated and compared with previous experimental and numerical results. Different primary and secondary breakup models were evaluated in this thesis. The secondary breakup models investigated include the Taylor analogy breakup (TAB) model, the wave model, the Kelvin-Helmholtz Rayleigh-Taylor (KHRT) model, and the stochastic secondary droplet (SSD) approach. The modeling of fuel sprays requires a proper treatment of the turbulence. Reynolds-averaged Navier-Stokes (RANS), large eddy simulation (LES), hybrid RANS/LES, and dynamic LES (DLES) were also considered for the turbulent flows involving sprays. The spray and turbulence models were evaluated using the available benchmark experimental data.

  7. WIND Validation Cases: Computational Study of Thermally-perfect Gases

    NASA Technical Reports Server (NTRS)

    DalBello, Teryn; Georgiadis, Nick (Technical Monitor)

    2002-01-01

    The ability of the WIND Navier-Stokes code to predict the physics of multi-species gases is investigated in support of future high-speed, high-temperature propulsion applications relevant to NASA's Space Transportation efforts. Three benchmark cases are investigated to evaluate the capability of the WIND chemistry model to accurately predict the aerodynamics of multi-species, chemically non-reacting (frozen) gases. Case 1 represents turbulent mixing of sonic hydrogen and supersonic vitiated air. Case 2 consists of heated and unheated round supersonic jets exiting to ambient conditions. Case 3 represents 2-D flow through a converging-diverging Mach 2 nozzle. For Case 1, the WIND results agree fairly well with experimental results and show that significant mixing occurs downstream of the hydrogen injection point. For Case 2, the results show that the Wilke and Sutherland viscosity laws give similar results, and that the available SST turbulence model does not predict round supersonic nozzle flows accurately. For Case 3, results show that experimental, frozen, and 1-D gas results agree fairly well, and that frozen, homogeneous, multi-species gas calculations can be approximated by running in perfect gas mode while specifying the mixture gas constant and ratio of specific heats.
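    The perfect-gas approximation for a frozen, homogeneous mixture noted for Case 3 amounts to using mass-fraction-weighted mixture properties. The sketch below illustrates that bookkeeping with standard gas data (an illustration of the thermodynamics, not the WIND implementation; the vitiated-air composition is an assumed example):

```python
R_UNIV = 8.314462618  # universal gas constant, J/(mol K)

def mixture_properties(mass_fractions, molar_masses, cp_values):
    """Mixture gas constant and ratio of specific heats for a frozen mixture.

    mass_fractions: dict species -> Y_i (must sum to 1)
    molar_masses: kg/mol; cp_values: J/(kg K)
    """
    R_mix = sum(Y * R_UNIV / molar_masses[s] for s, Y in mass_fractions.items())
    cp_mix = sum(Y * cp_values[s] for s, Y in mass_fractions.items())
    gamma = cp_mix / (cp_mix - R_mix)
    return R_mix, gamma

# Example: vitiated air crudely approximated as N2/O2/H2O by mass.
Y = {"N2": 0.70, "O2": 0.15, "H2O": 0.15}
M = {"N2": 0.028, "O2": 0.032, "H2O": 0.018}
cp = {"N2": 1040.0, "O2": 918.0, "H2O": 1996.0}
R_mix, gamma = mixture_properties(Y, M, cp)
```

    Feeding R_mix and gamma to a perfect-gas solver reproduces the frozen multi-species result as long as the composition, and hence the mixture properties, stays fixed.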

  8. Determining Financial Capability of SSI/SSDI Beneficiaries with Psychiatric Disabilities: A Case Series

    PubMed Central

    Lazar, Christina M.; Black, Anne C.; McMahon, Thomas J; O’Shea, Kevin; Rosen, Marc I.

    2015-01-01

    Objective Social Security beneficiaries’ liberty is constrained if they are judged incapable of managing their disability payments and are assigned a fiduciary to manage benefit payments on their behalf. Conversely, beneficiaries’ well-being may be compromised if they misspend money that they need to survive. Several studies have shown that determinations of financial capability are made inconsistently and that capability guidelines appear to be applied inconsistently in practice. This case series describes the ambiguities remaining for a small number of individuals even after published criteria for capability (failing to meet basic needs and/or harmful spending on drugs) are applied. Methods Trained, experienced assessors rated the financial capability of 119 individuals in intensive outpatient or inpatient psychiatric facilities who received SSI or SSDI payments. Ten individuals’ cases were determined difficult to judge. Results Six sources of ambiguity were identified by case review: distinguishing incapability from the challenges of navigating poverty, the amount of nonessential spending needed to be considered incapable, the amount of spending on harmful things needed to be considered incapable, how intermittent periods of capability and incapability should be considered, the relative weighting of past behavior and future plans to change, and discrepancies between different sources of information. Conclusion The cases raise fundamental questions about what financial incapability is, but also illustrate how detailed consideration of beneficiaries’ living situations and decision making can inform the difficult dichotomous decision about capability. PMID:25727116

  9. Educating for health service reform: clinical learning, governance and capability - a case study protocol.

    PubMed

    Gardner, Anne; Gardner, Glenn; Coyer, Fiona; Gosby, Helen

    2016-01-01

    The nurse practitioner is a growing clinical role in Australia and internationally, with an expanded scope of practice including prescribing, referring and diagnosing. However, key gaps exist in nurse practitioner education regarding governance of specialty clinical learning and teaching. Specifically, there is no internationally accepted framework against which to measure the quality of clinical learning and teaching for advanced specialty practice. A case study design will be used to investigate educational governance and capability theory in nurse practitioner education. Nurse practitioner students, their clinical mentors and university academic staff, from an Australian university that offers an accredited nurse practitioner Master's degree, will be invited to participate in the study. Semi-structured interviews will be conducted with students and their respective clinical mentors and university academic staff to investigate learning objectives related to educational governance and attributes of capability learning. Limited demographic data on age, gender, specialty, education level and nature of the clinical healthcare learning site will also be collected. Episodes of nurse practitioner student specialty clinical learning will be observed and documentation from the students' healthcare learning sites will be collected. Descriptive statistics will be used to report age groups, areas of specialty and types of facilities where clinical learning and teaching is observed. Qualitative data from interviews, observations and student documents will be coded, aggregated and explored to inform a framework of educational governance, to confirm the existing capability framework and describe any additional characteristics of capability and capability learning. This research has widespread significance and will contribute to ongoing development of the Australian health workforce. 
Stakeholders from industry and academic bodies will be involved in shaping the framework that guides the quality and governance of clinical learning and teaching in specialty nurse practitioner practice. Through developing standards for advanced clinical learning and teaching, and furthering understanding of capability theory for advanced healthcare practitioners, this research will contribute to evidence-based models of advanced specialty postgraduate education.

  10. Development and case study of a science-based software platform to support policy making on air quality.

    PubMed

    Zhu, Yun; Lao, Yanwen; Jang, Carey; Lin, Chen-Jen; Xing, Jia; Wang, Shuxiao; Fu, Joshua S; Deng, Shuang; Xie, Junping; Long, Shicheng

    2015-01-01

    This article describes the development and implementation of a novel software platform that supports real-time, science-based policy making on air quality through a user-friendly interface. The software, RSM-VAT, uses a response surface modeling (RSM) methodology and serves as a visualization and analysis tool (VAT) for three-dimensional air quality data obtained by atmospheric models. The software features a number of powerful and intuitive data visualization functions for illustrating the complex nonlinear relationship between emission reductions and air quality benefits. A case study of the contiguous U.S. demonstrates that the enhanced RSM-VAT is capable of reproducing the air quality model results with a Normalized Mean Bias <2% and assisting in air quality policy making in near real time. Copyright © 2014. Published by Elsevier B.V.
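    The response-surface idea can be sketched with synthetic data (illustrative only, not RSM-VAT output): fit a low-order polynomial surrogate mapping an emission-reduction level to a modeled pollutant concentration, so that policy scenarios can be evaluated instantly instead of rerunning the full atmospheric model:

```python
import numpy as np

# Hypothetical training points: a handful of full-model runs at different
# emission-reduction levels (fraction of emissions cut vs. resulting ug/m3).
reductions = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
conc = np.array([40.0, 34.5, 29.8, 25.9, 22.7, 20.2])

# Quadratic response surface fit by least squares; evaluating the surrogate
# is essentially free compared to a full atmospheric-model run.
coeffs = np.polyfit(reductions, conc, deg=2)
surrogate = np.poly1d(coeffs)
```

    Real response surfaces span many emission sectors and pollutants at once, but the principle is the same: a cheap surrogate interpolating a designed set of full-model runs.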

  11. An economical method of analyzing transient motion of gas-lubricated rotor-bearing systems.

    NASA Technical Reports Server (NTRS)

    Falkenhagen, G. L.; Ayers, A. L.; Barsalou, L. C.

    1973-01-01

    A method of economically evaluating the hydrodynamic forces generated in a gas-lubricated tilting-pad bearing is presented. The numerical method consists of solving the case of the infinite width bearing and then converting this solution to the case of the finite bearing by accounting for end leakage. The approximate method is compared to the finite-difference solution of Reynolds equation and yields acceptable accuracy while running about one hundred times faster. A mathematical model of a gas-lubricated tilting-pad vertical rotor system is developed. The model is capable of analyzing a two-bearing-rotor system in which the rotor center of mass is not at midspan by accounting for gyroscopic moments. The numerical results from the model are compared to actual test data as well as analytical results of other investigators.

  12. A Storm Surge and Inundation Model of the Back River Watershed at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Loftis, Jon Derek; Wang, Harry V.; DeYoung, Russell J.

    2013-01-01

    This report on a Virginia Institute of Marine Science project demonstrates that sub-grid modeling technology (now part of the Chesapeake Bay Inundation Prediction System, CIPS) can incorporate high-resolution Lidar measurements provided by NASA Langley Research Center into the sub-grid model framework to resolve detailed topographic features, for use as a hydrological transport model for run-off simulations within NASA Langley and Langley Air Force Base. Rainfall over land accumulates in the ditches and channels resolved by the model sub-grid, and this capability was tested by simulating the run-off induced by heavy precipitation. Possessing capabilities for both storm surge and run-off simulation, the CIPS model was then applied to simulate real storm events, starting with Hurricane Isabel in 2003. It is shown that the model can generate highly accurate on-land inundation maps, as demonstrated by excellent comparison of the Langley tidal gauge time series data (CAPABLE.larc.nasa.gov) and the spatial patterns of real storm wrack line measurements with the model results for Hurricanes Isabel (2003), Irene (2011), and a 2009 Nor'easter. With confidence built upon the model's performance, sea level rise scenarios from the ICCP (International Climate Change Partnership) were also included in the model scenario runs to simulate future inundation cases.

  13. Multiple-function multi-input/multi-output digital control and on-line analysis

    NASA Technical Reports Server (NTRS)

    Hoadley, Sherwood T.; Wieseman, Carol D.; Mcgraw, Sandra M.

    1992-01-01

    The design and capabilities of two digital controller systems (DCSs) for aeroelastic wind-tunnel models are described. The first allowed control of flutter while performing roll maneuvers with wing load control, as well as coordinating the acquisition, storage, and transfer of data for on-line analysis. This system, which employs several digital signal multi-processor (DSP) boards programmed in high-level software languages, is housed in a SUN Workstation environment. A second DCS provides a measure of wind-tunnel safety by functioning as a trip system during testing in the case of high model dynamic response or failure of the first DCS. The second DCS uses National Instruments LabVIEW software and hardware within a Macintosh environment.

  14. Assessment of environments for Mars Science Laboratory entry, descent, and surface operations

    USGS Publications Warehouse

    Vasavada, Ashwin R.; Chen, Allen; Barnes, Jeffrey R.; Burkhart, P. Daniel; Cantor, Bruce A.; Dwyer-Cianciolo, Alicia M.; Fergason, Robini L.; Hinson, David P.; Justh, Hilary L.; Kass, David M.; Lewis, Stephen R.; Mischna, Michael A.; Murphy, James R.; Rafkin, Scot C.R.; Tyler, Daniel; Withers, Paul G.

    2012-01-01

    The Mars Science Laboratory mission aims to land a car-sized rover on Mars' surface and operate it for at least one Mars year in order to assess whether its field area was ever capable of supporting microbial life. Here we describe the approach used to identify, characterize, and assess environmental risks to the landing and rover surface operations. Novel entry, descent, and landing approaches will be used to accurately deliver the 900-kg rover, including the ability to sense and "fly out" deviations from a best-estimate atmospheric state. A joint engineering and science team developed methods to estimate the range of potential atmospheric states at the time of arrival and to quantitatively assess the spacecraft's performance and risk given its particular sensitivities to atmospheric conditions. Numerical models are used to calculate the atmospheric parameters, with observations used to define model cases, tune model parameters, and validate results. This joint program has resulted in a spacecraft capable of accessing, with minimal risk, the four finalist sites chosen for their scientific merit. The capability to operate the landed rover over the latitude range of candidate landing sites, and for all seasons, was verified against an analysis of surface environmental conditions described here. These results, from orbital and model data sets, also drive engineering simulations of the rover's thermal state that are used to plan surface operations.

  15. Probabilistic migration modelling focused on functional barrier efficiency and low migration concepts in support of risk assessment.

    PubMed

    Brandsch, Rainer

    2017-10-01

    Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters, such as diffusion and partition coefficients, related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by upper-limit concepts, which turned out to be of limited applicability because they grossly overestimated migration. Probabilistic migration modelling makes it possible to account for the uncertainty of the mass-transfer parameters as well as of other model inputs. With respect to a functional barrier, the most important parameters include the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and applies Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (diffusion coefficient and layer thickness), predicts migration results with associated uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented through three case studies: (1) sensitivity analysis, (2) functional barrier efficiency, and (3) validation by experimental testing. Based on the migration predicted by probabilistic modelling and the related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts becomes possible, as does identification of associated migration risks and potential safety concerns at an early stage of packaging development. Furthermore, dedicated selection of materials exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
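    The Monte Carlo step described above can be sketched as follows. This is a minimal illustration, not the paper's tool: the input distributions, parameter values, and the simplified short-time Fickian migration formula are all assumptions chosen for demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000  # Monte Carlo samples

    # Hypothetical input distributions (illustrative, not from the paper):
    # diffusion coefficient D [cm^2/s] log-normal; layer thickness L [cm] normal.
    D = rng.lognormal(mean=np.log(1e-10), sigma=0.5, size=n)
    L = np.clip(rng.normal(loc=0.010, scale=0.001, size=n), 1e-4, None)

    c0 = 100.0          # initial migrant concentration in the layer [mg/kg]
    t = 10 * 86400.0    # contact time [s]

    # Simplified short-time Fickian migration estimate, scaled by thickness
    # (a sketch of the mass-transfer step, not the full migration model).
    migration = 2 * c0 * np.sqrt(D * t / np.pi) / L

    lo, med, hi = np.percentile(migration, [5, 50, 95])
    print(f"median: {med:.3g}   90% interval: [{lo:.3g}, {hi:.3g}]")
    ```

    Random sampling of the uncertain inputs propagates their spread through the migration estimate, yielding the confidence intervals the abstract refers to rather than a single upper-limit value.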

  16. Benchmarking hydrological model predictive capability for UK River flows and flood peaks.

    NASA Astrophysics Data System (ADS)

    Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten

    2017-04-01

    Data and hydrological models are now available for national-scale hydrological analyses. However, hydrological model performance varies between catchments, and lumped conceptual models cannot produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These are all lumped models run at a daily timestep, but they differ in structural architecture and process parameterisations, and therefore produce different but equally plausible simulations. We applied FUSE over the 20-year period 1988-2008, within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure, and parameter set using standard performance metrics, calculated both for the whole time series and for each season to assess seasonal differences in performance. The GLUE framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time series, and additionally annual-maximum prediction bounds for each catchment. The results show that model performance varies significantly in space and time depending on catchment characteristics including climate, geology, and human impact. We identify regions where models systematically fail to produce good results and present reasons why this may be the case. We also identify regions or catchment characteristics where one model performs better than others, and explore which structural components or parameterisations enable certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment by examining the ability of the models to produce discharge prediction bounds that successfully bound the observed discharge.
These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches that better represent different catchment and climate typologies.
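    The GLUE bounding procedure described above can be sketched in a few lines. The data here are synthetic stand-ins (not FUSE output), and the Nash-Sutcliffe likelihood measure and behavioural threshold of 0.5 are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_days, n_sets = 365, 500

    # Synthetic "observed" daily flow and an ensemble of model runs,
    # one row per sampled parameter set (illustrative stand-ins).
    obs = 5 + 3 * np.abs(np.sin(np.arange(n_days) / 30.0))
    sims = obs + rng.normal(0.0, 0.5, size=(n_sets, n_days))

    # GLUE step 1: likelihood measure per parameter set (Nash-Sutcliffe here).
    nse = 1 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

    # GLUE step 2: retain only "behavioural" parameter sets.
    behavioural = sims[nse > 0.5]

    # GLUE step 3: 5th/95th percentile prediction bounds per day
    # (equal weighting for brevity; GLUE typically weights by likelihood).
    lower = np.percentile(behavioural, 5, axis=0)
    upper = np.percentile(behavioural, 95, axis=0)

    coverage = np.mean((obs >= lower) & (obs <= upper))
    print(f"behavioural sets: {len(behavioural)}, coverage: {coverage:.2f}")
    ```

    The final coverage statistic corresponds to the predictive-capability check in the abstract: the fraction of observed daily flows that fall inside the simulated 5th-95th percentile bounds.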

  17. Incorporation of Personal Single Nucleotide Polymorphism (SNP) Data into a National Level Electronic Health Record for Disease Risk Assessment, Part 3: An Evaluation of SNP Incorporated National Health Information System of Turkey for Prostate Cancer

    PubMed Central

    Beyan, Timur

    2014-01-01

    Background A personalized medicine approach provides opportunities for predictive and preventive medicine. Using genomic, clinical, environmental, and behavioral data, the tracking and management of individual wellness is possible. A prolific way to carry this personalized approach into routine practices can be accomplished by integrating clinical interpretations of genomic variations into electronic medical records (EMRs)/electronic health records (EHRs). Today, various central EHR infrastructures have been constituted in many countries of the world, including Turkey. Objective As an initial attempt to develop a sophisticated infrastructure, we have concentrated on incorporating the personal single nucleotide polymorphism (SNP) data into the National Health Information System of Turkey (NHIS-T) for disease risk assessment, and evaluated the performance of various predictive models for prostate cancer cases. We present our work as a three part miniseries: (1) an overview of requirements, (2) the incorporation of SNP data into the NHIS-T, and (3) an evaluation of SNP data incorporated into the NHIS-T for prostate cancer. Methods In the third article of this miniseries, we have evaluated the proposed complementary capabilities (ie, knowledge base and end-user application) with real data. Before the evaluation phase, clinicogenomic associations about increased prostate cancer risk were extracted from knowledge sources, and published predictive genomic models assessing individual prostate cancer risk were collected. To evaluate complementary capabilities, we also gathered personal SNP data of four prostate cancer cases and fifteen controls. Using these data files, we compared various independent and model-based, prostate cancer risk assessment approaches. Results Through the extraction and selection processes of SNP-prostate cancer risk associations, we collected 209 independent associations for increased risk of prostate cancer from the studied knowledge sources. 
Also, we gathered six cumulative models and two probabilistic models. Neither the cumulative models nor the assessment of independent associations produced impressive results; one of the probabilistic, model-based interpretations was more successful than the others. In the envirobehavioral and clinical evaluations, we found that some comorbidities in particular would be useful for evaluating disease risk. Even though we had a very limited dataset, comparing the performance of different disease models and implementing them with real data as use-case scenarios helped us gain deeper insight into the proposed architecture. Conclusions In order to benefit from genomic variation data, existing EHR/EMR systems must be constructed with the capability of tracking and monitoring all aspects of personal health status (genomic, clinical, environmental, etc) around the clock, and also with the capability of suggesting evidence-based recommendations. A national-level, accredited knowledge base is a top requirement for improved end-user systems interpreting these parameters. Finally, categorization using similar individual characteristics (SNP patterns, exposure history, etc) may be an effective way to predict disease risks, but this approach needs to be concretized and supported with new studies. PMID:25600087

  18. NiftySim: A GPU-based nonlinear finite element package for simulation of soft tissue biomechanics.

    PubMed

    Johnsen, Stian F; Taylor, Zeike A; Clarkson, Matthew J; Hipwell, John; Modat, Marc; Eiben, Bjoern; Han, Lianghao; Hu, Yipeng; Mertzanidou, Thomy; Hawkes, David J; Ourselin, Sebastien

    2015-07-01

    NiftySim, an open-source finite element toolkit, has been designed to allow incorporation of high-performance soft tissue simulation capabilities into biomedical applications. The toolkit provides the option of execution on fast graphics processing unit (GPU) hardware, numerous constitutive models and solid-element options, membrane and shell elements, and contact modelling facilities, in a simple-to-use library. The toolkit is founded on the total Lagrangian explicit dynamics (TLED) algorithm, which has been shown to be efficient and accurate for simulation of soft tissues. The base code is written in C++, and GPU execution is achieved using the nVidia CUDA framework. In most cases, interaction with the underlying solvers can be achieved through a single Simulator class, which may be embedded directly in third-party applications such as surgical guidance systems. Advanced capabilities such as contact modelling and nonlinear constitutive models are also provided, as are more experimental technologies like reduced-order modelling. A consistent description of the underlying solution algorithm, its implementation with a focus on GPU execution, and examples of the toolkit's usage in biomedical applications are provided. Efficient mapping of the TLED algorithm to parallel hardware results in very high computational performance, far exceeding that available in commercial packages. The NiftySim toolkit thus provides high-performance, GPU-based soft tissue simulation for research in medical image computing, surgical simulation, and surgical guidance.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaohu; Shi, Di; Wang, Zhiwei

    Shunt FACTS devices such as the Static Var Compensator (SVC) are capable of providing local reactive power compensation and are widely used in the network to reduce real power losses and improve the voltage profile. This paper proposes a planning model based on mixed integer conic programming (MICP) to optimally allocate SVCs in the transmission network considering load uncertainty. The load uncertainties are represented by a number of scenarios. Reformulation and linearization techniques are utilized to transform the original non-convex model into a convex second order cone programming (SOCP) model. Numerical case studies based on the IEEE 30-bus system demonstrate the effectiveness of the proposed planning model.

  20. Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking

    PubMed Central

    Bolton, Matthew L.; Bass, Ellen J.; Siminiceanu, Radu I.

    2012-01-01

    Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel’s zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented that prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development. PMID:23105914
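    The phenotype-generation idea can be sketched concretely. The following is an illustrative list-based enumeration of three of the zero-order phenotypes (omission, repetition, intrusion; jumps are omitted for brevity); the paper itself generates these inside formal task models checked by a model checker, and the radiation-therapy action names here are hypothetical.

    ```python
    def zero_order_phenotypes(task, intruder):
        """Generate erroneous variants of a normative action sequence using
        Hollnagel-style zero-order phenotypes: omission, repetition, and
        intrusion (illustrative sketch only)."""
        variants = []
        for i in range(len(task)):
            variants.append(task[:i] + task[i + 1:])                  # omit step i
            variants.append(task[:i + 1] + [task[i]] + task[i + 1:])  # repeat step i
            variants.append(task[:i] + [intruder] + task[i:])         # intrude before i
        return variants

    # Hypothetical normative sequence for a treatment task.
    normative = ["select_mode", "enter_dose", "confirm", "fire"]
    erroneous = zero_order_phenotypes(normative, "cancel")
    print(len(erroneous))  # 3 phenotypes per step -> 12 variants
    ```

    Composing these transformations, i.e., applying them to already-perturbed sequences, yields the higher-order phenotypes mentioned in the abstract.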

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clausen, Jonathan R.; Brunini, Victor E.; Moffat, Harry K.

    We develop a capability to simulate reduction-oxidation (redox) flow batteries in the Sierra Multi-Mechanics code base. Specifically, we focus on all-vanadium redox flow batteries; however, the capability is general in implementation and could be adapted to other chemistries. The electrochemical and porous flow models follow those developed in the recent publication [28]. We review the model implemented in this work and its assumptions, and we show several verification cases, including a binary electrolyte and a battery half-cell. We then compare our model implementation with the experimental results shown in [28], finding good agreement. Next, a sensitivity study is conducted for the major model parameters, which is beneficial in targeting specific features of the redox flow cell for improvement. Lastly, we simulate a three-dimensional version of the flow cell to determine the impact of plenum channels on the performance of the cell. Such channels are frequently seen in experimental designs where the current collector plates are borrowed from fuel cell designs, which use a serpentine channel etched into a solid collector plate.

  2. A 1D-2D Shallow Water Equations solver for discontinuous porosity field based on a Generalized Riemann Problem

    NASA Astrophysics Data System (ADS)

    Ferrari, Alessia; Vacondio, Renato; Dazzi, Susanna; Mignosa, Paolo

    2017-09-01

    A novel augmented Riemann solver capable of handling porosity discontinuities in 1D and 2D Shallow Water Equation (SWE) models is presented. To accurately approximate the porosity source term, a Generalized Riemann Problem is derived by adding a fictitious equation to the SWE system and imposing mass and momentum conservation across the porosity discontinuity. The modified Shallow Water Equations are investigated theoretically, and the implementation of an augmented Roe solver in a 1D Godunov-type finite volume scheme is presented. Robust treatment of transonic flows is ensured by introducing an entropy fix based on the wave pattern of the Generalized Riemann Problem. An exact Riemann solver is also derived in order to validate the numerical model. As an extension of the 1D scheme, an analogous 2D numerical model is derived and validated through test cases with radial symmetry. The capability of the 1D and 2D numerical models to capture different wave patterns is assessed against several Riemann problems.

  3. High fidelity studies of exploding foil initiator bridges, Part 3: ALEGRA MHD simulations

    NASA Astrophysics Data System (ADS)

    Neal, William; Garasi, Christopher

    2017-01-01

    Simulations of high-voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiator (EFI) detonators, have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and, in the case of EFIs, flyer velocity. Experimental methods have generally been limited to the same parameters. With the advent of complex, first-principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions and to predict a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this third paper of a three-part study, the experimental results presented in Part 2 are compared against three-dimensional MHD simulations. This improved experimental capability, along with the advanced simulations, offers an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.

  4. Integration of Engine, Plume, and CFD Analyses in Conceptual Design of Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Li, Wu; Campbell, Richard; Geiselhart, Karl; Shields, Elwood; Nayani, Sudheer; Shenoy, Rajiv

    2009-01-01

    This paper documents an integration of engine, plume, and computational fluid dynamics (CFD) analyses in the conceptual design of low-boom supersonic aircraft, using a variable fidelity approach. In particular, the Numerical Propulsion Simulation System (NPSS) is used for propulsion system cycle analysis and nacelle outer mold line definition, and a low-fidelity plume model is developed for plume shape prediction based on NPSS engine data and nacelle geometry. This model provides a capability for the conceptual design of low-boom supersonic aircraft that accounts for plume effects. Then a newly developed process for automated CFD analysis is presented for CFD-based plume and boom analyses of the conceptual geometry. Five test cases are used to demonstrate the integrated engine, plume, and CFD analysis process based on a variable fidelity approach, as well as the feasibility of the automated CFD plume and boom analysis capability.

  5. Trade Study for Neutron Transport at Low Earth Orbit: Adding Fidelity to DIORAMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClanahan, Tucker Caden; Wakeford, Daniel Tyler

    The Distributed Infrastructure Offering Real-Time Access to Modeling and Analysis (DIORAMA) software provides performance modeling capabilities for the United States Nuclear Detonation Detection System (USNDS), with a focus on characterizing Space-Based Nuclear Detonation Detection (SNDD) instrument performance [1]. A case study was conducted to extend the neutron propagation capabilities of DIORAMA to low earth orbit (LEO) and to compare the incident energy back-calculated from the time-of-flight (TOF) spectrum with the scored incident energy spectrum. As the scoring altitude lowers, the time increase due to scattering accounts for a much larger fraction of the total TOF, whereas at geosynchronous earth orbit (GEO) it is a negligible fraction [2]. The scattering smears out the TOF enough to make back-calculation of the initial energy spectrum from the TOF spectrum highly convoluted.
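    The back-calculation the abstract describes reduces, for a single neutron, to inferring kinetic energy from path length and arrival time. The sketch below uses a non-relativistic approximation and a hypothetical 500 km path with an assumed 5% scattering delay; the geometry and delay figures are illustrative, not DIORAMA values.

    ```python
    import math

    M_N = 1.67493e-27      # neutron rest mass [kg]
    MEV = 1.602176634e-13  # joules per MeV

    def tof_energy_mev(distance_m, tof_s):
        """Back-calculate neutron kinetic energy from time of flight
        (non-relativistic approximation, adequate well below ~10 MeV)."""
        v = distance_m / tof_s
        return 0.5 * M_N * v * v / MEV

    # Hypothetical geometry: a 1 MeV neutron over a 500 km path.
    v_1mev = math.sqrt(2 * 1.0 * MEV / M_N)  # speed of a 1 MeV neutron
    t_direct = 500e3 / v_1mev                # direct-path TOF
    t_scattered = t_direct * 1.05            # assumed 5% delay from scattering

    print(tof_energy_mev(500e3, t_direct))     # recovers the true 1 MeV
    print(tof_energy_mev(500e3, t_scattered))  # biased low by the delay
    ```

    The second result shows the effect the study quantifies: any scattering delay inflates the apparent TOF and biases the inferred energy downward, smearing the reconstructed spectrum.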

  6. A H∞/μ solution for microvibration mitigation in satellites: A case study

    NASA Astrophysics Data System (ADS)

    Preda, Valentin; Cieslak, Jérôme; Henry, David; Bennani, Samir; Falcoz, Alexandre

    2017-07-01

    The research work presented in this paper develops a mixed active-passive microvibration mitigation solution capable of attenuating the vibrations transmitted by reaction wheels to a satellite structure. A representative benchmark provided by the European Space Agency (ESA) and Airbus Defence and Space serves as a testbed for the proposed solution. The paper covers modeling and design issues as well as a detailed analysis of the solution within the H∞ / μ setting. In particular, an uncertainty modeling strategy is proposed to extract a Linear Fractional Transformation (LFT) model, which naturally provides insight into the dynamical interactions between plant elements such as bearing and isolator flexibility, gyroscopic effects, actuator dynamics, and feedback-loop delays. The design of the mitigation solution is formulated in the H∞ / μ framework, leading to a robust H∞ control strategy that achieves strong active attenuation performance across a wide range of reaction wheel speeds. A systematic analysis procedure based on the structured singular value μ is used to assess and demonstrate the robust stability and robust performance of the microvibration mitigation strategy. The proposed analysis method is also shown to be a powerful and reliable way to identify worst-case scenarios without relying on traditional Monte Carlo campaigns. Time-domain simulations based on a nonlinear, high-fidelity industrial simulator are included as a validation step.

  7. An Examination of a Virtual Private Network Implementation to Support a Teleworking Initiative: The Marcus Food Company Inc. Case Study

    ERIC Educational Resources Information Center

    Ferguson, Jason W.

    2010-01-01

    In this dissertation, the author examined the capabilities of virtual private networks (VPNs) in supporting teleworking environments for small businesses in the food marketing sector. The goal of this research was to develop an implementation model for small businesses in the food marketing sector that use a VPN solution to support teleworker…

  8. Numerical investigation of the vortex-induced vibration of an elastically mounted circular cylinder at high Reynolds number (Re = 10^4) and low mass ratio using the RANS code.

    PubMed

    Khan, Niaz Bahadur; Ibrahim, Zainah; Nguyen, Linh Tuan The; Javed, Muhammad Faisal; Jameel, Mohammed

    2017-01-01

    This study numerically investigates the vortex-induced vibration (VIV) of an elastically mounted rigid cylinder by using Reynolds-averaged Navier-Stokes (RANS) equations with computational fluid dynamics (CFD) tools. CFD analysis is performed for a fixed-cylinder case at Reynolds number (Re) = 10^4 and for a cylinder that is free to oscillate in the transverse direction, possesses a low mass-damping ratio, and has Re = 10^4. Previously, similar studies have been performed with three-dimensional and comparatively expensive turbulence models. In the current study, the capability and accuracy of the RANS model are validated, and its results are compared with those of detached eddy simulation, direct numerical simulation, and large eddy simulation models. All three response branches and the maximum amplitude are well captured. The two-dimensional case with the RANS shear-stress transport k-ω model, which involves minimal computational cost, is reliable and appropriate for analyzing the characteristics of VIV.

  9. Dynamic analysis of lunar lander during soft landing using explicit finite element method

    NASA Astrophysics Data System (ADS)

    Zheng, Guang; Nie, Hong; Chen, Jinbao; Chen, Chuanzhi; Lee, Heow Pueh

    2018-07-01

    In this paper, a soft-landing analysis of a lunar lander spacecraft under three loading cases was carried out in ABAQUS using the explicit finite element method. To ensure the accuracy and reliability of the simulation results, energy and mass balance criteria for the model were presented along with the underlying theory and calculation method, and the results were benchmarked against other software (LS-DYNA) to obtain a validated model. The results from the three loading cases showed that the energy and mass of the models were conserved during soft landing, satisfying the energy and mass balance criteria. The overloading response, structural steady state, and crushing stroke of the lunar lander all met its design requirements. The buffer in this model has good energy-absorbing capability, dissipating up to 84% of the initial energy. The design parameters of the model could guide the design of future manned or larger lunar landers.
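    An energy balance criterion of the kind used above can be expressed as a simple check on solver energy histories. The tolerance, energy components, and the synthetic histories below are illustrative assumptions, not values from the paper or from ABAQUS/LS-DYNA output.

    ```python
    def energy_balance_ok(kinetic, internal, hourglass, external_work, tol=0.01):
        """Check an explicit-dynamics energy balance: at every output step the
        total energy should stay within `tol` (here 1%) of the external work
        done on the model. (Illustrative criterion; thresholds vary by
        solver and problem.)"""
        total = [k + i + h for k, i, h in zip(kinetic, internal, hourglass)]
        return all(abs(t - w) <= tol * max(abs(w), 1e-12)
                   for t, w in zip(total, external_work))

    # Synthetic histories: the impact converts kinetic into internal energy
    # while total energy tracks the (constant) external work input.
    ke = [100.0, 60.0, 20.0, 5.0]    # kinetic energy
    ie = [0.0, 39.5, 79.0, 94.2]     # internal (strain/absorbed) energy
    hg = [0.0, 0.4, 0.8, 0.9]        # hourglass (spurious-mode) energy
    wk = [100.0, 100.0, 100.0, 100.0]  # external work

    print(energy_balance_ok(ke, ie, hg, wk))  # True: balance holds within 1%
    ```

    A run failing this check would signal numerical energy gain or loss, which is why such criteria are applied before trusting crushing-stroke or overload results.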

  10. NDARC NASA Design and Analysis of Rotorcraft. Appendix 5; Theory

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2017-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.

  11. NDARC: NASA Design and Analysis of Rotorcraft. Appendix 3; Theory

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2016-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts.
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.

  12. NDARC NASA Design and Analysis of Rotorcraft - Input, Appendix 2

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2016-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts.
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tilt-rotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.
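    The sum-of-component-attributes bookkeeping described above lends itself to a compact sketch. Everything below is a hypothetical illustration, not NDARC code; the component names, attributes, and values are invented:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One aircraft component with its estimated attributes."""
    name: str
    weight_lb: float       # component weight estimate
    drag_area_ft2: float   # equivalent flat-plate drag area

def aircraft_totals(components):
    """Aircraft attributes as the sum of component attributes."""
    weight = sum(c.weight_lb for c in components)
    drag = sum(c.drag_area_ft2 for c in components)
    return weight, drag

parts = [
    Component("fuselage", 1200.0, 4.0),
    Component("main rotor", 900.0, 1.5),
    Component("tail rotor", 150.0, 0.3),
]
total_weight, total_drag = aircraft_totals(parts)
```

    Adding a new component type then amounts to appending another entry, which mirrors the architectural point about accommodating new components.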

  13. NDARC NASA Design and Analysis of Rotorcraft. Appendix 6; Input

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2017-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.

  14. NDARC NASA Design and Analysis of Rotorcraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne R.

    2009-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool intended to support both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility; a hierarchy of models; and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts.
Specific rotorcraft configurations considered are single main-rotor and tailrotor helicopter; tandem helicopter; coaxial helicopter; and tiltrotors. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.

  15. NDARC - NASA Design and Analysis of Rotorcraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2015-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.

  16. NDARC NASA Design and Analysis of Rotorcraft Theory Appendix 1

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2016-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. 
Specific rotorcraft configurations considered are single-main-rotor and tail-rotor helicopter, tandem helicopter, coaxial helicopter, and tiltrotor. The architecture of the code accommodates addition of new or higher-fidelity attribute models for a component, as well as addition of new components.

  17. The dynamic flexural response of propeller blades. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Djordjevic, S. Z.

    1982-01-01

    The determination of the torsional constants of three blade models having NACA four-digit symmetrical airfoil cross sections is presented. Values were obtained for these models analytically and experimentally. Results were also obtained for three other models having rectangular, elliptical, and parabolic cross sections. Complete modal analyses were performed for five blade models. The identification of modal parameters was done for cases when the blades were modeled as either undamped or damped multi-degree-of-freedom systems. For the experimental phase of this study, the modal testing was performed using a Dual Channel FFT analyzer and an impact hammer (which produced an impulsive excitation). The natural frequency and damping of each mode in the frequency range up to 2 kHz were measured. A small computer code was developed to calculate the dynamic response of the blade models for comparison with the experimental results. A comparison of the undamped and damped cases was made for all five blade models at the instant of maximum excitation force. The program was capable of handling models where the excitation forces were distributed arbitrarily along the length of the blade.

  18. Improvements in Virtual Sensors: Using Spatial Information to Estimate Remote Sensing Spectra

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Srivastava, Ashok N.; Stroeve, Julienne

    2005-01-01

    Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. Sometimes these instruments are built in a phased approach, with additional measurement capabilities added in later phases. In other cases, technology may mature to the point that the instrument offers new measurement capabilities that were not planned in the original design of the instrument. In still other cases, high resolution spectral measurements may be too costly to perform on a large sample and therefore lower resolution spectral instruments are used to take the majority of measurements. Many applied science questions that are relevant to the earth science remote sensing community require analysis of enormous amounts of data that were generated by instruments with disparate measurement capabilities. In past work [1], we addressed this problem using Virtual Sensors: a method that uses models trained on spectrally rich (high spectral resolution) data to "fill in" unmeasured spectral channels in spectrally poor (low spectral resolution) data. We demonstrated this method by using models trained on the high spectral resolution Terra MODIS instrument to estimate what the equivalent of the MODIS 1.6 micron channel would be for the NOAA AVHRR2 instrument. The scientific motivation for the simulation of the 1.6 micron channel is to improve the ability of the AVHRR2 sensor to detect clouds over snow and ice. This work contains preliminary experiments demonstrating that the use of spatial information can improve our ability to estimate these spectra.
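    The Virtual Sensors idea of training on spectrally rich data to "fill in" unmeasured channels in spectrally poor data can be sketched with a toy regression. A linear least-squares model is an assumed stand-in for the paper's more sophisticated models, and the data here are synthetic rather than actual MODIS/AVHRR measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "rich" training data: 4 channels both instruments share,
# plus 1 extra channel only the richer instrument measures.
shared = rng.uniform(0.0, 1.0, size=(500, 4))
true_w = np.array([0.5, -0.2, 0.8, 0.1])
extra = shared @ true_w + 0.05  # extra channel depends on shared ones

# Fit: predict the extra channel from the shared channels (+ bias term)
X = np.hstack([shared, np.ones((shared.shape[0], 1))])
coef, *_ = np.linalg.lstsq(X, extra, rcond=None)

# Apply to "poor" data that lacks the extra channel
poor = rng.uniform(0.0, 1.0, size=(10, 4))
estimated = np.hstack([poor, np.ones((10, 1))]) @ coef
```

    The paper's spatial-information extension would enlarge the feature set with measurements from neighboring pixels rather than a single pixel's spectrum.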

  19. Design and application of a CA-BDI model to determine farmers' land-use behavior.

    PubMed

    Liang, Xiaoying; Chen, Hai; Wang, Yanni; Song, Shixiong

    2016-01-01

    The belief-desire-intention (BDI) model has been widely used to construct reasoning systems for complex tasks in dynamic environments. We have designed a capabilities and abilities (CA)-BDI farmer decision-making model, which is an extension of the BDI architecture and includes internal representations for farmer household Capabilities and Abilities. This model is used to explore farmer learning mechanisms and to simulate the bounded rational decisions made by farmer households. Our case study focuses on the Gaoqu Commune of Mizhi County, Shaanxi Province, China, where scallion is one of the main cash crops. After comparing the differences between actual land-use changes from 2007 to 2009 and the simulation results, we analyze the validity of the model and discuss the potential and limitations of the farmer land-use decision-making model under three scenarios. Based on the design and implementation of the model, the following conclusions can be drawn: (1) the CA-BDI framework is an appropriate model for exploring learning mechanisms and simulating bounded rational decisions; and (2) local governments should encourage scallion planting by assisting scallion farmer cooperatives and farmers to understand the market risk, standardize the rules of their cooperation, and supervise the contracts made between scallion cooperatives and farmers.

  20. Precision Information Environment (PIE) for International Safeguards: Pre-Demonstration Development Use Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gastelum, Zoe N.; Henry, Michael J.

    2013-11-13

    In FY2013, the PIE International Safeguards team demonstrated our development progress to U.S. Department of Energy (DOE) staff from the Office of Nonproliferation and International Security (NA-24, our client) and the Office of Defense Nuclear Nonproliferation Research and Development (NA-22). Following the demonstration, the team was asked by our client to complete additional development prior to a planned demonstration at the International Atomic Energy Agency (IAEA), scheduled tentatively for January or spring of 2014. The team discussed the following potential areas for development (in priority order), and will develop them as time and funding permit prior to an IAEA demonstration:
    1. Addition of equipment manuals to PIE-accessible files
    2. Optical character recognition (OCR) of photographed text
    3. Barcode reader with information look-up from a database
    4. Add Facilities to Data Model
    5. Geospatial capabilities with information integration
    Each area is described below in a use case.

  1. Implementing health information exchange for public health reporting: a comparison of decision and risk management of three regional health information organizations in New York state

    PubMed Central

    Phillips, Andrew B; Wilson, Rosalind V; Kaushal, Rainu; Merrill, Jacqueline A

    2014-01-01

    Health information exchange (HIE) is a significant component of healthcare transformation strategies at both the state and national levels. HIE is expected to improve care coordination and advance public health, but implementation is massively complex and involves significant risk. In New York, three regional health information organizations (RHIOs) implemented an HIE use case for public health reporting by demonstrating the capability to deliver accurate responses to electronic queries via a set of services called the Universal Public Health Node. We investigated the process and outcomes of the implementation with a comparative case study. Qualitative analysis was structured around a decision and risk matrix. Although each RHIO had a unique operational model, two common factors influenced risk management and implementation success: leadership capable of agile decision-making and commitment to a strong organizational vision. While all three RHIOs achieved certification for public health reporting, only one has elected to deploy a production version. PMID:23975626

  2. Implementing health information exchange for public health reporting: a comparison of decision and risk management of three regional health information organizations in New York state.

    PubMed

    Phillips, Andrew B; Wilson, Rosalind V; Kaushal, Rainu; Merrill, Jacqueline A

    2014-02-01

    Health information exchange (HIE) is a significant component of healthcare transformation strategies at both the state and national levels. HIE is expected to improve care coordination and advance public health, but implementation is massively complex and involves significant risk. In New York, three regional health information organizations (RHIOs) implemented an HIE use case for public health reporting by demonstrating the capability to deliver accurate responses to electronic queries via a set of services called the Universal Public Health Node. We investigated the process and outcomes of the implementation with a comparative case study. Qualitative analysis was structured around a decision and risk matrix. Although each RHIO had a unique operational model, two common factors influenced risk management and implementation success: leadership capable of agile decision-making and commitment to a strong organizational vision. While all three RHIOs achieved certification for public health reporting, only one has elected to deploy a production version.

  3. Future Interagency Range and Spaceport Technologies (FIRST) Formulation Products: 1. Transformational Spaceport and Range Concept of Operations. 2. F.I.R.S.T. Business Case Analysis

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The Baseline Report captures range and spaceport capabilities at five sites: KSC, CCAFS, VAFB, Wallops, and Kodiak. The Baseline depicts a future state that relies on existing technology, planned upgrades, and straight-line recapitalization at these sites projected through 2030. The report presents an inventory of current spaceport and range capabilities at these five sites. The baseline is the first part of analyzing a business case for a set of capabilities designed to transform U.S. ground and space launch operations toward a single, integrated national "system" of space transportation systems. The second part of the business case compares current capabilities with technologies needed to support the integrated national "system". The final part, a return on investment analysis, identifies the technologies that best lead to the integrated national system and reduce recurring costs. Numerous data sources were used to define and describe the baseline spaceport and range by identifying major systems and elements and describing their capabilities and limitations.

  4. Ensemble streamflow assimilation with the National Water Model.

    NASA Astrophysics Data System (ADS)

    Rafieeinasab, A.; McCreight, J. L.; Noh, S.; Seo, D. J.; Gochis, D.

    2017-12-01

    Through case studies of flooding across the US, we compare the performance of the National Water Model (NWM) data assimilation (DA) scheme to that of a newly implemented ensemble Kalman filter approach. The NOAA National Water Model (NWM) is an operational implementation of the community WRF-Hydro modeling system. As of August 2016, the NWM forecasts of distributed hydrologic states and fluxes (including soil moisture, snowpack, ET, and ponded water) over the contiguous United States have been publicly disseminated by the National Centers for Environmental Prediction (NCEP). It also provides streamflow forecasts at more than 2.7 million river reaches up to 30 days in advance. The NWM employs a nudging scheme to assimilate more than 6,000 USGS streamflow observations and provide initial conditions for its forecasts. A problem with nudging is that the forecasts relax quickly back to the open-loop bias. This has been partially addressed by an experimental bias correction approach, which was found to have issues with phase errors during flooding events. In this work, we present an ensemble streamflow data assimilation approach combining new channel-only capabilities of the NWM and HydroDART (a coupling of the offline WRF-Hydro model and NCAR's Data Assimilation Research Testbed; DART). Our approach focuses on the single model state of discharge and incorporates error distributions on channel influxes (overland and groundwater) in the assimilation via an ensemble Kalman filter (EnKF). In order to avoid filter degeneracy associated with a limited ensemble size at large scale, DART's covariance inflation (Anderson, 2009) and localization capabilities are implemented and evaluated. The current NWM data assimilation scheme is compared to preliminary results from the EnKF application for several flooding case studies across the US.
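    The EnKF analysis step at the heart of such an approach can be sketched for a single gauge. This is a generic stochastic (perturbed-observation) EnKF update, not HydroDART code, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

n_ens = 50
ensemble = rng.normal(100.0, 15.0, size=n_ens)  # forecast discharge (m^3/s)
obs = 120.0          # observed streamflow at the gauge
obs_err_sd = 5.0     # observation error standard deviation

# Kalman gain estimated from ensemble statistics (scalar state, scalar obs)
var_f = ensemble.var(ddof=1)
gain = var_f / (var_f + obs_err_sd**2)

# Perturbed-observation update: each member sees a noisy copy of the obs
perturbed = obs + rng.normal(0.0, obs_err_sd, size=n_ens)
analysis = ensemble + gain * (perturbed - ensemble)
```

    The analysis ensemble mean moves toward the observation and its spread contracts; the covariance inflation and localization mentioned above are the standard remedies when small ensembles make these statistics unreliable.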

  5. JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning

    NASA Astrophysics Data System (ADS)

    Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro

    2015-12-01

    We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.

  6. A fuzzy MCDM model with objective and subjective weights for evaluating service quality in hotel industries

    NASA Astrophysics Data System (ADS)

    Zoraghi, Nima; Amiri, Maghsoud; Talebi, Golnaz; Zowghi, Mahdi

    2013-12-01

    This paper presents a fuzzy multi-criteria decision-making (FMCDM) model by integrating both subjective and objective weights for ranking and evaluating the service quality in hotels. The objective method selects weights of criteria through mathematical calculation, while the subjective method uses judgments of decision makers. In this paper, we use a combination of weights obtained by both approaches in evaluating service quality in hotel industries. A real case study that considered ranking five hotels is illustrated. Examples are shown to indicate capabilities of the proposed method.
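    A common way to integrate objective and subjective weights, in the spirit of the model above, is to compute objective weights mathematically, elicit subjective weights from decision makers, and combine the two. The entropy weight method below is an assumed stand-in for the paper's objective calculation, and the scores and weights are hypothetical:

```python
import numpy as np

X = np.array([  # hotels x criteria scores (hypothetical data)
    [7.0, 8.0, 6.0],
    [6.0, 9.0, 7.0],
    [8.0, 6.0, 8.0],
    [5.0, 7.0, 9.0],
    [9.0, 5.0, 6.0],
])

# Objective weights via the entropy weight method
P = X / X.sum(axis=0)                       # column-normalized proportions
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)  # one entropy value per criterion
w_obj = (1.0 - entropy) / (1.0 - entropy).sum()

# Subjective weights (hypothetical expert judgments)
w_sub = np.array([0.5, 0.3, 0.2])

# Multiplicative combination, renormalized to sum to one
w = w_obj * w_sub / (w_obj * w_sub).sum()

# Simple weighted-sum ranking of the hotels (best first)
scores = X @ w
ranking = np.argsort(-scores)
```

    Criteria with more dispersion across alternatives receive larger objective weights, so the combination tempers expert judgment with what the data can actually discriminate.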

  7. Implementation of a finite-amplitude method in a relativistic meson-exchange model

    NASA Astrophysics Data System (ADS)

    Sun, Xuwei; Lu, Dinghui

    2017-08-01

    The finite-amplitude method is a feasible numerical approach to large-scale random phase approximation (RPA) calculations. It avoids the storage and calculation of residual interaction elements, as well as the diagonalization of the RPA matrix, which becomes prohibitive when the configuration space is huge. In this work we implement the finite-amplitude method in a relativistic meson-exchange mean field model with axial symmetry. The direct variation approach makes our FAM scheme capable of being extended to the multipole excitation case.

  8. Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics.

    PubMed

    Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2017-03-04

    Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model in Xinjiang, three grey models, namely, the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)₄ model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model performs better, not only in model fitting but also in forecasting. Furthermore, considering the stability and the modeling precision in the long run, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of identifying the future tendency. The number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before slowly declining and eventually dying out.
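    The traditional GM(1,1) model mentioned above follows a standard recipe: accumulate the series, fit the grey differential equation dx/dt + ax = b by least squares, and difference the fitted curve back. A minimal sketch of standard GM(1,1) only (not the PECGM(1,1) or FGM(1,1) variants; the case counts are invented):

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Fit GM(1,1) to series x0 and return fitted values plus forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated (AGO) series
    z = 0.5 * (x1[:-1] + x1[1:])              # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse AGO

series = [120.0, 132.0, 146.0, 160.0, 177.0]  # hypothetical annual counts
fitted_and_forecast = gm11_forecast(series, steps=3)
```

    GM(1,1) assumes roughly exponential behavior, which is why the periodic (PECGM) and Fourier (FGM) extensions exist for series with seasonal residuals.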

  9. Results of Evaluation of Solar Thermal Propulsion

    NASA Technical Reports Server (NTRS)

    Woodcock, Gordon; Byers, Dave

    2003-01-01

    The solar thermal propulsion evaluation reported here relied on prior research for all information on solar thermal propulsion technology and performance. Sources included personal contacts with experts in the field in addition to published reports and papers. Mission performance models were created based on this information in order to estimate performance and mass characteristics of solar thermal propulsion systems. Mission analysis was performed for a set of reference missions to assess the capabilities and benefits of solar thermal propulsion in comparison with alternative in-space propulsion systems such as chemical and electric propulsion. Mission analysis included estimation of delta V requirements as well as payload capabilities for a range of missions. Launch requirements and costs, and integration into launch vehicles, were also considered. The mission set included representative robotic scientific missions, and potential future NASA human missions beyond low Earth orbit. Commercial communications satellite delivery missions were also included, because if STP technology were selected for that application, frequent use is implied and this would help amortize costs for technology advancement and systems development. A C3 Topper mission was defined, calling for a relatively small STP. The application is to augment the launch energy (C3) available from launch vehicles with their built-in upper stages. Payload masses were obtained from references where available. The communications satellite masses represent the range of payload capabilities for the Delta IV Medium and/or Atlas launch vehicle family. Results indicated that STP could improve payload capability over current systems, but that this advantage cannot be realized except in a few cases because of payload fairing volume limitations on current launch vehicles. 
It was also found that acquiring a more capable (existing) launch vehicle, rather than adding an STP stage, is the most economical in most cases.
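    Delta-V and payload trades of the kind described above typically rest on the Tsiolkovsky rocket equation. The sketch below compares the propellant a high-Isp solar thermal stage would need against a chemical stage for the same burn; all numbers are illustrative assumptions, not values from the study:

```python
import math

def propellant_for_dv(dv_ms, isp_s, m_final_kg, g0=9.80665):
    """Propellant mass needed to give m_final a velocity change dv
    (Tsiolkovsky rocket equation, solved for propellant mass)."""
    mass_ratio = math.exp(dv_ms / (isp_s * g0))
    return m_final_kg * (mass_ratio - 1.0)

# A GTO-to-GEO style burn (assumed dv and payload), comparing an
# assumed solar thermal Isp of ~800 s with a chemical stage at ~320 s.
payload = 2000.0   # kg delivered
dv = 1800.0        # m/s required
prop_stp = propellant_for_dv(dv, 800.0, payload)
prop_chem = propellant_for_dv(dv, 320.0, payload)
```

    The higher specific impulse cuts propellant mass substantially, which is the performance advantage the study weighs against fairing volume limits and launch costs.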

  10. A set-theoretic model reference adaptive control architecture for disturbance rejection and uncertainty suppression with strict performance guarantees

    NASA Astrophysics Data System (ADS)

    Arabi, Ehsan; Gruenwald, Benjamin C.; Yucelen, Tansel; Nguyen, Nhan T.

    2018-05-01

    Research in adaptive control algorithms for safety-critical applications is primarily motivated by the fact that these algorithms have the capability to suppress the effects of adverse conditions resulting from exogenous disturbances, imperfect dynamical system modelling, degraded modes of operation, and changes in system dynamics. Although government and industry agree on the potential of these algorithms in providing safety and reducing vehicle development costs, a major issue is the inability to achieve a priori, user-defined performance guarantees with adaptive control algorithms. In this paper, a new model reference adaptive control architecture for uncertain dynamical systems is presented to address disturbance rejection and uncertainty suppression. The proposed framework is predicated on a set-theoretic adaptive controller construction using generalised restricted potential functions. The key feature of this framework allows the system error bound between the state of an uncertain dynamical system and the state of a reference model, which captures a desired closed-loop system performance, to be less than an a priori, user-defined worst-case performance bound, and hence, it has the capability to enforce strict performance guarantees. Examples are provided to demonstrate the efficacy of the proposed set-theoretic model reference adaptive control architecture.

  11. Sparse intervertebral fence composition for 3D cervical vertebra segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian

    2018-06-01

    Statistical shape models are capable of extracting shape prior information, and are usually utilized to assist the task of segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it also is difficult to achieve satisfactory results for complex shapes. This study proposed a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset with CT images from 20 patients. A quantitative comparison against corresponding reference vertebral segmentation yields an overall mean absolute surface distance of 0.70 mm and a dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performances, and completely eliminates inter-process overlap.
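    The dice similarity index reported above has a standard definition over binary masks, 2|A∩B| / (|A| + |B|); a minimal sketch on toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary segmentation masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy 2D masks: two 6x6 squares offset by one voxel (25-voxel overlap)
pred = np.zeros((10, 10), dtype=bool)
ref = np.zeros((10, 10), dtype=bool)
pred[2:8, 2:8] = True
ref[3:9, 3:9] = True
score = dice(pred, ref)
```

    A score of 1.0 means perfect overlap; the 95.47% figure quoted above indicates near-complete agreement with the reference segmentations.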

  12. An approximate theoretical method for modeling the static thrust performance of non-axisymmetric two-dimensional convergent-divergent nozzles. M.S. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.

    1995-01-01

    An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic Kármán-Pohlhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four different case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the NPAC method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at the design NPR, and to within 0.5 percent at off-design conditions.
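    The one-dimensional compressible flow theory underlying such a method includes the isentropic area-Mach relation, which must be solved numerically for the exit Mach number of a given nozzle. A sketch using standard gas-dynamics relations (not NPAC code; gamma = 1.4 for air):

```python
import math

def area_ratio(M, gamma=1.4):
    """Isentropic area ratio A/A* as a function of Mach number."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return (1.0 / M) * t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def exit_mach(ar, gamma=1.4):
    """Solve the area-Mach relation on the supersonic branch by bisection."""
    lo, hi = 1.0 + 1e-9, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) < ar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M_e = exit_mach(2.0)                     # exit Mach for area ratio 2
p_ratio = (1.0 + 0.2 * M_e**2) ** -3.5   # exit static/total pressure
```

    Comparing the resulting exit static pressure against ambient is what distinguishes the over- and underexpanded operating regimes the abstract refers to.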

  13. Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning

    PubMed Central

    Weisswange, Thomas H.; Rothkopf, Constantin A.; Rodemann, Tobias; Triesch, Jochen

    2011-01-01

    Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises, how it is learned. Here we investigated whether reward dependent learning, that is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model free reinforcement learning algorithm can indeed learn to do cue integration, i.e. weight uncertain cues according to their respective reliabilities and even do so if reliabilities are changing. We also consider the case of causal inference where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward mediated learning could be a driving force for the development of cue integration and causal inference. PMID:21750717
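    Reliability-weighted cue integration, the optimal behavior the learner approximates, has a closed form: each cue is weighted by its inverse variance. A minimal sketch of that target computation (the reinforcement learning algorithm itself is not reproduced here):

```python
def integrate_cues(estimates, variances):
    """Bayesian cue combination: weight each cue estimate by its inverse
    variance (its reliability). The combined variance is lower than that
    of any single cue, which is why integration pays off."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, estimates)) / total
    return mean, 1.0 / total
```

    For example, a cue at 0.0 with variance 1.0 and a cue at 1.0 with variance 3.0 combine to an estimate of 0.25, pulled toward the more reliable cue.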

  14. Electronic business model for small- and medium-sized manufacturing enterprises (SME): a case study

    NASA Astrophysics Data System (ADS)

    Yuen, Karina; Chung, Walter W.

    2001-10-01

    This paper identifies three essential factors (information infrastructure, executive information systems, and a new manufacturing paradigm) that are used to support the development of a new business model for competitiveness. They facilitate changes in organizational structure in support of business transformation. An SME can source a good manufacturing practice using a model of academic-university collaboration to gain competitive advantage in the e-business world. The collaboration enables the change agents to use information systems development as a vehicle to increase the capability of executives in using information and knowledge management to achieve higher responsiveness and customer satisfaction. The case company is used to illustrate the application of a web-based executive information system to interface internal communications with external operations. It explains how a good manufacturing practice may be re-applied by other SMEs to acquire skills as a learning organization growing in an extended enterprise setting.

  15. SedFoam-2.0: a 3-D two-phase flow numerical model for sediment transport

    NASA Astrophysics Data System (ADS)

    Chauchat, Julien; Cheng, Zhen; Nagel, Tim; Bonamy, Cyrille; Hsu, Tian-Jian

    2017-11-01

    In this paper, a three-dimensional two-phase flow solver, SedFoam-2.0, is presented for sediment transport applications. The solver is extended from twoPhaseEulerFoam, available in the 2.1.0 release of the open-source CFD (computational fluid dynamics) toolbox OpenFOAM. In this approach the sediment phase is modeled as a continuum, and constitutive laws have to be prescribed for the sediment stresses. In the proposed solver, two different intergranular stress models are implemented: the kinetic theory of granular flows and the dense granular flow rheology μ(I). For the fluid stress, laminar or turbulent flow regimes can be simulated, and three different turbulence models are available for sediment transport: a simple mixing length model (one-dimensional configurations only), a k-ε model, and a k-ω model. The numerical implementation is demonstrated on four test cases: sedimentation of suspended particles, laminar bed load, sheet flow, and scour at an apron. These test cases illustrate the capabilities of SedFoam-2.0 to deal with complex turbulent sediment transport problems with different combinations of intergranular stress and turbulence models.

  16. A Novel TRM Calculation Method by Probabilistic Concept

    NASA Astrophysics Data System (ADS)

    Audomvongseree, Kulyos; Yokoyama, Akihiko; Verma, Suresh Chand; Nakachi, Yoshiki

    In a new competitive environment, it becomes possible for third parties to access a transmission facility. To efficiently manage the utilization of the transmission network under this structure, a new definition of Available Transfer Capability (ATC) has been proposed. According to the North American Electric Reliability Council (NERC)'s definition, ATC depends on several parameters, i.e., Total Transfer Capability (TTC), Transmission Reliability Margin (TRM), and Capacity Benefit Margin (CBM). This paper is focused on the calculation of TRM, one of the security margins, which is reserved for uncertainty in system conditions. A probabilistic method for calculating TRM is proposed in this paper. Based on the modeling of load forecast error and error in transmission line limitation, various cases of transmission transfer capability and their related probabilistic nature can be calculated. By applying the proposed concept of risk analysis, the appropriate required amount of TRM can be obtained. The objective of this research is to provide realistic information on the actual ability of the network, which may be an alternative choice for system operators in making appropriate decisions in the competitive market. The advantages of the proposed method are illustrated by application to the IEEJ-WEST10 model system.
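    Under the NERC definition cited above, ATC = TTC − TRM − CBM. A Monte Carlo sketch of the probabilistic idea follows; the Gaussian load-forecast and line-limit errors, the function name, and the confidence-level rule are illustrative assumptions, not the paper's exact formulation:

```python
import random

def trm_probabilistic(ttc_nominal, load_error_std, line_limit_std,
                      confidence=0.95, n=20000, seed=1):
    """Monte Carlo estimate of a Transmission Reliability Margin: sample
    the transfer capability under Gaussian load-forecast and line-limit
    errors, then take the margin that keeps the network secure with the
    requested confidence level."""
    rng = random.Random(seed)
    samples = sorted(ttc_nominal + rng.gauss(0.0, line_limit_std)
                     - rng.gauss(0.0, load_error_std)
                     for _ in range(n))
    # Capability value exceeded with probability = confidence
    worst_secure = samples[int((1.0 - confidence) * n)]
    return max(ttc_nominal - worst_secure, 0.0)
```

    With no uncertainty the margin collapses to zero; larger forecast errors demand a larger TRM and hence a smaller ATC.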

  17. Pre- and post-processing for Cosmic/NASTRAN on personal computers and mainframes

    NASA Technical Reports Server (NTRS)

    Kamel, H. A.; Mobley, A. V.; Nagaraj, B.; Watkins, K. W.

    1986-01-01

    An interface between Cosmic/NASTRAN and GIFTS has recently been released, combining the powerful pre- and post-processing capabilities of GIFTS with Cosmic/NASTRAN's analysis capabilities. The interface operates on a wide range of computers, even linking Cosmic/NASTRAN and GIFTS when the two are on different computers. GIFTS offers a wide range of elements for use in model construction, each translated by the interface into the nearest Cosmic/NASTRAN equivalent; and the options of automatic or interactive modelling and loading in GIFTS make pre-processing easy and effective. The interface itself includes three programs: GFTCOS, which creates the Cosmic/NASTRAN input deck (and, if desired, control deck) from the GIFTS Unified Data Base; COSGFT, which translates the displacements from the Cosmic/NASTRAN analysis back into GIFTS; and HOSTR, which handles stress computations for a few higher-order elements available in the interface but not supported by the GIFTS processor STRESS. Finally, the versatile display options in GIFTS post-processing allow the user to examine the analysis results through an especially wide range of capabilities, including such possibilities as creating composite loading cases, plotting in color, and animating the analysis.

  18. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence.

    PubMed

    Korkali, Mert; Veneman, Jason G; Tivnan, Brian F; Bagrow, James P; Hines, Paul D H

    2017-03-20

    Increased interconnection between critical infrastructure networks, such as electric power and communications systems, has important implications for infrastructure reliability and security. Others have shown that increased coupling between networks that are vulnerable to internetwork cascading failures can increase vulnerability. However, the mechanisms of cascading in these models differ from those in real systems and such models disregard new functions enabled by coupling, such as intelligent control during a cascade. This paper compares the robustness of simple topological network models to models that more accurately reflect the dynamics of cascading in a particular case of coupled infrastructures. First, we compare a topological contagion model to a power grid model. Second, we compare a percolation model of internetwork cascading to three models of interdependent power-communication systems. In both comparisons, the more detailed models suggest substantially different conclusions, relative to the simpler topological models. In all but the most extreme case, our model of a "smart" power network coupled to a communication system suggests that increased power-communication coupling decreases vulnerability, in contrast to the percolation model. Together, these results suggest that robustness can be enhanced by interconnecting networks with complementary capabilities if modes of internetwork failure propagation are constrained.
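    The contrast between simple contagion models and richer coupled-infrastructure models can be illustrated with a toy interdependent cascade. The failure rules below (a neighbor-fraction threshold plus cross-network dependency links) are a generic illustration, not the paper's power-grid or percolation models:

```python
from collections import defaultdict

def interdependent_cascade(edges_a, edges_b, couplings, seeds, theta=0.5):
    """Threshold contagion on two coupled networks: a node fails if the
    fraction of its failed intra-network neighbors reaches `theta`, or if
    the node it depends on in the other network fails."""
    adj = defaultdict(set)
    for u, v in edges_a + edges_b:
        adj[u].add(v)
        adj[v].add(u)
    dep = {}
    for a, b in couplings:          # mutual dependence across networks
        dep[a] = b
        dep[b] = a
    failed = set(seeds)
    frontier = list(seeds)
    while frontier:
        frontier = []
        for node in list(adj):
            if node in failed:
                continue
            nbrs = adj[node]
            frac = len(nbrs & failed) / len(nbrs) if nbrs else 0.0
            if frac >= theta or dep.get(node) in failed:
                failed.add(node)
                frontier.append(node)
    return failed
```

    Varying `theta` and the density of couplings shows how the same topology can either contain a failure or let it propagate across both networks.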

  19. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence

    NASA Astrophysics Data System (ADS)

    Korkali, Mert; Veneman, Jason G.; Tivnan, Brian F.; Bagrow, James P.; Hines, Paul D. H.

    2017-03-01

    Increased interconnection between critical infrastructure networks, such as electric power and communications systems, has important implications for infrastructure reliability and security. Others have shown that increased coupling between networks that are vulnerable to internetwork cascading failures can increase vulnerability. However, the mechanisms of cascading in these models differ from those in real systems and such models disregard new functions enabled by coupling, such as intelligent control during a cascade. This paper compares the robustness of simple topological network models to models that more accurately reflect the dynamics of cascading in a particular case of coupled infrastructures. First, we compare a topological contagion model to a power grid model. Second, we compare a percolation model of internetwork cascading to three models of interdependent power-communication systems. In both comparisons, the more detailed models suggest substantially different conclusions, relative to the simpler topological models. In all but the most extreme case, our model of a “smart” power network coupled to a communication system suggests that increased power-communication coupling decreases vulnerability, in contrast to the percolation model. Together, these results suggest that robustness can be enhanced by interconnecting networks with complementary capabilities if modes of internetwork failure propagation are constrained.

  20. Reducing Cascading Failure Risk by Increasing Infrastructure Network Interdependence

    PubMed Central

    Korkali, Mert; Veneman, Jason G.; Tivnan, Brian F.; Bagrow, James P.; Hines, Paul D. H.

    2017-01-01

    Increased interconnection between critical infrastructure networks, such as electric power and communications systems, has important implications for infrastructure reliability and security. Others have shown that increased coupling between networks that are vulnerable to internetwork cascading failures can increase vulnerability. However, the mechanisms of cascading in these models differ from those in real systems and such models disregard new functions enabled by coupling, such as intelligent control during a cascade. This paper compares the robustness of simple topological network models to models that more accurately reflect the dynamics of cascading in a particular case of coupled infrastructures. First, we compare a topological contagion model to a power grid model. Second, we compare a percolation model of internetwork cascading to three models of interdependent power-communication systems. In both comparisons, the more detailed models suggest substantially different conclusions, relative to the simpler topological models. In all but the most extreme case, our model of a “smart” power network coupled to a communication system suggests that increased power-communication coupling decreases vulnerability, in contrast to the percolation model. Together, these results suggest that robustness can be enhanced by interconnecting networks with complementary capabilities if modes of internetwork failure propagation are constrained. PMID:28317835

  1. Simulator for heterogeneous dataflow architectures

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    1993-01-01

    A new simulator is developed to simulate the execution of an algorithm graph in accordance with the Algorithm to Architecture Mapping Model (ATAMM) rules. ATAMM is a Petri net model which describes the periodic execution of large-grained, data-independent dataflow graphs and which provides predictable, steady-state, time-optimized performance. This simulator extends the ATAMM simulation capability from a heterogeneous set of resources, or functional units, to a more general heterogeneous architecture. Simulation test cases show that the simulator accurately executes the ATAMM rules for both a heterogeneous architecture and a homogeneous architecture, which is the special case of only one processor type. The simulator forms one tool in an ATAMM Integrated Environment which contains other tools for graph entry, graph modification for performance optimization, and playback of simulations for analysis.

  2. A Unified Framework for Complex Networks with Degree Trichotomy Based on Markov Chains.

    PubMed

    Hui, David Shui Wing; Chen, Yi-Chao; Zhang, Gong; Wu, Weijie; Chen, Guanrong; Lui, John C S; Li, Yingtao

    2017-06-16

    This paper establishes a Markov chain model as a unified framework for describing the evolution processes of complex networks. The unique feature of the proposed model is its capability to address the formation mechanism that can reflect the "trichotomy" observed in degree distributions, based on which closed-form solutions can be derived. Important special cases of the proposed unified framework are the classical models, including Poisson, exponential, and power-law distributed networks. Both simulation and experimental results demonstrate a good match of the proposed model with real datasets, showing its superiority over the classical models. Implications of the model for various applications, including citation analysis, online social networks, and vehicular network design, are also discussed in the paper.
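    How an attachment rule shapes the degree distribution can be illustrated with a simple growth simulation; mixing preferential and uniform attachment is a generic mechanism that interpolates between power-law-like and exponential-like tails, and is not the paper's specific Markov chain:

```python
import random

def grow_network(n, p_pref, seed=0):
    """Grow a network one node at a time; each newcomer links to an
    existing node chosen preferentially by degree with probability
    `p_pref`, otherwise uniformly at random."""
    rng = random.Random(seed)
    degree = [1, 1]        # start from a single edge between nodes 0 and 1
    stubs = [0, 1]         # degree-weighted list: node i appears degree[i] times
    for new in range(2, n):
        if rng.random() < p_pref:
            target = rng.choice(stubs)          # preferential attachment
        else:
            target = rng.randrange(len(degree)) # uniform attachment
        degree.append(1)
        degree[target] += 1
        stubs.extend([new, target])
    return degree
```

    Sweeping `p_pref` from 0 to 1 and inspecting the tail of the returned degree list reproduces, qualitatively, the exponential-to-power-law transition the trichotomy refers to.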

  3. Physically based modeling of rainfall-triggered landslides: a case study in the Luquillo Forest, Puerto Rico

    NASA Astrophysics Data System (ADS)

    Lepore, C.; Arnone, E.; Noto, L. V.; Sivandran, G.; Bras, R. L.

    2013-01-01

    This paper presents the development of a rainfall-triggered landslide module within a physically based, spatially distributed ecohydrologic model. The model, Triangulated Irregular Networks Real-time Integrated Basin Simulator and VEGetation Generator for Interactive Evolution (tRIBS-VEGGIE), is capable of a sophisticated description of many hydrological processes; in particular, soil moisture dynamics are resolved at the temporal and spatial resolution required to examine the triggering mechanisms of rainfall-induced landslides. The applicability of the tRIBS-VEGGIE model to a tropical environment is shown with an evaluation of its performance against direct observations made within the Luquillo Forest (the study area). The newly developed landslide module builds upon the previous version of the tRIBS landslide component. This new module utilizes a numerical solution to the Richards equation to better represent the time evolution of soil moisture transport through the soil column. Moreover, the new landslide module utilizes an extended formulation of the Factor of Safety (FS) to correctly quantify the role of matric suction in slope stability and to account for unsaturated conditions in the evaluation of FS. The new modeling framework couples the capabilities of the detailed hydrologic model to describe soil moisture dynamics with the Infinite Slope model, creating a powerful tool for the assessment of landslide risk.
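    The Infinite Slope model underlying the FS calculation has a compact closed form. The sketch below uses the textbook formulation, in which negative pore pressure (matric suction) adds to the effective normal stress and so raises FS; it is not the paper's extended FS formulation:

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, pore_pressure):
    """Infinite-slope factor of safety.
    c: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: soil unit weight (kN/m^3), z: slip depth (m),
    beta_deg: slope angle (deg), pore_pressure: u_w (kPa);
    negative u_w (matric suction) increases FS in unsaturated soil."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * z * math.cos(beta) ** 2 - pore_pressure  # effective stress
    shear = gamma * z * math.sin(beta) * math.cos(beta)       # driving stress
    return (c + normal * math.tan(phi)) / shear
```

    For a cohesionless, dry slope this reduces to the familiar FS = tan(phi)/tan(beta), so a slope at its friction angle sits exactly at FS = 1.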

  4. Developing an Integrated Model Framework for the Assessment of Sustainable Agricultural Residue Removal Limits for Bioenergy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Muth, Jr.; Jared Abodeely; Richard Nelson

    Agricultural residues have significant potential as a feedstock for bioenergy production, but removing these residues can have negative impacts on soil health. Models and datasets that can support decisions about sustainable agricultural residue removal are available; however, no tools currently exist capable of simultaneously addressing all environmental factors that can limit availability of residue. The VE-Suite model integration framework has been used to couple a set of environmental process models to support agricultural residue removal decisions. The RUSLE2, WEPS, and Soil Conditioning Index models have been integrated. A disparate set of databases providing the soils, climate, and management practice data required to run these models has also been integrated. The integrated system has been demonstrated for two example cases. First, an assessment using high-spatial-fidelity crop yield data has been run for a single farm. This analysis shows the significant variance in sustainably accessible residue across a single farm and crop year. The second example is an aggregate assessment of agricultural residues available in the state of Iowa. This implementation of the integrated systems model demonstrates the capability to run the vast range of scenarios required to represent a large geographic region.

  5. Implementing a Nuclear Power Plant Model for Evaluating Load-Following Capability on a Small Grid

    NASA Astrophysics Data System (ADS)

    Arda, Samet Egemen

    A pressurized water reactor (PWR) nuclear power plant (NPP) model is introduced into the Positive Sequence Load Flow (PSLF) software by General Electric in order to evaluate the load-following capability of NPPs. The nuclear steam supply system (NSSS) consists of a reactor core, hot and cold legs, plenums, and a U-tube steam generator. These physical systems are represented by mathematical models utilizing a state-variable lumped-parameter approach. A steady-state control program for the reactor, and simple turbine and governor models, are also developed. The adequacy of the isolated reactor core, the isolated steam generator, and the complete PWR models is tested in Matlab/Simulink, and dynamic responses are compared with test results obtained from the H. B. Robinson NPP. Test results illustrate that the developed models represent the dynamic features of real physical systems and are capable of predicting responses to small perturbations of external reactivity and steam valve opening. Subsequently, the NSSS representation is incorporated into PSLF and coupled with built-in excitation system and generator models. Different simulation cases are run in which sudden loss of generation occurs in a small power system that includes hydroelectric and natural gas power plants besides the developed PWR NPP. The conclusion is that the NPP can respond to a disturbance in the power system without exceeding any design or safety limits if appropriate operational conditions, such as achieving NPP turbine control by adjusting the speed of the steam valve, are met. In other words, the NPP can participate in the control of system frequency and improve overall power system performance.
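    State-variable lumped-parameter reactor models of the kind described typically start from point kinetics. The one-delayed-group sketch below, with illustrative parameter values and a plain explicit Euler integrator, shows the idea but is not the thesis's actual core model:

```python
def point_kinetics(rho, n0=1.0, beta=0.0065, Lambda=1e-4, lam=0.08,
                   dt=1e-4, t_end=1.0):
    """One-delayed-group point kinetics integrated by explicit Euler:
      dn/dt = ((rho - beta)/Lambda) * n + lam * C
      dC/dt = (beta/Lambda) * n - lam * C
    rho: external reactivity, beta: delayed-neutron fraction,
    Lambda: prompt generation time (s), lam: precursor decay constant (1/s).
    Returns relative power n(t_end) starting from equilibrium."""
    n = n0
    C = beta * n0 / (Lambda * lam)   # equilibrium precursor concentration
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / Lambda) * n + lam * C
        dC = (beta / Lambda) * n - lam * C
        n += dn * dt
        C += dC * dt
        t += dt
    return n
```

    A small positive reactivity step (rho below beta) produces the expected prompt jump followed by a slow rise, the kind of small-perturbation response the abstract says the models predict.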

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I.; Vogelmann, Andrew M.; Cheng, Xiaoping

    The Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM’s extensive suite of measurements. This modeling capability has been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project. The initial focus of LASSO is on shallow convection at the ARM Southern Great Plains (SGP) Climate Research Facility. The availability of LES simulations with concurrent observations will serve many purposes. LES helps bridge the scale gap between DOE ARM observations and models, and the use of routine LES adds value to observations. It provides a self-consistent representation of the atmosphere and a dynamical context for the observations. Further, it elucidates unobservable processes and properties. LASSO will generate a simulation library for researchers that enables statistical approaches beyond a single-case mentality. It will also provide tools necessary for modelers to reproduce the LES and conduct their own sensitivity experiments. Many different uses are envisioned for the combined LASSO LES and observational library. For an observationalist, LASSO can help inform instrument remote sensing retrievals, conduct Observation System Simulation Experiments (OSSEs), and test implications of radar scan strategies or flight paths. For a theoretician, LASSO will help calculate estimates of fluxes and co-variability of values, and test relationships without having to run the model oneself. For a modeler, LASSO will help one know ahead of time which days have good forcing, have co-registered observations at high-resolution scales, and have simulation inputs and corresponding outputs to test parameterizations. Further details on the overall LASSO project are available at https://www.arm.gov/capabilities/modeling/lasso.

  7. Initial Multidisciplinary Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Ozoroski, L. P.; Geiselhart, K. A.; Padula, S. L.; Li, W.; Olson, E. D.; Campbell, R. L.; Shields, E. W.; Berton, J. J.; Gray, J. S.; Jones, S. M.; hide

    2010-01-01

    Within the Supersonics (SUP) Project of the Fundamental Aeronautics Program (FAP), an initial multidisciplinary design and analysis framework has been developed. A set of low- and intermediate-fidelity discipline design and analysis codes was integrated within a multidisciplinary design and analysis framework and demonstrated on two challenging test cases. The first test case demonstrates an initial capability to design for low boom and performance. The second test case demonstrates rapid assessment of a well-characterized design. The current system has been shown to greatly increase design and analysis speed and capability, and many future areas for development were identified. This work has established a state-of-the-art capability for immediate use by supersonic concept designers and systems analysts at NASA, while also providing a strong base to build upon for future releases as more multifidelity capabilities are developed and integrated.

  8. Defense Acquisition and the Case of the Joint Capabilities Technology Demonstration Office: Ad Hoc Problem Solving as a Mechanism for Adaptive Change

    DTIC Science & Technology

    2013-04-01

    Defense Acquisition and the Case of the Joint Capabilities Technology Demonstration Office: Ad Hoc Problem Solving as a Mechanism for Adaptive Change. Kathryn Aten and John T. Dillard, Naval... This report describes the preliminary analysis and findings of our study exploring what drives successful organizational adaptation in the context of technology transition and acquisition.

  9. Defense Acquisition and the Case of the Joint Capabilities Technology Demonstration Office: Ad Hoc Problem Solving as a Mechanism for Adaptive Change

    DTIC Science & Technology

    2013-10-01

    Defense Acquisition and the Case of the Joint Capabilities Technology Demonstration Office: Ad Hoc Problem Solving as a Mechanism for Adaptive Change. This report describes the preliminary analysis and findings of our study exploring what drives successful organizational adaptation in the context of technology transition and acquisition within the…

  10. Healthcare teams over the Internet: programming a certificate-based approach.

    PubMed

    Georgiadis, Christos K; Mavridis, Ioannis K; Pangalos, George I

    2003-07-01

    Healthcare environments are a representative case of collaborative environments, since individuals (e.g. doctors) in many cases collaborate in order to provide care to patients in a more proficient way. At the same time, modern healthcare institutions are increasingly interested in sharing access to their information resources in the networked environment. Healthcare applications over the Internet offer an attractive communication infrastructure at the worldwide level, but with a noticeably high factor of risk. Security has, therefore, become a major concern. However, although an adequate level of security can be achieved through digital certificates, provided an appropriate security model is used, additional security considerations are needed to deal efficiently with the above team-work concerns. The already-known Hybrid Access Control (HAC) security model supports and efficiently handles healthcare teams with active security capabilities and is capable of exploiting the benefits of certificate technology. In this paper we present the way to encode the appropriate authoritative information in various types of certificates, as well as the overall operational architecture of the implemented access control system for healthcare collaborative environments over the Internet. A pilot implementation of the proposed methodology in a major Greek hospital has shown the applicability of the proposals and the flexibility of the access control provided.

  11. Maximizing Health or Sufficient Capability in Economic Evaluation? A Methodological Experiment of Treatment for Drug Addiction.

    PubMed

    Goranitis, Ilias; Coast, Joanna; Day, Ed; Copello, Alex; Freemantle, Nick; Frew, Emma

    2017-07-01

    Conventional practice within the United Kingdom and beyond is to conduct economic evaluations with "health" as the evaluative space and "health maximization" as the decision-making rule. However, there is increasing recognition that this evaluative framework may not always be appropriate, particularly within public health and social care contexts. This article presents a methodological case study designed to explore the impact of changing the evaluative space within an economic evaluation from health to capability well-being, and the decision-making rule from health maximization to the maximization of sufficient capability. Capability well-being is an evaluative space grounded in Amartya Sen's capability approach, which assesses well-being based on individuals' ability to do and be the things they value in life. Sufficient capability is an egalitarian approach to decision making that aims to ensure everyone in society achieves a normatively sufficient level of capability well-being. The case study is treatment for drug addiction, and the cost-effectiveness of two psychological interventions relative to usual care is assessed using data from a pilot trial. Analyses are undertaken from a health care and a government perspective. For the purpose of the study, quality-adjusted life years (measured using the EQ-5D-5L) and years of full capability equivalent and years of sufficient capability equivalent (both measured using the ICECAP-A [ICEpop CAPability measure for Adults]) are estimated. The study concludes that different evaluative spaces and decision-making rules have the potential to offer opposing treatment recommendations. The implications for policy makers are discussed.
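    The difference between the two decision rules can be illustrated with toy aggregation functions: QALYs integrate a utility score over time, whereas a sufficiency rule credits capability only up to a threshold. The threshold value and capping rule below are illustrative assumptions, not the study's calibrated ICECAP-A tariff or its actual sufficiency level:

```python
def qalys(utilities, years):
    """Quality-adjusted life years: area under the utility-time profile."""
    return sum(u * t for u, t in zip(utilities, years))

def years_of_sufficient_capability(capabilities, years, threshold=0.8):
    """Credit capability only up to a sufficiency threshold, rescaled so
    that sufficiency counts as one full year (illustrative rule only)."""
    return sum(min(c, threshold) / threshold * t
               for c, t in zip(capabilities, years))
```

    Under the sufficiency rule, gains above the threshold add nothing, so an intervention that lifts the worst-off toward sufficiency can outrank one that further improves people already above it, which is how opposing treatment recommendations can arise.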

  12. Lattice Boltzmann scheme for mixture modeling: analysis of the continuum diffusion regimes recovering Maxwell-Stefan model and incompressible Navier-Stokes equations.

    PubMed

    Asinari, Pietro

    2009-11-01

    A finite difference lattice Boltzmann scheme for homogeneous mixture modeling, which recovers the Maxwell-Stefan diffusion model in the continuum limit without the restriction of the mixture-averaged diffusion approximation, was recently proposed [P. Asinari, Phys. Rev. E 77, 056706 (2008)]. The theoretical basis is the Bhatnagar-Gross-Krook-type kinetic model for gas mixtures [P. Andries, K. Aoki, and B. Perthame, J. Stat. Phys. 106, 993 (2002)]. In the present paper, the recovered macroscopic equations in the continuum limit are systematically investigated by varying the ratio between the characteristic diffusion speed and the characteristic barycentric speed. It turns out that the diffusion speed must be at least one order of magnitude (in terms of Knudsen number) smaller than the barycentric speed in order to recover the Navier-Stokes equations for mixtures in the incompressible limit. Some further numerical tests are also reported. In particular, (1) the solvent and dilute test cases are considered, because they are limiting cases in which the Maxwell-Stefan model automatically reduces to the Fickian case. Moreover, (2) some tests based on the Stefan diffusion tube are reported to prove the complete capabilities of the proposed scheme in solving Maxwell-Stefan diffusion problems. The proposed scheme agrees well with the expected theoretical results.

  13. Data-driven model for the assessment of Mycobacterium tuberculosis transmission in evolving demographic structures

    PubMed Central

    Arregui, Sergio; Marinova, Dessislava; Sanz, Joaquín

    2018-01-01

    In the case of tuberculosis (TB), the capabilities of epidemic models to produce quantitatively robust forecasts are limited by multiple hindrances. Among these, understanding the complex relationship between disease epidemiology and populations’ age structure has been highlighted as one of the most relevant. TB dynamics depends on age in multiple ways, some of which are traditionally simplified in the literature. That is the case for the heterogeneities in contact intensity among different age strata, which are common to all airborne diseases but still typically neglected in the TB case. Furthermore, while the demographic structures of many countries are rapidly aging, demographic dynamics are pervasively ignored when modeling TB spreading. In this work, we present a TB transmission model that incorporates country-specific demographic prospects and empirical contact data around a data-driven description of TB dynamics. Using our model, we find that the inclusion of demographic dynamics is followed by an increase in the burden levels predicted for the next decades in the areas of the world that are hit hardest by the disease today. Similarly, we show that considering realistic patterns of contact among individuals in different age strata reshapes the transmission patterns reproduced by the models, a result with potential implications for the design of age-focused epidemiological interventions. PMID:29563223
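    The way an empirical contact matrix enters an age-structured transmission model is through the force of infection, λ_a = β Σ_b C[a][b] I_b / N_b. A generic sketch of that single step, not the paper's full TB natural-history model:

```python
def force_of_infection(beta, contacts, infectious, population):
    """Age-specific force of infection for an age-structured model:
    lambda_a = beta * sum_b C[a][b] * I_b / N_b, where C is an empirical
    contact matrix (rows: susceptible age group, columns: contacted group),
    I the infectious counts, and N the group populations."""
    n_ages = len(contacts)
    return [beta * sum(contacts[a][b] * infectious[b] / population[b]
                       for b in range(n_ages))
            for a in range(n_ages)]
```

    Replacing a homogeneous-mixing assumption (a constant matrix) with an empirical, assortative matrix redistributes infection risk across age strata, which is the reshaping of transmission patterns the abstract describes.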

  14. Integrated surface/subsurface permafrost thermal hydrology: Model formulation and proof-of-concept simulations

    DOE PAGES

    Painter, Scott L.; Coon, Ethan T.; Atchley, Adam L.; ...

    2016-08-11

    The need to understand potential climate impacts and feedbacks in Arctic regions has prompted recent interest in modeling of permafrost dynamics in a warming climate. A new fine-scale integrated surface/subsurface thermal hydrology modeling capability is described and demonstrated in proof-of-concept simulations. The new modeling capability combines a surface energy balance model with recently developed three-dimensional subsurface thermal hydrology models and new models for nonisothermal surface water flows and snow distribution in the microtopography. Surface water flows are modeled using the diffusion wave equation extended to include energy transport and phase change of ponded water. Variation of snow depth in the microtopography, physically the result of wind scour, is also modeled heuristically with a diffusion wave equation. The multiple surface and subsurface processes are implemented by leveraging highly parallel community software. Fully integrated thermal hydrology simulations on the tilted open book catchment, an important test case for integrated surface/subsurface flow modeling, are presented. Fine-scale 100-year projections of the integrated permafrost thermal hydrological system on an ice wedge polygon at Barrow, Alaska in a warming climate are also presented. Finally, these simulations demonstrate the feasibility of microtopography-resolving, process-rich simulations as a tool to help understand possible future evolution of the carbon-rich Arctic tundra in a warming climate.

  15. Cammp Team

    NASA Technical Reports Server (NTRS)

    Evertt, Shonn F.; Collins, Michael; Hahn, William

    2008-01-01

    The International Space Station (ISS) Configuration Analysis Modeling and Mass Properties (CAMMP) Team is presenting a demo of certain CAMMP capabilities at a Booz Allen Hamilton conference in San Antonio. The team will be showing pictures of low-fidelity, simplified ISS models, but no dimensions or technical data. The presentation will include a brief description of the contract and task, a description and picture of the Topology, a description of Generic Ground Rules and Constraints (GGR&C), a description of Stage Analysis with constraints applied, and will wrap up with a description of other tasks such as Special Studies, Cable Routing, etc. The models include conceptual Crew Exploration Vehicle (CEV) and Lunar Lander images and animations created for promotional purposes, which are based entirely on public-domain conceptual images from public NASA web sites and publicly available magazine articles and are not based on any actual designs, measurements, or 3D models. The Mars rover and lander images are entirely conceptual and are not based on any NASA designs or data. The demonstration includes High Fidelity Computer Aided Design (CAD) models of ISS provided by the ISS 3D CAD Team, which will be used in a visual display to demonstrate the capabilities of the Teamcenter Visualization software. The demonstration will include 3D views of the CAD models, including random measurements taken to demonstrate the measurement tool. A 3D PDF file of the Blue Book fidelity assembly-complete model with no vehicles attached will be demonstrated. The 3D zoom and rotation will be displayed, as well as random measurements from the measurement tool. The External Configuration Analysis and Tracking Tool (ExCATT) Microsoft Access Database will be demonstrated to show its capabilities to organize and track hardware on ISS. The data included will be part numbers, serial numbers, and historical, current, and future locations of external hardware components on station. It includes dates of all external ISS events and flights and the associated hardware changes for each event. The hardware location information does not always reveal the exact location of the hardware, only the general location; in some cases the location is a module or carrier, in other cases a WIF socket, handrail, or attach point. Only small portions of the data will be displayed for demonstration purposes.

  16. An asymptotic membrane model for wrinkling of very thin films

    NASA Astrophysics Data System (ADS)

    Battista, Antonio; Hamdouni, Aziz; Millet, Olivier

    2018-05-01

    In this work, a formal derivation of a two-dimensional membrane theory, similar to the Landau-Lifshitz model, is performed via an asymptotic development of the weak formulation of the three-dimensional equations of elasticity. Some interesting aspects of the derived model are investigated, in particular the property of yielding a hyperbolic equation for the out-of-plane displacement under a certain class of boundary conditions and loads. Some simple cases are analyzed to show the relevant aspects of the model and the phenomenology that can be addressed. In particular, it is shown how this mathematical formulation is capable of describing the instabilities known as wrinkling, often observed in the buckling of very thin membranes.

  17. Definition of common support equipment and space station interface requirements for IOC model technology experiments

    NASA Technical Reports Server (NTRS)

    Russell, Richard A.; Waiss, Richard D.

    1988-01-01

    A study was conducted to identify the common support equipment and Space Station interface requirements for the IOC (initial operating capabilities) model technology experiments. In particular, each principal investigator for the proposed model technology experiment was contacted and visited for technical understanding and support for the generation of the detailed technical backup data required for completion of this study. Based on the data generated, a strong case can be made for a dedicated technology experiment command and control work station consisting of a command keyboard, cathode ray tube, data processing and storage, and an alert/annunciator panel located in the pressurized laboratory.

  18. Advanced laser modeling with BLAZE multiphysics

    NASA Astrophysics Data System (ADS)

    Palla, Andrew D.; Carroll, David L.; Gray, Michael I.; Suzuki, Lui

    2017-01-01

    The BLAZE Multiphysics™ software simulation suite was specifically developed to model highly complex multiphysical systems in a computationally efficient and highly scalable manner. These capabilities are of particular use when applied to the complexities associated with high energy laser systems that combine subsonic/transonic/supersonic fluid dynamics, chemically reacting flows, laser electronics, heat transfer, optical physics, and in some cases plasma discharges. In this paper we present detailed cw and pulsed gas laser calculations using the BLAZE model with comparisons to data. Simulations of DPAL, XPAL, ElectricOIL (EOIL), and the optically pumped rare gas laser were found to be in good agreement with experimental data.

  19. Mitochondrial fusion through membrane automata.

    PubMed

    Giannakis, Konstantinos; Andronikos, Theodore

    2015-01-01

    Studies have shown that malfunctions in mitochondrial processes can be blamed for diseases. However, the mechanisms behind these operations are not yet sufficiently clear. In this work we present a novel approach to describing a biomolecular model of mitochondrial fusion using notions from membrane computing. We use a case study defined in the BioAmbient calculus and show how to translate it into a variant of P automata. We combine brane calculi with (mem)brane automata to produce a new scheme capable of describing simple, realistic models. We propose the further use of similar methods and the testing of other biomolecular models with the same behaviour.

  20. Wires in the soup: quantitative models of cell signaling

    PubMed Central

    Cheong, Raymond; Levchenko, Andre

    2014-01-01

    Living cells are capable of extracting information from their environments and mounting appropriate responses to a variety of associated challenges. The underlying signal transduction networks enabling this can be quite complex, and unraveling them requires sophisticated computational modeling coupled with precise experimentation. Although we are still at the beginning of this process, some recent examples of integrative analysis of cell signaling are very encouraging. This review highlights the case of the NF-κB pathway in order to illustrate how a quantitative model of a signaling pathway can be gradually constructed through continuous experimental validation, and what lessons one might learn from such exercises. PMID:18291655

  1. User Interaction Modeling and Profile Extraction in Interactive Systems: A Groupware Application Case Study †

    PubMed Central

    Tîrnăucă, Cristina; Duque, Rafael; Montaña, José L.

    2017-01-01

    A relevant goal in human–computer interaction is to produce applications that are easy to use and well-adjusted to their users’ needs. To address this problem it is important to know how users interact with the system. This work constitutes a methodological contribution capable of identifying the context of use in which users perform interactions with a groupware application (synchronous or asynchronous) and provides, using machine learning techniques, generative models of how users behave. Additionally, these models are transformed into a text that describes in natural language the main characteristics of the interaction of the users with the system. PMID:28726762

  2. The challenge of emergency response dispersion models on the meso-gamma urban scale: A case study of the July 26, 1993 Oleum tank car spill in Richmond, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baskett, R.L.; Vogt, P.J.; Schalk, W.W.

    This paper presents a recent case study that illustrates the difficulty of modeling accidental toxic releases in urban areas. On the morning of July 26, 1993, oleum was accidentally released from a railroad tank car in Richmond, California. State and local agencies requested real-time modeling from the Atmospheric Release Advisory Capability (ARAC) at Lawrence Livermore National Laboratory (LLNL). Although ARAC's charter with the US Department of Energy is for nuclear materials, the team responded to the accident under an Agreement in Principle with the State of California. ARAC provided model plots describing the location and progress of the toxic cloud to the agencies managing the response. The primary protective action for the public was to shelter in place. Highways, rail lines, and public transportation were blocked. The incident was significant enough that over 24,000 people sought medical attention within the week following the release.

  3. Ad Hoc modeling, expert problem solving, and R&T program evaluation

    NASA Technical Reports Server (NTRS)

    Silverman, B. G.; Liebowitz, J.; Moustakis, V. S.

    1983-01-01

    A simplified cost and time (SCAT) analysis program utilizing personal-computer technology is presented and demonstrated in the case of the NASA-Goddard end-to-end data system. The difficulties encountered in implementing complex program-selection and evaluation models in the research and technology field are outlined. The prototype SCAT system described here is designed to allow user-friendly ad hoc modeling in real time and at low cost. A worksheet constructed on the computer screen displays the critical parameters and shows how each is affected when one is altered experimentally. In the NASA case, satellite data-output and control requirements, ground-facility data-handling capabilities, and project priorities are intricately interrelated. Scenario studies of the effects of spacecraft phaseout or new spacecraft on throughput and delay parameters are shown. The use of a network of personal computers for higher-level coordination of decision-making processes is suggested, as a complement or alternative to complex large-scale modeling.

  4. Verification of ARES transport code system with TAKEDA benchmarks

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue

    2015-10-01

    Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding, and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with differences of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with deviations of less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.
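    For reference, the "pcm" scale used for the eigenvalue comparison is 1 pcm = 1e-5 in reactivity. A small helper using the reactivity-difference convention (whether the benchmark report uses this or the simpler Δk × 1e5 is a detail we do not assume; the two agree closely for k near unity):

```python
def pcm_difference(k_eff, k_ref):
    """Eigenvalue discrepancy in pcm (1 pcm = 1e-5 in reactivity).

    Uses the reactivity-difference convention
    drho = (k_eff - k_ref) / (k_eff * k_ref); for k near 1 this is
    nearly identical to the simpler 1e5 * (k_eff - k_ref).
    """
    return 1e5 * (k_eff - k_ref) / (k_eff * k_ref)
```

    A calculated k-effective of 1.0003 against a reference of 1.0000 is thus a discrepancy of about 30 pcm, the agreement threshold quoted above.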

  5. RETRAN analysis of multiple steam generator blow down caused by an auxiliary feedwater steam-line break

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.

    1987-01-01

    Analysis results for multiple steam generator blow down caused by an auxiliary feedwater steam-line break, performed with the RETRAN-02 MOD 003 computer code, are presented to demonstrate the capabilities of the RETRAN code to predict system transient response for verifying changes in operational procedures and supporting plant equipment modifications. A typical four-loop Westinghouse pressurized water reactor was modeled using best-estimate versus worst-case licensing assumptions. This paper presents analyses performed to evaluate the necessity of implementing an auxiliary feedwater steam-line isolation modification. RETRAN transient analysis can be used to determine core cooling capability response, departure from nucleate boiling ratio (DNBR) status, and reactor trip signal actuation times.

  6. Geomycology. [fungal biosolubilization and accumulation of metals

    NASA Technical Reports Server (NTRS)

    Puerner, N. J.; Siegel, S. M.

    1976-01-01

    Fungi have long been known to have capabilities for reduction and alkylation of arsenate and selenate but their general capabilities for solubilizing and accumulating metallic substances have been given serious attention only in recent years. Common members of the Aspergillaceae cultured on boron, copper, lead and other metals or oxides can solubilize and concentrate the elements or their compounds. To account for biosolubilization of the metals, we have set up a model study, incubating selected metals, e.g., mercury, in solutions of various metabolites including L-lysine and citric acid. Results of 100-300 days incubation showed that many metals can in fact be readily solubilized, and in some cases more effectively at pH 6-7 than at pH 1.5-2.

  7. Inverse and Predictive Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syracuse, Ellen Marie

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  8. pyBSM: A Python package for modeling imaging systems

    NASA Astrophysics Data System (ADS)

    LeMaster, Daniel A.; Eismann, Michael T.

    2017-05-01

    There are components that are common to all electro-optical and infrared imaging system performance models. The purpose of the Python Based Sensor Model (pyBSM) is to provide open source access to these functions for other researchers to build upon. Specifically, pyBSM implements much of the capability found in the ERIM Image Based Sensor Model (IBSM) V2.0, along with some improvements. The paper also includes two use-case examples. First, the performance of an airborne imaging system is modeled using the General Image Quality Equation (GIQE). The results are then decomposed into factors affecting noise and resolution. Second, pyBSM is paired with OpenCV to evaluate the performance of an algorithm used to detect objects in an image.
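    The GIQE calculation mentioned above can be sketched from its commonly cited version-4 form (Leachtenauer et al., 1997). The function below is a sketch of that published equation, not pyBSM's actual implementation, and the input values in the test are purely illustrative:

```python
import math

# Commonly cited form of the General Image Quality Equation, version 4.
# pyBSM's exact implementation and input handling may differ.
def giqe4_niirs(gsd_in, rer, h, g, snr):
    """NIIRS rating from GIQE v4.

    gsd_in : geometric-mean ground sample distance [inches]
    rer    : geometric-mean relative edge response
    h      : geometric-mean edge overshoot (from MTF compensation)
    g      : noise gain of the MTF-compensation filter
    snr    : signal-to-noise ratio
    """
    # Coefficients switch at RER = 0.9 in the published equation
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g / snr)
```

    The dominant term is the log of ground sample distance, which is why resolution-driven factors usually dominate the noise-driven G/SNR term in such decompositions.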

  9. FY17 Status Report on the Computing Systems for the Yucca Mountain Project TSPA-LA Models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Appel, Gordon John

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for total system performance assessment (TSPA) type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014), Hadgu et al. (2015) and Hadgu and Appel (2016). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) weremore » used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5, 11.1 and 12.0 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software were tested and installed to support the TSPA- type analysis on the server cluster. The current tasks included preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version 12.0 and address DLL-related issues observed in the FY16 work. The model upgrade task successfully converted the Nominal Modeling case to GoldSim Versions 11.1/12. Conversions of the rest of the TSPA models were also attempted but program and operational difficulties precluded this. Upgrade of the remaining of the modeling cases and distributed processing tasks is expected to continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.« less

  10. Power flows and Mechanical Intensities in structural finite element analysis

    NASA Technical Reports Server (NTRS)

    Hambric, Stephen A.

    1989-01-01

    The identification of power flow paths in dynamically loaded structures is an important, but currently unavailable, capability for the finite element analyst. For this reason, methods for calculating power flows and mechanical intensities in finite element models are developed here. Formulations for calculating input and output powers, power flows, mechanical intensities, and power dissipations for beam, plate, and solid element types are derived. NASTRAN is used to calculate the required velocity, force, and stress results of an analysis, which a post-processor then uses to calculate power flow quantities. The SDRC I-deas Supertab module is used to view the final results. Test models include a simple truss and a beam-stiffened cantilever plate. Both test cases showed reasonable power flow fields over low to medium frequencies, with accurate power balances. Future work will include testing with more complex models, developing an interactive graphics program to view the analysis results easily and efficiently, applying shape optimization methods to the problem with power flow variables as design constraints, and adding the power flow capability to NASTRAN.
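    The basic quantity behind such power-flow post-processing is the time-averaged power for harmonic response, P = (1/2) Re(F v*), with F and v complex amplitudes. The standalone helper below is illustrative; the post-processor described above assembles element-level analogues from NASTRAN force, stress, and velocity output:

```python
import numpy as np

# Time-averaged transmitted power for time-harmonic force and velocity
# given as complex amplitudes (standard definition, shown for one DOF).
def timeavg_power(force, velocity):
    """P = 0.5 * Re(F * conj(v)) for harmonic F, v."""
    return 0.5 * np.real(force * np.conj(velocity))
```

    Force in phase with velocity transmits power; force in quadrature transmits none, which is what makes the real part the physically meaningful power-flow quantity.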

  11. Environmental Conditions Associated with Elevated Vibrio parahaemolyticus Concentrations in Great Bay Estuary, New Hampshire.

    PubMed

    Urquhart, Erin A; Jones, Stephen H; Yu, Jong W; Schuster, Brian M; Marcinkiewicz, Ashley L; Whistler, Cheryl A; Cooper, Vaughn S

    2016-01-01

    Reports from state health departments and the Centers for Disease Control and Prevention indicate that the annual number of reported human vibriosis cases in New England has increased in the past decade. Concurrently, there has been a shift in both the spatial distribution and seasonal detection of Vibrio spp. throughout the region, based on limited monitoring data. To determine environmental factors that may underlie these emerging conditions, this study focuses on a long-term database of Vibrio parahaemolyticus concentrations in oyster samples collected from the Great Bay Estuary, New Hampshire, over a period of seven consecutive years. Oyster samples from two distinct sites were analyzed for V. parahaemolyticus abundance, noting significant relationships with various biotic and abiotic factors measured during the same period of study. We developed a predictive modeling tool capable of estimating the likelihood of V. parahaemolyticus presence in coastal New Hampshire oysters. Results show that adding chlorophyll a concentration to an empirical model otherwise employing only temperature and salinity variables offers improved predictive capability for modeling the likelihood of V. parahaemolyticus in the Great Bay Estuary.
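    A minimal sketch of the kind of empirical presence model described, with temperature, salinity, and chlorophyll a as predictors. Everything below is synthetic and illustrative (the data, the coefficients, and the plain gradient-descent fit); it does not reproduce the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Synthetic predictor anomalies: temperature [C], salinity [psu],
# chlorophyll a [ug/L]. Values are invented for the sketch.
X = np.column_stack([
    rng.uniform(-10, 10, n),
    rng.uniform(-5, 5, n),
    rng.uniform(-5, 5, n),
])
# Presence more likely in warm, chlorophyll-rich water (assumed effect sizes)
true_logit = 0.4 * X[:, 0] + 0.3 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(float)

def fit_logistic(X, y, lr=0.01, steps=5000):
    """Logistic regression by plain gradient descent (no dependencies)."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

w = fit_logistic(X, y)   # w[1], w[2], w[3]: temperature, salinity, chl effects
```

    The study's finding corresponds to the chlorophyll coefficient carrying predictive weight beyond temperature and salinity alone.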

  12. CLUSTERING SOUTH AFRICAN HOUSEHOLDS BASED ON THEIR ASSET STATUS USING LATENT VARIABLE MODELS

    PubMed Central

    McParland, Damien; Gormley, Isobel Claire; McCormick, Tyler H.; Clark, Samuel J.; Kabudula, Chodziwadziwa Whiteson; Collinson, Mark A.

    2014-01-01

    The Agincourt Health and Demographic Surveillance System has since 2001 conducted a biannual household asset survey in order to quantify household socio-economic status (SES) in a rural population living in northeast South Africa. The survey contains binary, ordinal and nominal items. In the absence of income or expenditure data, the SES landscape in the study population is explored and described by clustering the households into homogeneous groups based on their asset status. A model-based approach to clustering the Agincourt households, based on latent variable models, is proposed. In the case of modeling binary or ordinal items, item response theory models are employed. For nominal survey items, a factor analysis model, similar in nature to a multinomial probit model, is used. Both model types have an underlying latent variable structure—this similarity is exploited and the models are combined to produce a hybrid model capable of handling mixed data types. Further, a mixture of the hybrid models is considered to provide clustering capabilities within the context of mixed binary, ordinal and nominal response data. The proposed model is termed a mixture of factor analyzers for mixed data (MFA-MD). The MFA-MD model is applied to the survey data to cluster the Agincourt households into homogeneous groups. The model is estimated within the Bayesian paradigm, using a Markov chain Monte Carlo algorithm. Intuitive groupings result, providing insight to the different socio-economic strata within the Agincourt region. PMID:25485026
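    The binary-item building block of the hybrid model above is an item response theory curve. The sketch below shows the standard two-parameter (2PL) form for a binary asset item; parameter values in the test are illustrative, not estimates from the Agincourt data:

```python
import math

# Two-parameter IRT curve: probability that a household with latent SES
# score theta endorses (owns) an item with discrimination a and
# "difficulty" b. Ordinal and nominal items get analogous latent links.
def irt_2pl(theta, a, b):
    """P(item endorsed | latent trait theta) under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

    Wealthier households (higher theta) are more likely to own each asset, and the probability is exactly 0.5 at theta = b; the hybrid MFA-MD model ties such item curves to a shared latent variable structure.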

  13. Exploring role confusion in nurse case management.

    PubMed

    Gray, Frances C; White, Ann; Brooks-Buck, Judith

    2013-01-01

    This is a report of the results of a pilot project conducted to identify the areas where role confusion/ambiguity exists in the practice of nurse case management. A convenience sample of 25 registered nurses practicing as case managers in the outpatient clinics of a small East Coast medical treatment facility was surveyed. Participants responded to two Likert-type surveys designed to evaluate role confusion from an individual and a team membership perspective. Analysis indicated that nurse case managers experience role confusion in the specific areas of conflicts among time resources, capabilities, and multiple individual roles. There was no identified role confusion associated with membership on multidisciplinary teams. The application of the Synergy Model as a theoretical framework for nurse case management serves as a benchmark for the implementation of evidence-based practices. This project could serve as the starting point for the development of a skill set for nurse case managers, for the standardization of the practice, and for the recognition of nurse case management as a legitimate nursing subspecialty.

  14. Overview of ASC Capability Computing System Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott W.

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  15. Endogenous technological and demographic change under increasing water scarcity

    NASA Astrophysics Data System (ADS)

    Pande, S.; Ertsen, M.; Sivapalan, M.

    2013-12-01

    Many ancient civilizations, such as the Indus Valley civilization, dispersed under extreme dry conditions. Even contemporary societies, such as the one in the Murrumbidgee river basin, Australia, have started to witness a decline in overall population under increasing water scarcity. Skeptics of hydroclimatic determinism have often cautioned against the use of hydroclimatic change as the sole predictor of the fate of contemporary societies in water-scarce regions by suggesting that technological change may ameliorate the effects of increasing water scarcity. We here develop a simple overlapping generations model of endogenous technological and demographic change. It models technological change not as an exogenous random sequence of events but as an endogenous process (as is widely accepted in contemporary literature) that depends on factors such as the investments that are (endogenously) made in a society, the endogenous diversification of a society into skilled and unskilled workers, individuals' patience in terms of present versus future consumption, the production technology, and the (endogenous) interaction of these factors. The population growth rate is modeled to decline once consumption per capita crosses a 'survival' threshold. The model demonstrates that in many cases technological change may ameliorate the effects of increasing water scarcity, but only to a certain extent. It is possible that technological change may allow a society to escape the effects of increasing water scarcity, leading to an exponential rise in technology and population. However, such cases require that the rate of success of investment in technological advancement be high. In other, more realistic cases of technological success, we find that endogenous technological change has the effect of delaying the peak of population before it starts to decline.
    While the model is a rather simple model of societal growth, it is capable of replicating (not to scale) patterns of technological change (proxies of which in ancient societies include irrigation canals, metal tools, and the use of horses for labor, while in contemporary societies proxies may include the advent of drip irrigation, increasing reservoir storage capacity, etc.) and population change. It is capable of replicating the pattern of declining consumption per capita in the presence of growth in aggregate production. It is also capable of modeling an exponential population rise even under increasing water scarcity. The results of the model suggest, as one of many possible explanations, that ancient societies that declined in the face of extreme water scarcity may have done so due to a slower rate of success of investment in technological advancement. The model suggests that the population decline occurs after a prolonged decline in consumption per capita, which in turn is due to the joint effect of initially increasing population and increasing water scarcity. This is despite technological advancement and an increase in aggregate production. Thus, declining consumption per capita despite technological advancement and increasing aggregate production may serve as a useful predictor of upcoming decline in contemporary societies in water-scarce basins.
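    The qualitative mechanism can be caricatured in a toy discrete-time simulation (this is not the authors' overlapping generations model): output is water-limited, technology grows at a "success rate", and population growth flips sign once consumption per capita drops below the survival threshold. All functional forms and parameter values below are our assumptions:

```python
# Toy water-limited growth model; every functional form and parameter
# here is an illustrative assumption, not the paper's specification.
def simulate(years=200, tech_rate=0.005, alpha=0.7,
             c_survival=1.0, g_pop=0.03, w0=100.0):
    A, P = 1.0, 1.0                          # technology, population
    pops = []
    for t in range(years):
        W = w0 * (1.0 - t / years)           # steadily increasing scarcity
        Y = A * min(10.0 * P, W) ** alpha    # water-limited aggregate output
        c = Y / P                            # consumption per capita
        # Population grows above the survival threshold, shrinks below it
        P *= (1 + g_pop) if c > c_survival else (1 - g_pop)
        A *= 1 + tech_rate                   # slow technological success
        pops.append(P)
    return pops
```

    With a low rate of technological success, the simulated population first rises, then peaks and declines as water scarcity outpaces technology, echoing the pattern the abstract describes.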

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lechman, Jeremy B.; Battaile, Corbett Chandler.; Bolintineanu, Dan

    This report summarizes a project in which the authors sought to develop and deploy: (i) experimental techniques to elucidate the complex, multiscale nature of thermal transport in particle-based materials; and (ii) modeling approaches to address current challenges in predicting performance variability of materials (e.g., identifying and characterizing physical-chemical processes and their couplings across multiple length and time scales, modeling information transfer between scales, and statically and dynamically resolving material structure and its evolution during manufacturing and device performance). Experimentally, several capabilities were successfully advanced. As discussed in Chapter 2, a flash diffusivity capability for measuring the homogeneous thermal conductivity of pyrotechnic powders (and beyond) was advanced, leading to enhanced characterization of pyrotechnic materials and properties impacting component development. Chapter 4 describes success, although preliminary, in resolving thermal fields for the first time at speeds and spatial scales relevant to energetic components. Chapter 7 summarizes the first ever (as far as the authors know) application of TDTR to actual pyrotechnic materials. This is the first attempt to characterize these materials at the interfacial scale. On the modeling side, new capabilities in image processing of experimental microstructures and direct numerical simulation on complicated structures were advanced (see Chapters 3 and 5). In addition, modeling work described in Chapter 8 led to improved prediction of interface thermal conductance from first-principles calculations.
Toward the second point, for a model system of packed particles, significant headway was made in implementing numerical algorithms and collecting data to justify the approach, in terms of highlighting the phenomena at play and pointing the way forward in developing and informing the kind of modeling approach originally envisioned (see Chapter 6). In both cases much more remains to be accomplished.

  17. Finite Element Modeling and Analysis of Mars Entry Aeroshell Baseline Concept

    NASA Technical Reports Server (NTRS)

    Ahmed, Samee W.; Lane, Brittney M.

    2017-01-01

    The structure that is developed and analyzed in this project must be able to survive all the various load conditions that it will encounter along its course to Mars with minimal weight and material. At this stage, the goal is to study the capability of the structure using a finite element model (FEM). This FEM is created using a Python script and is numerically solved in Nastran. The purpose of the model is to achieve an optimization of mass given specific constraints on launch and entry. The generation and analysis of the baseline Rigid Mid-Range Lift-to-Drag Ratio Aeroshell model is a continuation of and an improvement on previous work done on the FEM. The model is generated using Python programming with the axisymmetric placement of nodes for beam and shell elements. The shells are assigned a honeycomb sandwich material with an aluminum honeycomb core and composite face sheets, and the beams are assigned the same material as the shell face sheets. There are two load cases assigned to the model: Earth launch and Mars entry. The Earth launch case consists of pressure, gravity, and vibration loads, and the Mars entry case consists of just pressure and gravity loads. The Earth launch case was determined to be the driving case, though the analyses are performed for both cases to ensure the constraints are satisfied. The types of analysis performed with the model are design optimization, statics, buckling, normal modes, and frequency response, the last of which is performed only for the Earth launch load case. The final results indicated that all of the requirements are satisfied except the thermal limits, which could not yet be tested, and the normal modes for Mars entry. However, the frequency limits during Mars entry are expected to be much higher than the lower frequency limits set for the analysis. In addition, there are still improvements that can be made in order to reduce the weight while still meeting all requirements.

  18. Novel selective TOCSY method enables NMR spectral elucidation of metabolomic mixtures

    NASA Astrophysics Data System (ADS)

    MacKinnon, Neil; While, Peter T.; Korvink, Jan G.

    2016-11-01

    Complex mixture analysis is routinely encountered in NMR-based investigations. With the aim of component identification, spectral complexity may be addressed chromatographically or spectroscopically, the latter being favored to reduce sample handling requirements. An attractive experiment is selective total correlation spectroscopy (sel-TOCSY), which is capable of providing tremendous spectral simplification and thereby enhancing assignment capability. Unfortunately, isolating a well-resolved resonance becomes increasingly difficult as the complexity of the mixture increases, and the assumption of single spin system excitation is no longer robust. We present TOCSY optimized mixture elucidation (TOOMIXED), a technique capable of performing spectral assignment particularly in the case where the assumption of single spin system excitation is relaxed. Key to the technique is the collection of a series of 1D sel-TOCSY experiments as a function of the isotropic mixing time (τm), resulting in a series of resonance intensities indicative of the underlying molecular structure. By comparing these τm-dependent intensity patterns with a library of pre-determined component spectra, one is able to regain assignment capability. After consideration of the technique's robustness, we tested TOOMIXED first on a model mixture. As a benchmark, we were able to assign a molecule with high confidence in the case of selectively exciting an isolated resonance. Assignment confidence was not compromised when performing TOOMIXED on a resonance known to contain multiple overlapping signals, and in the worst case the method suggested a follow-up sel-TOCSY experiment to confirm an ambiguous assignment. TOOMIXED was then demonstrated on two realistic samples (whisky and urine), where under our conditions an approximate limit of detection of 0.6 mM was determined.
Taking into account literature reports for the sel-TOCSY limit of detection, the technique should reach on the order of 10 μM sensitivity. We anticipate this technique will be highly attractive to various analytical fields facing mixture analysis, including metabolomics, foodstuff analysis, pharmaceutical analysis, and forensics.
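The core pattern-matching step described above can be illustrated with a small sketch: score each library component's mixing-time intensity trace against the observed trace by least squares with a free amplitude scale, and rank candidates by residual. This is our own minimal illustration, not the TOOMIXED code; the component names and numbers below are invented.

```python
# Illustrative sketch (not the authors' code): score library components
# against an observed mixing-time intensity pattern by least-squares fit
# with a free amplitude scale.

def match_score(observed, reference):
    """Residual sum of squares after optimal amplitude scaling a*ref ~ obs."""
    num = sum(o * r for o, r in zip(observed, reference))
    den = sum(r * r for r in reference)
    a = num / den if den else 0.0          # best-fit scale factor
    return sum((o - a * r) ** 2 for o, r in zip(observed, reference))

def assign(observed, library):
    """Return library entries ranked best-first by match score."""
    return sorted(library, key=lambda name: match_score(observed, library[name]))

# Hypothetical tau_m-dependent intensities for three library components
library = {
    "lactate":  [0.0, 0.4, 0.9, 1.0, 0.8],
    "alanine":  [0.0, 0.1, 0.3, 0.6, 1.0],
    "citrate":  [1.0, 0.7, 0.4, 0.2, 0.1],
}
observed = [0.0, 0.82, 1.81, 2.02, 1.59]   # ~2x the lactate pattern + noise

ranking = assign(observed, library)
print(ranking[0])  # best-matching component
```

In the real method the reference traces come from pre-measured sel-TOCSY build-up curves, and an ambiguous ranking would trigger the follow-up experiment mentioned in the abstract.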

  19. Fourier Transform Fringe-Pattern Analysis of an Absolute Distance Michelson Interferometer for Space-Based Laser Metrology.

    NASA Astrophysics Data System (ADS)

    Talamonti, James Joseph

    1995-01-01

    Future NASA proposals include the placement of optical interferometer systems in space for a wide variety of astrophysical studies, including a vastly improved deflection test of general relativity, a precise and direct calibration of the Cepheid distance scale, and the determination of stellar masses (Reasenberg et al., 1988). There are also plans for placing large array telescopes on the moon with the ultimate objective of being able to measure angular separations of less than 10 micro-arcseconds (Burns, 1990). These and other future projects will require interferometric measurement of the (baseline) distance between the optical elements comprising the systems. Eventually, space-qualifiable interferometers capable of picometer (10^{-12} m) relative precision and nanometer (10^{-9} m) absolute precision will be required. A numerical model was developed to emulate the capabilities of systems performing interferometric noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer using a frequency-scanned laser. By processing computer-simulated data through our model, the ultimate precision is projected for ideal data and for data containing AM/FM noise. The precision is shown to be limited by non-linearities in the laser scan. A laboratory system was developed by implementing ultra-stable external cavity diode lasers into existing interferometric measuring techniques. The capabilities of the system were evaluated and increased by using the computer modeling results as guidelines for the data analysis. Experimental results measured 1-3 meter baselines with <20 micron precision.
Comparison of the laboratory and modeling results showed that the laboratory precisions obtained were of the same order of magnitude as those predicted for computer generated results under similar conditions. We believe that our model can be implemented as a tool in the design for new metrology systems capable of meeting the precisions required by space-based interferometers.
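The window-then-isolate step that the model evaluates can be illustrated with a stdlib-only sketch (not the thesis software): a fringe frequency that falls between DFT bins leaks into neighbors, and applying a window such as the Hann (Hanning) window before the transform suppresses distant sidelobes so the peak can be isolated. The signal and sizes below are arbitrary.

```python
# Illustrative sketch: Hann windowing before a DFT for spectral peak
# isolation of a fringe frequency lying between bins.
import cmath
import math

def hann(n, N):
    return 0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1))

def dft_magnitudes(x):
    """Magnitudes of the first N/2 DFT bins (naive O(N^2) transform)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N // 2)]

N = 128
f = 17.3                      # non-integer bin -> leakage without a window
signal = [math.cos(2 * math.pi * f * n / N) for n in range(N)]

raw = dft_magnitudes(signal)
win = dft_magnitudes([s * hann(n, N) for n, s in enumerate(signal)])

peak_raw = max(range(len(raw)), key=raw.__getitem__)
peak_win = max(range(len(win)), key=win.__getitem__)
print(peak_raw, peak_win)
```

Blackman and Gaussian windows drop into the same structure; the trade-off the thesis quantifies is sidelobe suppression versus main-lobe broadening.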

  20. Validation Test Report for the CRWMS Analysis and Logistics Visually Interactive Model (CALVIN) Version 3.0, 10074-VTR-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

    This report describes the tests performed to validate the CRWMS "Analysis and Logistics Visually Interactive" Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: spent nuclear fuel (SNF) and reactivity calculations; options for altering reactor life; adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); transportation cask shipping to and storage at an Interim Storage Facility (ISF); reactor pool allocation options; and disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions.
All of the test case results agree with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  1. Using the Landlab toolkit to evaluate and compare alternative geomorphic and hydrologic model formulations

    NASA Astrophysics Data System (ADS)

    Tucker, G. E.; Adams, J. M.; Doty, S. G.; Gasparini, N. M.; Hill, M. C.; Hobley, D. E. J.; Hutton, E.; Istanbulluoglu, E.; Nudurupati, S. S.

    2016-12-01

    Developing a better understanding of catchment hydrology and geomorphology ideally involves quantitative hypothesis testing. Often one seeks to identify the simplest mathematical and/or computational model that accounts for the essential dynamics in the system of interest. Development of alternative hypotheses involves testing and comparing alternative formulations, but the process of comparison and evaluation is made challenging by the rigid nature of many computational models, which are often built around a single assumed set of equations. Here we review a software framework for two-dimensional computational modeling that facilitates the creation, testing, and comparison of surface-dynamics models. Landlab is essentially a Python-language software library. Its gridding module allows for easy generation of a structured (raster, hex) or unstructured (Voronoi-Delaunay) mesh, with the capability to attach data arrays to particular types of element. Landlab includes functions that implement common numerical operations, such as gradient calculation and summation of fluxes within grid cells. Landlab also includes a collection of process components, which are encapsulated pieces of software that implement a numerical calculation of a particular process. Examples include downslope flow routing over topography, shallow-water hydrodynamics, stream erosion, and sediment transport on hillslopes. Individual components share a common grid and data arrays, and they can be coupled through the use of a simple Python script. We illustrate Landlab's capabilities with a case study of Holocene landscape development in the northeastern US, in which we seek to identify a collection of model components that can account for the formation of a series of incised canyons that have developed since the Laurentide ice sheet last retreated.
We compare sets of model ingredients related to (1) catchment hydrologic response, (2) hillslope evolution, and (3) stream channel and gully incision. The case-study example demonstrates the value of exploring multiple working hypotheses, in the form of multiple alternative model components.
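The kind of numerical operation Landlab's grid functions encapsulate (gradients evaluated at cell faces, flux summation within cells) can be sketched in plain Python. The toy 1D diffusion step below is illustrative only and does not use Landlab's actual API; in Landlab the same pattern appears as grid methods operating on shared data arrays.

```python
# Stdlib sketch of the face-gradient / cell-flux-summation pattern,
# here as one explicit step of linear hillslope diffusion in 1D.

def diffuse(z, kappa, dx, dt):
    """One step of dz/dt = d/dx(kappa dz/dx) with no-flux boundaries."""
    grad = [(z[i + 1] - z[i]) / dx for i in range(len(z) - 1)]  # at faces
    flux = [-kappa * g for g in grad]
    flux = [0.0] + flux + [0.0]                                  # closed ends
    # sum fluxes into/out of each cell to update its elevation
    return [z[i] - dt * (flux[i + 1] - flux[i]) / dx for i in range(len(z))]

z = [0.0, 0.0, 4.0, 0.0, 0.0]     # a small hill
for _ in range(50):
    z = diffuse(z, kappa=1.0, dx=1.0, dt=0.2)

print(z)  # the hill relaxes; total material (sum) is conserved
```

Because the update is written in flux form with zero end fluxes, mass is conserved exactly, which is the property the component-based formulation is designed to preserve.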

  2. Improvement of Coordination in the Multi-National Military Coordination Center of the Nepal Army in Response to Disasters

    DTIC Science & Technology

    2017-06-09

    primary question. This thesis has used the case study research methodology with a Capability-Based Assessment (CBA) approach. ... effective coordinating mechanism. The research follows the case study method, utilizing the Capability-Based Assessment (CBA) approach to scrutinize the ...

  3. An analytical study of reduced-gravity propellant settling

    NASA Technical Reports Server (NTRS)

    Bradshaw, R. D.; Kramer, J. L.; Masica, W. J.

    1974-01-01

    Full-scale propellant reorientation flow dynamics for the Centaur D-1T fuel tank were analyzed. A computer code using the simplified marker and cell technique was modified to include the capability for a variable-grid mesh configuration. Use of smaller cells near the boundary, near baffles, and in corners provides improved flow resolution. Two drop tower model cases were simulated to verify program validity: one case without baffles, the other with baffles and geometry identical to the Centaur D-1T. Flow phenomena using the new code successfully modeled drop tower data. Baffles are a positive factor in the settling flow. Two full-scale Centaur D-1T cases were simulated using parameters based on the Titan/Centaur proof flight. These flow simulations indicated the time to clear the vent area and gave an indication of the time to orient and collect the propellant. The results further indicated the complexity of the reorientation flow and the long time period required for settling.

  4. An International Survey of Industrial Applications of Formal Methods. Volume 2. Case Studies

    DTIC Science & Technology

    1993-09-30

    impact of the product on IBM revenues. 4. Error rates were claimed to be below the industrial average and errors were minimal to fix. Formal methods, as ... critical applications. These include: i) "Software failures, particularly under first use, seem ... project to add improved modelling capability. ... Design and Implementation: These products are being

  5. [Aging and homeostasis. Biomedical Peculiarities of Semi-supercentenarians].

    PubMed

    Arai, Yasumichi

    Semi-supercentenarians, or people who reach 105 years of age, are regarded as model cases of 'successful ageing'. Semi-supercentenarians maintain capability and cognition for longer than centenarians who died between 100 and 104 years of age, together with postponed frailty or age-related diminution of multiple organ reserve. Understanding the biological factors determining extreme longevity and compression of morbidity might help to achieve an extended healthy life span for the wider population.

  6. [Persistent Perpetrator Contact in a Patient with Dissociative Identity Disorder].

    PubMed

    Tschöke, Stefan; Eisele, Frank; Steinert, Tilman

    2016-05-01

    The case of a young woman subject to still-ongoing incest and forced prostitution is presented. The criteria for a dissociative identity disorder (DID) were met. Due to persistent contact with the perpetrator, she was repeatedly revictimized. Based on the model of trauma-related dissociation, we discuss the extent to which she was capable of self-determined decision making, as well as the therapeutic consequences resulting therefrom. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett

    2004-01-01

    Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities being developed to generate accurate approximation models for the BLISS procedure are described. The benefits of using flexible approximation models such as Kriging are demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases in which a subsystem optimization cannot find a feasible design is investigated by using the new flexible approximation models for the violated local constraints.

  8. Study of tethered satellite active attitude control

    NASA Technical Reports Server (NTRS)

    Colombo, G.

    1982-01-01

    Existing software was adapted for the study of tethered subsatellite rotational dynamics; an analytic solution for a stable configuration of a tethered subsatellite was developed; the analytic and numerical-integrator (computer) solutions for this 'test case' were compared in a two-mass tether model program (DUMBEL); the existing multiple-mass tether model (SKYHOOK) was modified to include subsatellite rotational dynamics; the analytic 'test case' was verified; and the use of the SKYHOOK rotational dynamics capability was demonstrated with a computer run showing the effect of a single off-axis thruster on the behavior of the subsatellite. Subroutines for specific attitude control systems are developed and applied to the study of the behavior of the tethered subsatellite under realistic on-orbit conditions. The effect of all tether 'inputs', including pendular oscillations, air drag, and electrodynamic interactions, on the dynamic behavior of the tether is included.

  9. Influence diagnostics in meta-regression model.

    PubMed

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case-deletion diagnostics and local influence analysis. We derive the subset-deletion formulae for the estimation of the regression coefficients and the heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analyses based on the case-weights, response, covariate, and within-variance perturbation schemes are explored. We introduce a method of simultaneously perturbing the responses, covariates, and within-variances to obtain a local influence measure, which has the advantage of being able to compare the influence magnitudes of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.
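The case-deletion idea can be sketched for the DerSimonian-Laird estimator in its simplest (intercept-only) form: delete one study at a time, re-estimate the pooled effect, and record the shift. This is a generic brute-force illustration with made-up data, not the paper's closed-form subset-deletion formulae.

```python
# Illustrative case-deletion diagnostic for a DerSimonian-Laird
# random-effects pooled estimate (intercept-only meta-analysis).

def dl_pooled(y, v):
    """DerSimonian-Laird random-effects pooled estimate."""
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    yf = sum(wi * yi for wi, yi in zip(w, y)) / sw        # fixed-effect mean
    q = sum(wi * (yi - yf) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(y) - 1)) / c)               # heterogeneity
    ws = [1.0 / (vi + tau2) for vi in v]
    return sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)

def deletion_influence(y, v):
    """Shift in the pooled estimate when each study is deleted in turn."""
    full = dl_pooled(y, v)
    return [full - dl_pooled(y[:i] + y[i + 1:], v[:i] + v[i + 1:])
            for i in range(len(y))]

y = [0.10, 0.12, 0.08, 0.55]       # study effects; the last is an outlier
v = [0.01, 0.02, 0.015, 0.01]      # within-study variances

infl = deletion_influence(y, v)
worst = max(range(len(infl)), key=lambda i: abs(infl[i]))
print(worst)   # index of the most influential study
```

The closed-form deletion formulae derived in the paper avoid refitting the model for every deleted subset, but the quantity being computed is the same kind of shift.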

  10. High fidelity studies of exploding foil initiator bridges, Part 1: Experimental method

    NASA Astrophysics Data System (ADS)

    Bowden, Mike; Neal, William

    2017-01-01

    Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiator (EFI) detonators, have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage and, in the case of EFIs, flyer velocity. Correspondingly, experimental methods have in general been limited to the same parameters. With the advent of complex, first-principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, predicting a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately validated. In this first paper of a three-part study, the experimental method for determining the current, voltage, flyer velocity and multi-dimensional profile of detonator components is presented. This improved capability, along with high fidelity simulations, offers an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.

  11. Ambiguity in determining financial capability of SSI and SSDI beneficiaries with psychiatric disabilities.

    PubMed

    Lazar, Christina M; Black, Anne C; McMahon, Thomas J; O'Shea, Kevin; Rosen, Marc I

    2015-03-01

    The liberty of individuals who receive Social Security disability payments is constrained if they are judged incapable of managing their payments and are assigned a payee or conservator to manage benefit payments on their behalf. Conversely, beneficiaries' well-being may be compromised if they misspend money that they need to survive. Several studies have shown that determinations of financial capability are made inconsistently and that capability guidelines appear to be applied inconsistently. This article describes ambiguities that remained for individuals even after a comprehensive assessment of financial capability was conducted by independent assessors. Trained, experienced assessors rated the financial capability of 118 individuals in intensive outpatient or inpatient psychiatric facilities who received Social Security Disability Insurance or Supplemental Security Income. Ten individuals' cases were determined to be difficult to judge. Six sources of ambiguity were identified by case review: distinguishing incapability from the challenges of navigating poverty, the amount of nonessential spending that indicates incapability, the amount of spending on harmful things that indicates incapability, how to consider intermittent periods of capability and incapability, the relative weighting of past behavior and future plans to change, and discrepancies between different sources of information. The cases raise fundamental questions about how to define and identify financial incapability, but they also illustrate how detailed consideration of beneficiaries' living situations and decision making can inform the difficult dichotomous decision about capability.

  12. Critical operations capabilities in a high cost environment: a multiple case study

    NASA Astrophysics Data System (ADS)

    Sansone, C.; Hilletofth, P.; Eriksson, D.

    2018-04-01

    Operations capabilities have been a popular research area for many years, and several frameworks have been proposed in the literature. The current frameworks do not take specific contexts into consideration, for instance a high cost environment. This research gap is of particular interest since a manufacturing relocation process has been ongoing for the last few decades, with a large amount of manufacturing being moved from high to low cost environments. The purpose of this study is to identify critical operations capabilities in a high cost environment. The two research questions were: What are the critical operations capability dimensions in a high cost environment? What are the critical operations capabilities in a high cost environment? A multiple case study was conducted with three Swedish manufacturing firms. The study was based on the investigation of an existing framework of operations capabilities. The main dimensions of operations capabilities included in the framework were cost, quality, delivery, flexibility, service, innovation, and environment; each dimension included two or more operations capabilities. The findings confirmed the validity of the framework and its usefulness in a high cost environment, and a new operations capability was revealed (employee flexibility).

  13. Investigation of Advanced Counterrotation Blade Configuration Concepts for High Speed Turboprop Systems. Task 8: Cooling Flow/heat Transfer Analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Topp, David A.; Heidegger, Nathan J.; Delaney, Robert A.

    1994-01-01

    The focus of this task was to validate the ADPAC code for heat transfer calculations. To accomplish this goal, the ADPAC code was modified to allow for a Cartesian coordinate system capability and to add boundary conditions to handle spanwise periodicity and transpiration boundaries. The primary validation case was the film-cooled C3X vane. The cooling hole modeling included both a porous region and a grid in each discrete hole. Predictions for these models, as well as for the smooth wall, compared well with the experimental data.

  14. Compact divided-pupil line-scanning confocal microscope for investigation of human tissues

    NASA Astrophysics Data System (ADS)

    Glazowski, Christopher; Peterson, Gary; Rajadhyaksha, Milind

    2013-03-01

    Divided-pupil line-scanning confocal microscopy (DPLSCM) can provide a simple and low-cost approach for imaging of human tissues with pathology-like nuclear and cellular detail. Using results from a multidimensional numerical model of DPLSCM, we found optimal pupil configurations for improved axial sectioning, as well as control of speckle noise in the case of reflectance imaging. The modeling results guided the design and construction of a simple (10 component) microscope, packaged within the footprint of an iPhone, and capable of cellular resolution. We present the optical design with experimental video-images of in-vivo human tissues.

  15. An Astrosociological Perspective on Space-Capable vs. Spacefaring Societies

    NASA Astrophysics Data System (ADS)

    Pass, J.

    As with any academic field, astrosociology allows for an endless number of competing theoretical models and hypotheses. One possible theoretical model is presented here that starts with the premise that even the most advanced societies today are extremely far from achieving spacefaring status. The most advanced nation states are, in fact, space-capable societies because they have the capacity to send cargo and humans into low Earth orbit and beyond. However, their social structures and cultures lack fundamental characteristics that would allow for their designation as spacefaring societies. This article describes the characteristics of a theoretical spacefaring society and argues that getting there from our current status as space-capable societies is a long and arduous process whose outcome is by no means guaranteed. While a continuum is offered, it represents an imprecise path that can retrograde or fall apart at any time. Thus, this theoretical model provides one possible unfolding of events that creates characteristics of the social fabric which may move a society along the continuum toward becoming spacefaring. Movement along the continuum results in an accumulation of coordinated spacefaring characteristics for a given society. Simultaneously, strictly terrestrial characteristics disappear or transform themselves into hybrid forms that include spacefaring features. This theoretical exercise has a number of benefits for astrosociologists conducting research in the area of spacefaring theory. Moreover, it makes the case for the idea that the study of the theoretical transformation from a space-capable to a spacefaring society has implications for current and future (1) space policy in the public sector and (2) corporate decision-making related to space in the private sector.

  16. Characterizing Wheel-Soil Interaction Loads Using Meshfree Finite Element Methods: A Sensitivity Analysis for Design Trade Studies

    NASA Technical Reports Server (NTRS)

    Contreras, Michael T.; Trease, Brian P.; Bojanowski, Cezary; Kulakx, Ronald F.

    2013-01-01

    A wheel experiencing sinkage and slippage events poses a high risk to planetary rover missions, as evidenced by the mobility challenges endured by the Mars Exploration Rover (MER) project. Current wheel design practice utilizes loads derived from a series of events in the life cycle of the rover which do not include (1) failure metrics related to wheel sinkage and slippage and (2) performance trade-offs based on grouser placement/orientation. Wheel designs are rigorously tested experimentally through a variety of drive scenarios and simulated soil environments; however, a robust simulation capability is still in development due to the myriad of complex interaction phenomena that contribute to wheel sinkage and slippage conditions, such as soil composition, large-deformation soil behavior, wheel geometry, nonlinear contact forces, and terrain irregularity. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study implements the JPL wheel-soil benchmark problem in a commercial code environment utilizing the large-deformation modeling capability of Smoothed Particle Hydrodynamics (SPH) meshfree methods. The nominal, benchmark wheel-soil interaction model that produces numerically stable and physically realistic results is presented, and simulations are shown for both wheel traverse and wheel sinkage cases. A sensitivity analysis developing the capability and framework for future flight applications is conducted to illustrate the importance of perturbations to critical material properties and parameters. Implementation of the proposed soil-wheel interaction simulation capability and associated sensitivity framework has the potential to reduce experimentation cost and improve the early-stage wheel design process.

  17. Investigation of dynamic characteristics of a rotor system with surface coatings

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Cao, Dengqing; Wang, Deyou

    2017-02-01

    A Jeffcott rotor system with surface coatings, capable of describing the mechanical vibration resulting from unbalance and rub-impact, is formulated in this article. A recently proposed contact force model describing the impact force between the disc and a casing with coatings is employed for the dynamic analysis of the rotor system with a rubbing fault. Due to the variation of penetration, the contact force model is correspondingly modified. Meanwhile, the Coulomb friction model is applied to simulate the friction characteristics. The case study of rub-impact with surface coatings is then simulated by the Runge-Kutta method, in which a linear interpolation method is adopted to predict the rubbing instant. Moreover, the dynamic characteristics of the rotor system with surface coatings are analyzed in terms of bifurcation plots, waveforms, whirl orbits, Poincaré maps and spectrum plots, and the effects of the hardness of the surface coatings on the response are investigated as well. Finally, compared with the classical models, the modified contact force model is shown to be more suitable for solving the rub-impact of aero-engines with surface coatings.

  18. Numerical investigation of the vortex-induced vibration of an elastically mounted circular cylinder at high Reynolds number (Re = 10^4) and low mass ratio using the RANS code

    PubMed Central

    2017-01-01

    This study numerically investigates the vortex-induced vibration (VIV) of an elastically mounted rigid cylinder by using Reynolds-averaged Navier–Stokes (RANS) equations with computational fluid dynamics (CFD) tools. CFD analysis is performed for a fixed-cylinder case at Reynolds number Re = 10^4 and for a cylinder that is free to oscillate in the transverse direction and possesses a low mass-damping ratio, also at Re = 10^4. Previously, similar studies have been performed with 3-dimensional and comparatively expensive turbulence models. In the current study, the capability and accuracy of the RANS model are validated, and the results of this model are compared with those of detached eddy simulation, direct numerical simulation, and large eddy simulation models. All three response branches and the maximum amplitude are well captured. The 2-dimensional case with the RANS shear-stress transport k-ω model, which involves minimal computational cost, is reliable and appropriate for analyzing the characteristics of VIV. PMID:28982172

  19. A wetting and drying scheme for ROMS

    USGS Publications Warehouse

    Warner, John C.; Defne, Zafer; Haas, Kevin; Arango, Hernan G.

    2013-01-01

    The processes of wetting and drying have many important physical and biological impacts on shallow water systems. Inundation and dewatering effects on coastal mud flats and beaches occur on various time scales ranging from storm surge, periodic rise and fall of the tide, to infragravity wave motions. To correctly simulate these physical processes with a numerical model requires the capability of the computational cells to become inundated and dewatered. In this paper, we describe a method for wetting and drying based on an approach consistent with a cell-face blocking algorithm. The method allows water to always flow into any cell, but prevents outflow from a cell when the total depth in that cell is less than a user defined critical value. We describe the method, the implementation into the three-dimensional Regional Oceanographic Modeling System (ROMS), and exhibit the new capability under three scenarios: an analytical expression for shallow water flows, a dam break test case, and a realistic application to part of a wetland area along the Georgia Coast, USA.
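The cell-face blocking rule described above (water may always flow into a cell, but outflow across a face is blocked when the donor cell's total depth is below the user-defined critical value) can be sketched in 1D. This toy upwind mass update is illustrative only and is not ROMS code; the variable names and numbers are ours.

```python
# Stdlib sketch of the cell-face blocking idea: inflow always permitted,
# outflow blocked when the donor (upwind) cell is below a critical depth.

DCRIT = 0.05   # user-defined critical depth [m]

def step(depth, velocity_at_faces, dx, dt):
    """Upwind mass update for a 1D row of cells with face blocking."""
    fluxes = [0.0] * (len(depth) + 1)            # faces; closed boundaries
    for f in range(1, len(depth)):
        u = velocity_at_faces[f - 1]
        donor = f - 1 if u > 0 else f            # upwind (donor) cell
        if depth[donor] < DCRIT:
            continue                             # block outflow from a dry cell
        fluxes[f] = u * depth[donor]
    return [h - dt / dx * (fluxes[i + 1] - fluxes[i])
            for i, h in enumerate(depth)]

depth = [1.0, 0.01, 0.5]          # middle cell is effectively dry
u = [0.2, 0.2]                    # rightward flow at both interior faces

new = step(depth, u, dx=1.0, dt=0.1)
print(new)   # the dry cell gains water (inflow allowed) but exports nothing
```

Because blocking only zeroes outgoing face fluxes, the update remains exactly mass-conserving, which is the property that makes the face-blocking approach attractive in a finite-volume model.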

  20. A robust active control system for shimmy damping in the presence of free play and uncertainties

    NASA Astrophysics Data System (ADS)

    Orlando, Calogero; Alaimo, Andrea

    2017-02-01

    Shimmy vibration is the oscillatory motion of the fork-wheel assembly about the steering axis. It represents one of the major problems of aircraft landing gear because it can lead to excessive wear and discomfort, as well as safety concerns. Based on a nonlinear model of the mechanics of a single-wheel nose landing gear (NLG), an electromechanical actuator, and tire elasticity, a robust active controller capable of damping shimmy vibration is designed and investigated in this study. A novel Decline Population Swarm Optimization (PDSO) procedure is introduced and used to select the optimal parameters for the controller. The PDSO procedure is based on a declining demographic model and shows high global search capability with reduced computational costs. The open- and closed-loop system behavior is analyzed under different case studies of aeronautical interest, and the effects of torsional free play on the nose landing gear response are also studied. Probabilistic uncertainties in the plant parameters are then taken into account to assess the robustness of the active controller using a stochastic approach.
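The declining-population idea can be sketched with a toy particle swarm that sheds its worst particles as the run progresses: a large swarm early for global search, a small one late to cut cost. The coefficients, schedule, and test function below are our own illustrative choices, not the paper's PDSO.

```python
# Toy swarm optimizer in the spirit of a declining-population PSO:
# the swarm shrinks linearly from n0 to n_final over the run.
import random

def pdso(f, dim, n0=30, n_final=5, iters=60, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n0)]
    vel = [[0.0] * dim for _ in range(n0)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        # population decline: keep only the n best personal bests
        n = max(n_final, n0 - (n0 - n_final) * t // iters)
        keep = sorted(range(len(pos)), key=lambda i: f(pbest[i]))[:n]
        pos = [pos[i] for i in keep]
        vel = [vel[i] for i in keep]
        pbest = [pbest[i] for i in keep]
        for i, p in enumerate(pos):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if f(p) < f(pbest[i]):
                pbest[i] = p[:]
                if f(p) < f(gbest):
                    gbest = p[:]
    return gbest

sphere = lambda x: sum(xi * xi for xi in x)
best = pdso(sphere, dim=3)
print(sphere(best))   # close to 0
```

In the controller-tuning application, `f` would be a closed-loop performance cost evaluated by simulating the landing-gear model for a candidate set of controller gains.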

  1. MathWorks Simulink and C++ integration with the new VLT PLC-based standard development platform for instrument control systems

    NASA Astrophysics Data System (ADS)

    Kiekebusch, Mario J.; Di Lieto, Nicola; Sandrock, Stefan; Popovic, Dan; Chiozzi, Gianluca

    2014-07-01

    ESO is in the process of implementing a new development platform, based on PLCs, for upcoming VLT control systems (new instruments and refurbishing of existing systems to manage obsolescence issues). In this context, we have evaluated the integration and reuse of existing C++ libraries and Simulink models in the real-time environment of BECKHOFF Embedded PCs using the capabilities of the latest version of the TwinCAT software and MathWorks Embedded Coder. In doing so, the aim was to minimize the impact of the new platform by adopting fully tested solutions implemented in C++. This allows us to reuse in-house expertise, as well as to extend the normal capabilities of traditional PLC programming environments. We present the progress of this work and its application in two concrete cases: 1) field rotation compensation for instrument tracking devices like derotators, and 2) the ESO standard axis controller (ESTAC), a generic model-based controller implemented in Simulink and used for the control of telescope main axes.

  2. Time Prediction Models for Echinococcosis Based on Gray System Theory and Epidemic Dynamics

    PubMed Central

    Zhang, Liping; Wang, Li; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2017-01-01

    Echinococcosis, which can seriously harm human health and animal husbandry production, has become endemic in the Xinjiang Uygur Autonomous Region of China. In order to explore an effective human Echinococcosis forecasting model for Xinjiang, three grey models, namely the traditional grey GM(1,1) model, the Grey-Periodic Extensional Combinatorial Model (PECGM(1,1)), and the Modified Grey Model using Fourier Series (FGM(1,1)), in addition to a multiplicative seasonal ARIMA(1,0,1)(1,1,0)4 model, are applied in this study for short-term predictions. The accuracy of the different grey models is also investigated. The simulation results show that the FGM(1,1) model performs better, not only in model fitting but also in forecasting. Furthermore, considering long-run stability and modeling precision, a dynamic epidemic prediction model based on the transmission mechanism of Echinococcosis is also established for long-term predictions. Results demonstrate that the dynamic epidemic prediction model is capable of capturing the future trend. The number of human Echinococcosis cases will increase steadily over the next 25 years, reaching a peak of about 1250 cases, before slowly declining until the epidemic finally dies out. PMID:28273856
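The baseline GM(1,1) model referenced above accumulates the series, fits a first-order grey differential equation by least squares, and de-accumulates the time-response function to forecast. A minimal sketch of the standard algorithm (the plain GM(1,1) only, not the PECGM or Fourier-modified variants used in the study):

```python
import math

def gm11_forecast(x0, steps):
    """Fit a GM(1,1) grey model to the positive series x0 and forecast
    `steps` values ahead."""
    n = len(x0)
    # 1-AGO: first-order accumulated generating operation
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # background values: means of consecutive accumulated terms
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    # least squares for x0[k] = -a*z1[k] + b (normal equations, 2 unknowns)
    m = n - 1
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    szy = sum(z * v for z, v in zip(z1, y))
    sy = sum(y)
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det    # development coefficient
    b = (szz * sy - sz * szy) / det  # grey input
    # time-response function, then de-accumulate to recover x0 forecasts
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
    return fitted[n:]
```

Grey models like this one are designed for short series (four points suffice), which is why the abstract pairs them with a mechanistic dynamic model for the long-term outlook.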

  3. Tellurium notebooks-An environment for reproducible dynamical modeling in systems biology.

    PubMed

    Medley, J Kyle; Choi, Kiri; König, Matthias; Smith, Lucian; Gu, Stanley; Hellerstein, Joseph; Sealfon, Stuart C; Sauro, Herbert M

    2018-06-01

    The considerable difficulty encountered in reproducing the results of published dynamical models limits validation, exploration and reuse of this increasingly large biomedical research resource. To address this problem, we have developed Tellurium Notebook, a software system for model authoring, simulation, and teaching that facilitates building reproducible dynamical models and reusing models by 1) providing a notebook environment which allows models, Python code, and narrative to be intermixed, 2) supporting the COMBINE archive format during model development for capturing model information in an exchangeable format, and 3) enabling users to easily simulate and edit public COMBINE-compliant models from public repositories to facilitate studying model dynamics, variants and test cases. Tellurium Notebook, a Python-based Jupyter-like environment, is designed to seamlessly interoperate with these community standards by automating conversion between COMBINE standards formulations and corresponding in-line, human-readable representations. Thus, Tellurium brings to systems biology the strategy used by other literate notebook systems such as Mathematica. These capabilities allow users to edit every aspect of the standards-compliant models and simulations, run the simulations in-line, and re-export to standard formats. We provide several use cases illustrating the advantages of our approach and how it allows development and reuse of models without requiring technical knowledge of standards. Adoption of Tellurium should accelerate model development, reproducibility and reuse.

  4. The study of knowledge management capability and organizational effectiveness in Taiwanese public utility: the mediator role of organizational commitment.

    PubMed

    Chiu, Chia-Nan; Chen, Huei-Huang

    2016-01-01

    Many studies on the significance of knowledge management (KM) in the business world have been performed in recent years. Public sector KM is a research area of growing importance. Findings show that few authors specialize in the field and that there are several obstacles to developing a cohesive body of literature. In order to examine the effect of knowledge management capability [which consists of knowledge infrastructure capability (KIC) and knowledge process capability (KPC)] on organizational effectiveness (OE), this study conducted structural equation modeling to test the hypotheses with 302 questionnaires completed by Taipei Water Department staff in Taiwan. In exploring the model developed in this study, the findings show a significant relationship between KPC and OE, while the relationship between KIC and OE is insignificant. These results differ from earlier findings in the literature. Furthermore, this research proposed organizational commitment (OC) as a mediator. The findings suggest that only OC has a significant mediating effect between KPC and OE, whereas this is not the case for KIC and OE. Notably, these findings suggest that managers, in addition to constructing knowledge infrastructure, should focus on social media tools on the Internet, which engage knowledge workers in "peer-to-peer" knowledge sharing across organizational and company boundaries. The results are likely to help organizations (particularly public utilities) sharpen their knowledge management strategies. Academic and practical implications were drawn based on the findings.

  5. Into the development of a model to assess beam shaping and polarization control effects on laser cutting

    NASA Astrophysics Data System (ADS)

    Rodrigues, Gonçalo C.; Duflou, Joost R.

    2018-02-01

    This paper offers an in-depth look into beam shaping and polarization control as two of the most promising techniques for improving industrial laser cutting of metal sheets. An assessment model is developed for the study of such effects. It is built upon several modifications to models available in the literature in order to evaluate the potential of a wide range of considered concepts. This includes different kinds of beam shaping (achieved by extra-cavity optical elements or asymmetric diode stacking) and polarization control techniques (linear, cross, radial, azimuthal). A fully mathematical description and solution procedure are provided. Three case studies for direct diode lasers follow, containing both experimental data and parametric studies. In the first case study, linear polarization is analyzed for any given angle between the cutting direction and the electrical field. In the second case, several polarization strategies are compared for similar cut conditions, evaluating, for example, the minimum number of spatial divisions of a segmented polarized laser beam needed to achieve a target performance. A novel strategy, based on a 12-division linear-to-radial polarization converter with an axis misalignment and capable of improving cutting efficiency by more than 60%, is proposed. The last case study reveals different insights into beam shaping techniques, with an example of a beam shape optimization path for a 30% improvement in cutting efficiency. The proposed techniques are not limited to this type of laser source, nor is the model dedicated to these specific case studies. Limitations of the model and opportunities are further discussed.

  6. A network control theory approach to modeling and optimal control of zoonoses: case study of brucellosis transmission in sub-Saharan Africa.

    PubMed

    Roy, Sandip; McElwain, Terry F; Wan, Yan

    2011-10-01

    Developing control policies for zoonotic diseases is challenging, both because of the complex spread dynamics exhibited by these diseases, and because of the need for implementing complex multi-species surveillance and control efforts using limited resources. Mathematical models, and in particular network models, of disease spread are promising as tools for control-policy design, because they can provide comprehensive quantitative representations of disease transmission. A layered dynamical network model for the transmission and control of zoonotic diseases is introduced as a tool for analyzing disease spread and designing cost-effective surveillance and control. The model development is achieved using brucellosis transmission among wildlife, cattle herds, and human sub-populations in an agricultural system as a case study. Specifically, a model that tracks infection counts in interacting animal herds of multiple species (e.g., cattle herds and groups of wildlife for brucellosis) and in human subpopulations is introduced. The model is then abstracted to a form that permits comprehensive targeted design of multiple control capabilities as well as model identification from data. Next, techniques are developed for such quantitative design of control policies (that are directed to both the animal and human populations), and for model identification from snapshot and time-course data, by drawing on recent results in the network control community. The modeling approach is shown to provide quantitative insight into comprehensive control policies for zoonotic diseases, and in turn to permit policy design for mitigation of these diseases. For the brucellosis-transmission example in particular, numerous insights are obtained regarding the optimal distribution of resources among available control capabilities (e.g., vaccination, surveillance and culling, pasteurization of milk) and points in the spread network (e.g., transhumance vs. sedentary herds).
In addition, a preliminary identification of the network model for brucellosis is achieved using historical data, and the robustness of the obtained model is demonstrated. As a whole, our results indicate that network modeling can aid in designing control policies for zoonotic diseases.
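The layered structure can be illustrated with a toy discrete-time prevalence model in which infection flows from wildlife to cattle to humans, with humans as a dead end. All compartments, rate names and values below are illustrative assumptions, not the paper's fitted brucellosis model:

```python
def layered_sis_step(state, params):
    """One discrete-time update of a toy three-layer prevalence model
    (wildlife -> cattle -> human). `state` holds infected fractions;
    `params` holds within-layer transmission rates (beta_w, beta_c),
    cross-layer rates (beta_wc, beta_ch), and recovery rates (gamma_*)."""
    w, c, h = state
    bw, bwc, bc, bch, gw, gc, gh = (params[k] for k in
        ("beta_w", "beta_wc", "beta_c", "beta_ch",
         "gamma_w", "gamma_c", "gamma_h"))
    # infection enters each layer from within the layer and from the layer above
    w_new = w + bw * w * (1 - w) - gw * w
    c_new = c + (bc * c + bwc * w) * (1 - c) - gc * c
    h_new = h + bch * c * (1 - h) - gh * h  # humans do not transmit onward
    clamp = lambda v: min(max(v, 0.0), 1.0)
    return (clamp(w_new), clamp(c_new), clamp(h_new))
```

Iterating this map shows the qualitative point the abstract makes: controls applied upstream (wildlife or cattle terms) suppress human prevalence indirectly, which is why resource allocation across layers becomes an optimization problem.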

  7. A Network Control Theory Approach to Modeling and Optimal Control of Zoonoses: Case Study of Brucellosis Transmission in Sub-Saharan Africa

    PubMed Central

    Roy, Sandip; McElwain, Terry F.; Wan, Yan

    2011-01-01

    Background Developing control policies for zoonotic diseases is challenging, both because of the complex spread dynamics exhibited by these diseases, and because of the need for implementing complex multi-species surveillance and control efforts using limited resources. Mathematical models, and in particular network models, of disease spread are promising as tools for control-policy design, because they can provide comprehensive quantitative representations of disease transmission. Methodology/Principal Findings A layered dynamical network model for the transmission and control of zoonotic diseases is introduced as a tool for analyzing disease spread and designing cost-effective surveillance and control. The model development is achieved using brucellosis transmission among wildlife, cattle herds, and human sub-populations in an agricultural system as a case study. Specifically, a model that tracks infection counts in interacting animal herds of multiple species (e.g., cattle herds and groups of wildlife for brucellosis) and in human subpopulations is introduced. The model is then abstracted to a form that permits comprehensive targeted design of multiple control capabilities as well as model identification from data. Next, techniques are developed for such quantitative design of control policies (that are directed to both the animal and human populations), and for model identification from snapshot and time-course data, by drawing on recent results in the network control community. Conclusions/Significance The modeling approach is shown to provide quantitative insight into comprehensive control policies for zoonotic diseases, and in turn to permit policy design for mitigation of these diseases.
For the brucellosis-transmission example in particular, numerous insights are obtained regarding the optimal distribution of resources among available control capabilities (e.g., vaccination, surveillance and culling, pasteurization of milk) and points in the spread network (e.g., transhumance vs. sedentary herds). In addition, a preliminary identification of the network model for brucellosis is achieved using historical data, and the robustness of the obtained model is demonstrated. As a whole, our results indicate that network modeling can aid in designing control policies for zoonotic diseases. PMID:22022621

  8. Hypersonic Navier Stokes Comparisons to Orbiter Flight Data

    NASA Technical Reports Server (NTRS)

    Campbell, Charles H.; Nompelis, Ioannis; Candler, Graham; Barnhart, Michael; Yoon, Seokkwan

    2009-01-01

    Hypersonic chemical nonequilibrium simulations of low earth orbit entry flow fields are becoming increasingly commonplace as software and computational resources become more capable. However, development of robust and accurate software to model these environments will always encounter a significant barrier in developing a suite of high quality calibration cases. The US3D hypersonic nonequilibrium Navier-Stokes analysis capability has been favorably compared to a number of wind tunnel test cases. Extension of the calibration basis for this software to Orbiter flight conditions will provide an incremental increase in confidence. As part of the Orbiter Boundary Layer Transition Flight Experiment and the Hypersonic Thermodynamic Infrared Measurements project, NASA is performing entry flight testing on the Orbiter to provide valuable aerothermodynamic heating data. This activity has generated increased interest in Orbiter entry environments. With the advent of this new data, comparisons of the US3D software to the new flight testing data are warranted. This paper will provide information regarding the framework of analyses that will be applied with the US3D analysis tool. In addition, comparisons will be made to entry flight testing data provided by the Orbiter BLT Flight Experiment and HYTHIRM projects. If data from digital scans of the Orbiter windward surface become available, simulations will also be performed to characterize the difference in surface heating between the CAD reference OML and the digitized surface provided by the surface scans.

  9. ICM: Bridging the Capability Gap between 1 January 2019 and the Replacement Munition

    DTIC Science & Technology

    2017-06-09

    fires. This qualitative case study focused on the ICM capability gap and potential solutions for the U.S. Field Artillery cannon and rocket systems... one of those countries, the United States could find itself committed in their defense. This case study will be used to provide insight to the...context of the potential ramifications of the 2008 DoD Policy on Cluster Munitions and Unintended Harm to Civilians, a case study on Russian military

  10. Are CMEs capable of producing Moreton waves? A case study: the 2006 December 6 event

    NASA Astrophysics Data System (ADS)

    Krause, G.; Cécere, M.; Zurbriggen, E.; Costa, A.; Francile, C.; Elaskar, S.

    2018-02-01

    Considering the chromosphere and a stratified corona, we examine, by performing 2D compressible magnetohydrodynamics simulations, the capability of a coronal mass ejection (CME) scenario to drive a Moreton wave. We find that given a typical flux rope (FR) magnetic configuration, in initial pseudo-equilibrium, the larger the magnetic field and the lighter (and hotter) the FR, the larger the amplitude and the speed of the chromospheric disturbance, which eventually becomes a Moreton wave. We present arguments to explain why Moreton waves are much rarer than CME occurrences. In the frame of the present model, we explicitly exclude the action of flares that could be associated with the CME. Analysing the Mach number, we find that only fast magnetosonic shock waves will be able to produce Moreton events. In these cases an overexpansion of the FR is always present and it is the main factor responsible for the Moreton generation. Finally, we show that this scenario can account for the Moreton wave of the 2006 December 6 event (Francile et al. 2013).

  11. A Social Diffusion Model with an Application on Election Simulation

    PubMed Central

    Wang, Fu-Min; Hung, San-Chuan; Kung, Perng-Hwa; Lin, Shou-De

    2014-01-01

    Issues about opinion diffusion have been studied for decades, yet there has so far been no empirical approach to modeling the interflow and formation of crowd opinion in elections, for two reasons. First, unlike the spread of information or flu, individuals hold intrinsic attitudes toward election candidates in advance. Second, opinions are generally assumed to be single values in most diffusion models; in this case, however, an opinion should represent preference toward multiple candidates. Previous models thus may not intuitively capture such a scenario. This work designs a diffusion model capable of handling the aforementioned scenario. To demonstrate the usefulness of our model, we simulate the diffusion on a network built from a publicly available bibliography dataset. We compare the proposed model with other well-known models such as independent cascade, and our model consistently outperforms them. We additionally investigate electoral issues with our model simulator. PMID:24995351

  12. Physically based modeling of rainfall-triggered landslides: a case study in the Luquillo forest, Puerto Rico

    NASA Astrophysics Data System (ADS)

    Lepore, C.; Arnone, E.; Noto, L. V.; Sivandran, G.; Bras, R. L.

    2013-09-01

    This paper presents the development of a rainfall-triggered landslide module within an existing physically based, spatially distributed ecohydrologic model. The model, tRIBS-VEGGIE (Triangulated Irregular Networks-based Real-time Integrated Basin Simulator and Vegetation Generator for Interactive Evolution), is capable of a sophisticated description of many hydrological processes; in particular, the soil moisture dynamics are resolved at the temporal and spatial resolution required to examine the triggering mechanisms of rainfall-induced landslides. The applicability of the tRIBS-VEGGIE model to a tropical environment is demonstrated with an evaluation of its performance against direct observations made within the study area of the Luquillo Forest. The newly developed landslide module builds upon the previous version of the tRIBS landslide component. This new module utilizes a numerical solution to the Richards' equation (present in tRIBS-VEGGIE but not in tRIBS), which better represents the time evolution of soil moisture transport through the soil column. Moreover, the new landslide module utilizes an extended formulation of the factor of safety (FS) to correctly quantify the role of matric suction in slope stability and to account for unsaturated conditions in the evaluation of FS. The new modeling framework couples the capabilities of the detailed hydrologic model to describe soil moisture dynamics with the infinite slope model, creating a powerful tool for the assessment of rainfall-triggered landslide risk.
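The role of matric suction in the factor of safety can be illustrated with the classical infinite-slope formula with a pore-pressure term: negative pore pressure (suction) increases the effective normal stress and hence FS, which is the effect the extended unsaturated formulation accounts for. This is a textbook sketch; the module's exact FS expression may differ in how the suction term is weighted.

```python
import math

def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, pore_pressure):
    """Infinite-slope factor of safety on a slip plane parallel to the slope.
    c_eff: effective cohesion [kPa]; phi_deg: friction angle [deg];
    gamma: unit weight [kN/m^3]; z: vertical soil depth [m];
    beta_deg: slope angle [deg]; pore_pressure [kPa] (negative = suction).
    FS < 1 indicates predicted failure."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    # stresses on the slip plane for a slab of thickness z
    normal_stress = gamma * z * math.cos(beta) ** 2
    shear_stress = gamma * z * math.sin(beta) * math.cos(beta)
    # Mohr-Coulomb resistance over driving shear; suction adds to resistance
    return (c_eff + (normal_stress - pore_pressure) * math.tan(phi)) / shear_stress
```

For dry cohesionless soil this reduces to the familiar FS = tan(phi)/tan(beta), and a rainfall-driven rise in pore pressure (loss of suction) lowers FS, which is the triggering mechanism the coupled model resolves in time.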

  13. Predicting language diversity with complex networks.

    PubMed

    Raducha, Tomasz; Gubiec, Tomasz

    2018-01-01

    We analyze a model of social interactions with coevolution of the topology and the states of the nodes. This model can be interpreted as a model of language change. We propose different rewiring mechanisms and perform numerical simulations for each. The obtained results are compared with empirical data gathered from two online databases and an anthropological study of the Solomon Islands. We study the behavior of the number of languages for different system sizes and find that only local rewiring, i.e., triadic closure, is capable of qualitatively reproducing the empirical data. Furthermore, we resolve the contradiction between previous models and the Solomon Islands case. Our results demonstrate the importance of the topology of the network, and of the rewiring mechanism, in the process of language change.
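The triadic-closure rewiring can be sketched with a toy coevolving model: repeatedly pick a node with a discordant neighbor, then either rewire that edge to a neighbor's neighbor (local rewiring) or let the node adopt the neighbor's language. The rules below are schematic assumptions for illustration, not the paper's exact update:

```python
import random

def coevolve(n=50, k=4, q=10, steps=2000, p=0.3, seed=2):
    """Toy coevolving network: each node holds one of q 'languages'.
    With probability p a discordant edge is rewired by triadic closure
    (reattached to a friend-of-friend); otherwise the node adopts its
    neighbor's language. Returns the number of surviving languages."""
    rng = random.Random(seed)
    # ring lattice with k neighbors per node
    nbrs = {i: set((i + d) % n for d in range(1, k // 2 + 1))
               | set((i - d) % n for d in range(1, k // 2 + 1))
            for i in range(n)}
    lang = [rng.randrange(q) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        discordant = [j for j in nbrs[i] if lang[j] != lang[i]]
        if not discordant:
            continue
        j = rng.choice(discordant)
        if rng.random() < p:
            # triadic closure: reconnect i to a neighbor's neighbor
            candidates = set()
            for m in nbrs[i]:
                candidates |= nbrs[m]
            candidates -= nbrs[i] | {i}
            if candidates:
                new = rng.choice(sorted(candidates))  # sorted for determinism
                nbrs[i].discard(j); nbrs[j].discard(i)
                nbrs[i].add(new); nbrs[new].add(i)
        else:
            lang[i] = lang[j]  # adopt the neighbor's language
    return len(set(lang))
```

Because rewiring is restricted to friends-of-friends, like-speaking clusters can seal themselves off, letting several languages coexist instead of collapsing to one; that is the qualitative behavior the paper attributes to local rewiring.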

  14. Discrete statistical model of fatigue crack growth in a Ni-base superalloy, capable of life prediction

    NASA Astrophysics Data System (ADS)

    Boyd-Lee, Ashley; King, Julia

    1992-07-01

    A discrete statistical model of fatigue crack growth in the nickel-base superalloy Waspaloy, quantitative from the start of the short-crack regime to failure, is presented. Instantaneous crack growth rate distributions and persistence-of-arrest distributions are used to compute fatigue lives and worst-case scenarios without extrapolation. The basis of the model is not material-specific, and it provides an improved method of analyzing crack growth rate data. For Waspaloy, the model shows the importance of good bulk fatigue crack growth resistance in resisting early short fatigue crack growth, and the importance of maximizing crack arrest both through the presence of a proportion of small grains and through maximizing grain boundary corrugation.

  15. M-Split: A Graphical User Interface to Analyze Multilayered Anisotropy from Shear Wave Splitting

    NASA Astrophysics Data System (ADS)

    Abgarmi, Bizhan; Ozacar, A. Arda

    2017-04-01

    Shear wave splitting analyses are commonly used to infer deep anisotropic structure. For simple cases, the obtained delay times and fast-axis orientations are averaged from reliable results to define the anisotropy beneath recording seismic stations. However, splitting parameters show systematic variations with back azimuth in the presence of complex anisotropy and cannot be represented by an average time delay and fast-axis orientation. Previous researchers have identified anisotropic complexities in different tectonic settings and applied various approaches to model them. Most commonly, such complexities are modeled by using multiple anisotropic layers with a priori constraints from geologic data. In this study, a graphical user interface called M-Split is developed to easily process and model multilayered anisotropy, with capabilities to properly address the inherent non-uniqueness. The M-Split program runs user-defined grid searches through the model parameter space for two-layer anisotropy using the formulation of Silver and Savage (1994) and creates sensitivity contour plots to locate local maxima and analyze all possible models with parameter tradeoffs. In order to minimize model ambiguity and identify the robust model parameters, various misfit calculation procedures are also developed and embedded in M-Split; these can be used depending on the quality of the observations and their back-azimuthal coverage. Case studies were carried out to evaluate the reliability of the program using real, noisy data; for this purpose, stations from two different networks were utilized. The first is the Kandilli Observatory and Earthquake Research Institute (KOERI) network, which includes long-running permanent stations; the second comprises seismic stations deployed temporarily as part of the "Continental Dynamics-Central Anatolian Tectonics (CD-CAT)" project funded by NSF.
    It is also worth noting that M-Split is designed as an open-source program which can be modified by users for additional capabilities or for other applications.
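The user-defined grid search can be sketched as a brute-force scan over the four two-layer parameters (a fast-axis azimuth and a delay time per layer). The misfit function here is user-supplied and would, in M-Split's case, wrap the Silver and Savage (1994) two-layer prediction, which is not reproduced in this sketch; grid steps and bounds are illustrative assumptions.

```python
def grid_search_2layer(misfit, phi_step=10.0, dt_step=0.4, dt_max=4.0):
    """Exhaustive grid search over two anisotropic layers.
    misfit(phi1, dt1, phi2, dt2) -> scalar, where phi* are fast-axis
    azimuths in degrees [0, 180) and dt* are delay times in seconds.
    Returns the best parameter tuple and its misfit value."""
    best, best_params = float("inf"), None
    phis = [p * phi_step for p in range(int(180 / phi_step))]
    dts = [d * dt_step for d in range(1, int(dt_max / dt_step) + 1)]
    for phi1 in phis:
        for dt1 in dts:
            for phi2 in phis:
                for dt2 in dts:
                    m = misfit(phi1, dt1, phi2, dt2)
                    if m < best:
                        best, best_params = m, (phi1, dt1, phi2, dt2)
    return best_params, best
```

Evaluating the full grid (rather than a local optimizer) is what makes the sensitivity contour plots and the analysis of parameter tradeoffs possible: every point in the parameter space gets a misfit value, so local maxima and tradeoff valleys are visible.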

  16. Mapping environmental susceptibility to Saint Louis encephalitis virus, based on a decision tree model of remotely-sensed data.

    PubMed

    Rotela, Camilo H; Spinsanti, Lorena I; Lamfri, Mario A; Contigiani, Marta S; Almirón, Walter R; Scavuzzo, Carlos M

    2011-11-01

    In response to the first human outbreak (January-May 2005) of Saint Louis encephalitis (SLE) virus in Córdoba province, Argentina, we developed an environmental SLE virus risk map for the capital, i.e. Córdoba city. The aim was to provide a map capable of detecting macro-environmental factors associated with the spatial distribution of SLE cases, based on remotely sensed data and a geographical information system. Vegetation, soil brightness, humidity status, distances to water bodies and areas covered by vegetation were assessed based on pre-outbreak images provided by the Landsat 5 TM satellite. A strong inverse relationship between the number of humans infected by SLEV and distance to high-vigor vegetation was noted. A statistical non-hierarchic decision tree model was constructed, based on environmental variables representing the areas surrounding patient residences. On this basis, 18% of the city could be classified as being at high risk for SLEV infection, while 34% carried low risk, or none at all. Taking the whole 2005 epidemic into account, 80% of the cases came from areas classified by the model as medium-high or high risk. Almost 46% of the cases were registered in high-risk areas, while there were no cases (0%) in areas classified as risk-free.

  17. Predictors of the risk factors for suicide identified by the interpersonal-psychological theory of suicidal behaviour.

    PubMed

    Christensen, Helen; Batterham, Philip James; Mackinnon, Andrew J; Donker, Tara; Soubelet, Andrea

    2014-10-30

    The Interpersonal-Psychological Theory of Suicide (IPTS) has been supported by recent research. However, the nature of the model's three major constructs (perceived burdensomeness, thwarted belongingness and acquired capability) requires further investigation. In this paper, we test a number of hypotheses about the predictors and correlates of the IPTS constructs. Participants aged 32-38 from an Australian population-based longitudinal cohort study (n=1167) were assessed. IPTS constructs were measured by items from the Interpersonal Needs Questionnaire (INQ) and the Acquired Capability for Suicide Scale (ACSS), alongside demographic and additional measures, assessed concurrently or approximately 8 years earlier. Cross-sectional analyses evaluating the IPTS supported earlier work. Mental health was significantly related to all three IPTS constructs, but depression and anxiety caseness were associated only with perceived burdensomeness. Various social support measures were differentially associated with the three constructs. Stressful events and lifetime traumas had robust independent associations with acquired capability for suicide only. The IPTS model provides a useful framework for conceptualising suicide risk. The findings highlight the importance of perceived social support in suicide risk, identify the importance of personality and other factors as new avenues of research, and provide some validation for the independence of the constructs. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. The Application of a Trade Study Methodology to Determine Which Capabilities to Implement in a Test Facility Data Acquisition System Upgrade

    NASA Technical Reports Server (NTRS)

    McDougal, Kristopher J.

    2008-01-01

    More and more test programs are requiring high-frequency measurements. Marshall Space Flight Center's Cold Flow Test Facility has an interest in acquiring such data, which requires special hardware and capabilities. This document provides a structured trade study approach for determining which additional capabilities of a VXI-based data acquisition system should be utilized to meet the test facility objectives. The paper focuses on the trade study approach, detailing and demonstrating the methodology. A case is presented in which a trade study was initially performed to provide a recommendation for the data system capabilities. Implementation details of the recommended alternative are briefly provided, as well as the system's performance during a subsequent test program. The paper then addresses revisiting the trade study with modified alternatives and attributes to address issues that arose during the subsequent test program. Although the model does not identify a single best alternative for all sensitivities, the trade study process does provide a much better understanding, which makes it possible to confidently recommend Alternative 3 as the preferred alternative.

  19. Efficient modeling of Bragg coherent x-ray nanobeam diffraction

    DOE PAGES

    Hruszkewycz, S. O.; Holt, M. V.; Allain, M.; ...

    2015-07-02

    X-ray Bragg diffraction experiments that utilize tightly focused coherent beams produce complicated Bragg diffraction patterns that depend on scattering geometry, characteristics of the sample, and properties of the x-ray focusing optic. In this paper, we use a Fourier-transform-based method of modeling the 2D intensity distribution of a Bragg peak and apply it to the case of thin films illuminated with a Fresnel zone plate in three different Bragg scattering geometries. The calculations agree well with experimental coherent diffraction patterns, demonstrating that nanodiffraction patterns can be modeled at nonsymmetric Bragg conditions with this approach, a capability critical for advancing nanofocused x-ray diffraction microscopy.

  20. Hierarchical Ada robot programming system (HARPS)- A complete and working telerobot control system based on the NASREM model

    NASA Technical Reports Server (NTRS)

    Leake, Stephen; Green, Tom; Cofer, Sue; Sauerwein, Tim

    1989-01-01

    HARPS is a telerobot control system that can perform some simple but useful tasks. This capability is demonstrated by performing the ORU exchange demonstration. HARPS is based on NASREM (NASA Standard Reference Model). All software is developed in Ada, and the project incorporates a number of different CASE (computer-aided software engineering) tools. NASREM was found to be a valid and useful model for building a telerobot control system. Its hierarchical and distributed structure creates a natural and logical flow for implementing large complex robust control systems. The ability of Ada to create and enforce abstraction enhanced the implementation of such control systems.

  1. An optoelectric professional's training model based on Unity of Knowing and Doing theory

    NASA Astrophysics Data System (ADS)

    Qin, Shiqiao; Wu, Wei; Zheng, Jiaxing; Wang, Xingshu; Zhao, Yingwei

    2017-08-01

    The "Unity of Knowing and Doing" (UKD) theory was proposed by the ancient Chinese philosopher Wang Shouren in 1508 and explains how to unify knowledge and practice. Unlike the traditional Chinese UKD theory, international higher education usually treats knowledge and practice as independent and puts more emphasis on knowledge. Guided by the UKD theory, the College of Opto-electric Science and Engineering (COESE) at the National University of Defense Technology (NUDT) explores a novel training model for cultivating opto-electric professionals spanning classroom teaching, practice experiment, system experiment, design experiment, research experiment and innovation experiment (CPSDRI). This model aims at promoting the unity of knowledge and practice, takes the improvement of students' capabilities as its main concern, and tries to advance students from cognition to professional action competence. It contains two hierarchies: cognition (CPS) and action competence (DRI). In the cognition hierarchy, students focus on learning and mastering the professional knowledge of optics, opto-electric technology, lasers, computers, electronics and machinery through classroom teaching, practice experiment and system experiment (CPS). Great attention is paid to case teaching, which links knowledge with practice. In the action competence hierarchy, emphasis is placed on promoting students' capability of using knowledge to solve practical problems through design experiment, research experiment and innovation experiment (DRI). In this model, knowledge is divided into different modules and capability is cultivated at different levels. The model combines classroom teaching and experimental teaching in a synergetic way and unifies cognition and practice, providing a valuable reference for the cultivation of opto-electric undergraduate professionals.

  2. Aerosol specification in single-column Community Atmosphere Model version 5

    DOE PAGES

    Lebassi-Habtezion, B.; Caldwell, P. M.

    2015-03-27

    Single-column model (SCM) capability is an important tool for general circulation model development. In this study, the SCM mode of version 5 of the Community Atmosphere Model (CAM5) is shown to handle aerosol initialization and advection improperly, resulting in aerosol, cloud-droplet, and ice crystal concentrations which are typically much lower than observed or simulated by CAM5 in global mode. This deficiency has a major impact on stratiform cloud simulations but has little impact on convective case studies because aerosol is currently not used by CAM5 convective schemes and convective cases are typically longer in duration (so initialization is less important). By imposing fixed aerosol or cloud-droplet and crystal number concentrations, the aerosol issues described above can be avoided. Sensitivity studies using these idealizations suggest that the Meyers et al. (1992) ice nucleation scheme prevents mixed-phase cloud from existing by producing too many ice crystals. Microphysics is shown to strongly deplete cloud water in stratiform cases, indicating problems with sequential splitting in CAM5 and the need for careful interpretation of output from sequentially split climate models. Droplet concentration in the general circulation model (GCM) version of CAM5 is also shown to be far too low (~25 cm^-3) at the Southern Great Plains (SGP) Atmospheric Radiation Measurement (ARM) site.

  3. Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases.

    PubMed

    Neal, Maxwell L; Carlson, Brian E; Thompson, Christopher T; James, Ryan C; Kim, Karam G; Tran, Kenneth; Crampin, Edmund J; Cook, Daniel L; Gennari, John H

    2015-01-01

    Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen's semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the "Pandit-Hinch-Niederer" (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach.

  4. Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases

    PubMed Central

    Neal, Maxwell L.; Carlson, Brian E.; Thompson, Christopher T.; James, Ryan C.; Kim, Karam G.; Tran, Kenneth; Crampin, Edmund J.; Cook, Daniel L.; Gennari, John H.

    2015-01-01

    Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen’s semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the “Pandit-Hinch-Niederer” (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach. PMID:26716837

  5. Data Assimilation as a Tool for Developing a Mars International Reference Atmosphere

    NASA Technical Reports Server (NTRS)

    Houben, Howard

    2005-01-01

    A new paradigm for a Mars International Reference Atmosphere is proposed. In general, as is certainly now the case for Mars, there are sufficient observational data to specify what the full atmospheric state was under a variety of circumstances (season, dustiness, etc.). There are also general circulation models capable of determining the evolution of these states. If these capabilities are combined, using data assimilation techniques, the resulting analyzed states can be probed to answer a wide variety of questions, whether posed by scientists, mission planners, or others. This system would fulfill all the purposes of an international reference atmosphere and would make the scientific results of exploration missions readily available to the community. Preliminary work on a website that would incorporate this functionality has begun.

  6. Numerical Modeling of River Ice Processes on the Lower Nelson River

    NASA Astrophysics Data System (ADS)

    Malenchak, Jarrod Joseph

    Water resource infrastructure in cold regions of the world can be significantly impacted by the existence of river ice. Major engineering concerns related to river ice include ice jam flooding, the design and operation of hydropower facilities and other hydraulic structures, water supplies, as well as ecological, environmental, and morphological effects. The use of numerical simulation models has been identified as one of the most efficient means by which river ice processes can be studied and the effects of river ice evaluated. The continued advancement of these simulation models will help to develop new theories and evaluate potential mitigation alternatives for these ice issues. In this thesis, a literature review of existing river ice numerical models, of anchor ice formation and modeling studies, and of aufeis formation and modeling studies is conducted. A high-level summary of the two-dimensional CRISSP numerical model is presented as well as the developed freeze-up model with a focus specifically on the anchor ice and aufeis growth processes. This model includes development in the detailed heat transfer calculations, an improved surface ice mass exchange model which includes the rapids entrainment process, and an improved dry bed treatment model along with the expanded anchor ice and aufeis growth model. The developed sub-models are tested in an ideal channel setting as a form of model confirmation. A case study of significant anchor ice and aufeis growth on the Nelson River in northern Manitoba, Canada, will be the primary field test case for the anchor ice and aufeis model. A second case study on the same river will be used to evaluate the surface ice components of the model in a field setting. The results from these case studies will be used to highlight the capabilities and deficiencies in the numerical model and to identify areas of further research and model development.

  7. Neural network identification of aircraft nonlinear aerodynamic characteristics

    NASA Astrophysics Data System (ADS)

    Egorchev, M. V.; Tiumentsev, Yu V.

    2018-02-01

    The simulation problem for controlled aircraft motion is considered in the case of imperfect knowledge of the modeled object and its operating conditions. The work aims to develop a class of modular semi-empirical dynamic models that combine the capabilities of theoretical and neural network modeling. We consider the use of semi-empirical neural network models for solving the problem of identifying the aerodynamic characteristics of an aircraft. We also discuss the problem of forming a representative set of data characterizing the behavior of a simulated dynamic system, which is one of the critical tasks in the synthesis of ANN models. The effectiveness of the proposed approach is demonstrated using a simulation example of aircraft angular motion and identification of the corresponding coefficients of aerodynamic forces and moments.

  8. Enhanced capabilities and modified users manual for axial-flow compressor conceptual design code CSPAN

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.; Lavelle, Thomas M.

    1995-01-01

    Modifications made to the axial-flow compressor conceptual design code CSPAN are documented in this report. Endwall blockage and stall margin predictions were added. The loss-coefficient model was upgraded. Default correlations for rotor and stator solidity and aspect-ratio inputs and for stator-exit tangential velocity inputs were included in the code, along with defaults for aerodynamic design limits. A complete description of input and output, along with sample cases, is included.

  9. The Forest Health Initiative, American chestnut (Castanea dentata) as a model for forest tree restoration: Biological Research Program

    Treesearch

    C. Dana Nelson; W.A. Powell; C.A. Maynard; K.M. Baier; A. Newhouse; S.A. Merkle; C.J. Nairn; L. Kong; J.E. Carlson; C. Addo-Quaye; M.E. Staton; F.V. Hebard; L.L. Georgi; A.G. Abbott; B.A. Olukolu; T. Zhebentyayeva

    2013-01-01

    The Forest Health Initiative (FHI) was developed and implemented to test the hypothesis that a coordinated effort in biotechnology research could lead to resistant trees capable of restoring a species in a relevant time frame. As a test case, the American chestnut (Castanea dentata) was chosen for study as it is an iconic forest tree species in the eastern United...

  10. Communications, Navigation, and Surveillance Models in ACES: Design Implementation and Capabilities

    NASA Technical Reports Server (NTRS)

    Kubat, Greg; Vandrei, Don; Satapathy, Goutam; Kumar, Anil; Khanna, Manu

    2006-01-01

    Presentation objectives include: a) Overview of the ACES/CNS System Models Design and Integration; b) Configuration Capabilities available for Models and Simulations using ACES with CNS Modeling; c) Descriptions of recently added, Enhanced CNS Simulation Capabilities; and d) General Concept Ideas that Utilize CNS Modeling to Enhance Concept Evaluations.

  11. Demonstration of load rating capabilities through physical load testing : Sioux County bridge case study.

    DOT National Transportation Integrated Search

    2013-08-01

    The objective of this work, Pilot Project - Demonstration of Capabilities and Benefits of Bridge Load Rating through Physical Testing, was to demonstrate the capabilities for load testing and rating bridges in Iowa, study the economic benefit of perf...

  12. Demonstration of load rating capabilities through physical load testing : Ida County bridge case study.

    DOT National Transportation Integrated Search

    2013-08-01

    The objective of this work, Pilot Project - Demonstration of Capabilities and Benefits of Bridge Load Rating through Physical Testing, was to demonstrate the capabilities for load testing and rating bridges in Iowa, study the economic benefit of perf...

  13. Demonstration of load rating capabilities through physical load testing : Johnson County bridge case study.

    DOT National Transportation Integrated Search

    2013-08-01

    The objective of this work, Pilot Project - Demonstration of Capabilities and Benefits of Bridge Load Rating through Physical Testing, was to demonstrate the capabilities for load testing and rating bridges in Iowa, study the economic benefit of perf...

  14. Managing Algorithmic Skeleton Nesting Requirements in Realistic Image Processing Applications: The Case of the SKiPPER-II Parallel Programming Environment's Operating Model

    NASA Astrophysics Data System (ADS)

    Coudarcher, Rémi; Duculty, Florent; Serot, Jocelyn; Jurie, Frédéric; Derutin, Jean-Pierre; Dhome, Michel

    2005-12-01

    SKiPPER is a SKeleton-based Parallel Programming EnviRonment being developed since 1996 and running at LASMEA Laboratory, the Blaise-Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, in which we have pointed out the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton nesting facilities for the parallelisation of a realistic application, especially in the area of image processing. The image processing application we have chosen is a 3D face-tracking algorithm from appearance.

  15. The flow of power law fluids in elastic networks and porous media.

    PubMed

    Sochi, Taha

    2016-02-01

    The flow of power law fluids, which include shear-thinning and shear-thickening fluids as well as Newtonian fluids as a special case, in networks of interconnected elastic tubes is investigated using a residual-based pore-scale network modeling method employing newly derived formulae. Two relations describing the mechanical interaction between the local pressure and local cross-sectional area in distensible tubes of elastic nature are considered in the derivation of these formulae. The model can be used to describe shear-dependent flows of mainly viscous nature. The behavior of the proposed model is verified by several tests in a number of special and limiting cases where the results can be checked quantitatively or qualitatively. The model, which is the first of its kind, incorporates more than one major nonlinearity, corresponding to the fluid rheology and the conduit mechanical properties, that is, non-Newtonian effects and tube distensibility. The formulation, implementation, and performance indicate that the model enjoys certain advantages over existing models: it is exact within its restricting assumptions, easy to implement, computationally cheap, reliable, and smoothly convergent. The proposed model can therefore be used as an alternative to existing Newtonian distensible models; moreover, it extends the capabilities of existing modeling approaches to non-Newtonian rheologies.
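As background for the kind of formulae the abstract derives, the volumetric flow rate of a power-law fluid through a single rigid cylindrical tube (the building block that pore-scale network models assemble) has a classic closed form. The sketch below is a generic textbook relation, not the paper's newly derived elastic-tube formulae; the function name and numerical values are illustrative.

```python
import math

def power_law_tube_flow(R, L, dP, k, n):
    """Flow rate of a power-law fluid (shear stress = k * shear_rate**n)
    through a rigid cylindrical tube of radius R and length L under a
    pressure drop dP. n < 1: shear thinning; n > 1: shear thickening;
    n = 1: Newtonian."""
    return (n * math.pi * R**(3.0 + 1.0 / n) / (3.0 * n + 1.0)) \
        * (dP / (2.0 * k * L))**(1.0 / n)

# Newtonian limit (n = 1) recovers the Hagen-Poiseuille law:
R, L, dP, mu = 1e-3, 0.1, 500.0, 1e-3
q_power_law = power_law_tube_flow(R, L, dP, mu, 1.0)
q_poiseuille = math.pi * R**4 * dP / (8.0 * mu * L)
```

The Newtonian check mirrors the abstract's own validation strategy of testing the model in special and limiting cases where the answer is known.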

  16. Dynamic and Thermal Turbulent Time Scale Modelling for Homogeneous Shear Flows

    NASA Technical Reports Server (NTRS)

    Schwab, John R.; Lakshminarayana, Budugur

    1994-01-01

    A new turbulence model, based upon dynamic and thermal turbulent time scale transport equations, is developed and applied to homogeneous shear flows with constant velocity and temperature gradients. The new model comprises transport equations for k, the turbulent kinetic energy; tau, the dynamic time scale; k(sub theta), the fluctuating temperature variance; and tau(sub theta), the thermal time scale. It offers conceptually parallel modeling of the dynamic and thermal turbulence at the two equation level, and eliminates the customary prescription of an empirical turbulent Prandtl number, Pr(sub t), thus permitting a more generalized prediction capability for turbulent heat transfer in complex flows and geometries. The new model also incorporates constitutive relations, based upon invariant theory, that allow the effects of nonequilibrium to modify the primary coefficients for the turbulent shear stress and heat flux. Predictions of the new model, along with those from two other similar models, are compared with experimental data for decaying homogeneous dynamic and thermal turbulence, homogeneous turbulence with constant temperature gradient, and homogeneous turbulence with constant temperature gradient and constant velocity gradient. The new model offers improvement in agreement with the data for most cases considered in this work, although it was no better than the other models for several cases where all the models performed poorly.

  17. Model-based system-of-systems engineering for space-based command, control, communication, and information architecture design

    NASA Astrophysics Data System (ADS)

    Sindiy, Oleg V.

    This dissertation presents a model-based system-of-systems engineering (SoSE) approach as a design philosophy for architecting in system-of-systems (SoS) problems. SoS refers to a special class of systems in which numerous systems with operational and managerial independence interact to generate new capabilities that satisfy societal needs. Design decisions are more complicated in a SoS setting. A revised Process Model for SoSE is presented to support three phases in SoS architecting: defining the scope of the design problem, abstracting key descriptors and their interrelations in a conceptual model, and implementing computer-based simulations for architectural analyses. The Process Model enables improved decision support considering multiple SoS features and develops computational models capable of highlighting configurations of organizational, policy, financial, operational, and/or technical features. Further, processes for verification and validation of SoS models and simulations are also important due to potential impact on critical decision-making and, thus, are addressed. Two research questions frame the research efforts described in this dissertation. The first concerns how the four key sources of SoS complexity (heterogeneity of systems, connectivity structure, multi-layer interactions, and the evolutionary nature) influence the formulation of SoS models and simulations, trade space, and solution performance and structure evaluation metrics. The second question pertains to the implementation of SoSE architecting processes to inform decision-making for a subset of SoS problems concerning the design of information exchange services in the space-based operations domain. These questions motivate and guide the dissertation's contributions.
A formal methodology for drawing relationships within a multi-dimensional trade space, forming simulation case studies from applications of candidate architecture solutions to a campaign of notional mission use cases, and executing multi-purpose analysis studies is presented. These efforts are coupled to the generation of aggregate and time-dependent solution performance metrics via the hierarchical decomposition of objectives and the analytical recomposition of multi-attribute qualitative program drivers from quantifiable measures. This methodology was also applied to generate problem-specific solution structure evaluation metrics that facilitate the comparison of alternate solutions at a high level of aggregation, at lower levels of abstraction, and to relate options for design variables with associated performance values. For proof-of-capability demonstration, the selected application problem concerns the design of command, control, communication, and information (C3I) architecture services for a notional campaign of crewed and robotic lunar surface missions. The impetus for the work was the demonstration of using model-based SoSE for design of sustainable interoperability capabilities between all data and communication assets in extended lunar campaigns. A comprehensive Lunar C3I simulation tool was developed by a team of researchers at Purdue University in support of NASA's Constellation Program; the author of this dissertation was a key contributor to the creation of this tool and made modifications and extensions to key components relevant to the methodological concepts presented in this dissertation. The dissertation concludes with a presentation of example results based on the interrogation of the constructed Lunar C3I computational model. 
The results are based on a family of studies, structured around a trade-tree of architecture options, which were conducted to test the hypothesis that the SoSE approach is efficacious for information-exchange architecture design in the space exploration domain. Included in the family of proof-of-capability studies is a simulation of the Apollo 17 mission, which not only allows for partial verification and validation of the model but also provides insights for prioritizing future model design iterations to make it a more realistic representation of the "real world." A caveat within the results presented is that they serve in the capacity of a proof-of-capability demonstration, and as such, they are a product of models and analyses that need further development before the tool's results can be employed for decision-making. Additional discussion is provided for how to further develop and validate the Lunar C3I tool and also to make it extensible to other SoS design problems of similar nature in space exploration and other problem application domains.

  18. Spatiotemporal multivariate mixture models for Bayesian model selection in disease mapping.

    PubMed

    Lawson, A B; Carroll, R; Faes, C; Kirby, R S; Aregay, M; Watjou, K

    2017-12-01

    It is often the case that researchers wish to simultaneously explore the behavior of, and estimate overall risk for, multiple related diseases with varying rarity while accounting for potential spatial and/or temporal correlation. In this paper, we propose a flexible class of multivariate spatio-temporal mixture models to fill this role. Further, these models offer flexibility with the potential for model selection as well as the ability to accommodate lifestyle, socio-economic, and physical environmental variables with spatial, temporal, or both structures. Here, we explore the capability of this approach via a large-scale simulation study and examine a motivating data example involving three cancers in South Carolina. The results, which focus on four model variants, suggest that all models possess the ability to recover simulation ground truth and display improved model fit over two baseline Knorr-Held spatio-temporal interaction model variants in a real data application.

  19. Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application

    NASA Astrophysics Data System (ADS)

    Chen, Jinduan; Boccelli, Dominic L.

    2018-02-01

    Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regionally aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties, with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive parameter-estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate with real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
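For contrast with the seasonal autoregressive model the study evaluates, the double-seasonal idea can be illustrated with a minimal naive baseline: predict each future hour from the observations one daily cycle and one weekly cycle earlier. This is an illustrative benchmark only (the function name and the equal 50/50 weighting are assumptions), not the authors' model.

```python
import math

def double_seasonal_naive(history, horizon, s1=24, s2=168):
    """Forecast `horizon` steps ahead by averaging the values one daily
    cycle (s1 hours) and one weekly cycle (s2 hours) back, rolling the
    forecasts forward so multi-step predictions reuse earlier ones."""
    hist = list(history)
    out = []
    for _ in range(horizon):
        f = 0.5 * (hist[-s1] + hist[-s2])
        out.append(f)
        hist.append(f)
    return out

# A purely daily-periodic demand pattern is reproduced exactly, because
# the weekly lag (168 h) is a whole multiple of the daily lag (24 h):
demand = [10 + 3 * math.sin(2 * math.pi * t / 24) for t in range(24 * 14)]
forecast = double_seasonal_naive(demand, 24)
```

Such a baseline captures the daily and weekly autocorrelations the abstract describes but, unlike the paper's time series model, provides no prediction intervals or adaptive parameter updates.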

  20. Strontium-90 Biokinetics from Simulated Wound Intakes in Non-human Primates Compared with Combined Model Predictions from National Council on Radiation Protection and Measurements Report 156 and International Commission on Radiological Protection Publication 67.

    PubMed

    Allen, Mark B; Brey, Richard R; Gesell, Thomas; Derryberry, Dewayne; Poudel, Deepesh

    2016-01-01

    The goal of this study was to evaluate the predictive capabilities of the National Council on Radiation Protection and Measurements (NCRP) wound model coupled to the International Commission on Radiological Protection (ICRP) systemic model for 90Sr-contaminated wounds using non-human primate data. Studies were conducted on 13 macaque (Macaca mulatta) monkeys, each receiving one-time intramuscular injections of 90Sr solution. Urine and feces samples were collected up to 28 d post-injection and analyzed for 90Sr activity. Integrated Modules for Bioassay Analysis (IMBA) software was configured with default NCRP and ICRP model transfer coefficients to calculate predicted 90Sr intake via the wound based on the radioactivity measured in bioassay samples. The default parameters of the combined models produced adequate fits of the bioassay data, but maximum likelihood predictions of intake were overestimated by a factor of 1.0 to 2.9 when bioassay data were used as predictors. Skeletal retention was also over-predicted, suggesting an underestimation of the excretion fraction. Bayesian statistics and Monte Carlo sampling were applied using IMBA to vary the default parameters, producing updated transfer coefficients for individual monkeys that improved model fit and predicted intake and skeletal retention. The geometric means of the optimized transfer rates for the 11 cases were computed, and these optimized sample population parameters were tested on two independent monkey cases and on the 11 monkeys from which the optimized parameters were derived. The optimized model parameters did not improve the model fit in most cases and improved the predicted skeletal activity in only three of the 11 cases. The optimized parameters improved the predicted intake in all cases but still over-predicted the intake by an average of 50%. The results suggest that the modified transfer rates were not always an improvement over the default NCRP and ICRP model values.
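The coupled wound and systemic models the abstract refers to are first-order compartment models: activity moves between compartments at constant transfer rates. The sketch below shows only that mathematical structure with invented coefficients; the actual NCRP 156 and ICRP 67 models have many more compartments, and the rates here are not the published defaults.

```python
def simulate_wound_biokinetics(days, dt=0.01,
                               k_wb=0.5,   # wound -> blood (per day), hypothetical
                               k_bs=0.3,   # blood -> skeleton, hypothetical
                               k_bu=0.2):  # blood -> urine, hypothetical
    """Forward-Euler integration of a minimal first-order compartment
    model for a unit intake in a wound. Returns the activity fraction in
    each compartment after `days` days (no radioactive decay modeled)."""
    w, b, s, u = 1.0, 0.0, 0.0, 0.0
    for _ in range(int(round(days / dt))):
        dw = -k_wb * w
        db = k_wb * w - (k_bs + k_bu) * b
        ds = k_bs * b
        du = k_bu * b
        w += dw * dt; b += db * dt; s += ds * dt; u += du * dt
    return {"wound": w, "blood": b, "skeleton": s, "urine": u}

# 28-day window matching the bioassay collection period in the study:
state = simulate_wound_biokinetics(28.0)
```

Fitting codes such as IMBA work with this same structure in reverse: given measured urine and feces activity, they infer the intake and transfer coefficients that best explain the bioassay data.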

  1. The Role of Universities in Strengthening Local Capabilities for Innovation--A Comparative Case Study

    ERIC Educational Resources Information Center

    Westnes, Petter; Hatakenaka, Sachi; Gjelsvik, Martin; Lester, Richard K.

    2009-01-01

    This article reports on a comparative case study of the role played by local universities and public research organizations in the development of local capabilities for innovation in two key gateways to the North Sea oil and gas province: the Stavanger region on the southwest coast of Norway and the Aberdeen region in northeast Scotland. These two…

  2. DEMONSTRATION OF LEACHXS/ORCHESTRA CAPABILITIES BY SIMULATING CONSTITUENT RELEASE FROM A CEMENTITIOUS WASTE FORM IN A REINFORCED CONCRETE VAULT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langton, C.; Meeussen, J.; Sloot, H.

    2010-03-31

    The objective of the work described in this report is to demonstrate the capabilities of the current version of LeachXS™/ORCHESTRA for simulating chemical behavior and constituent release processes in a range of applications that are relevant to the CBP. This report illustrates the use of LeachXS™/ORCHESTRA for the following applications: (1) Comparing model and experimental results for leaching tests for a range of cementitious materials including cement mortars, grout, stabilized waste, and concrete. The leaching test data includes liquid-solid partitioning as a function of pH and release rates based on laboratory column, monolith, and field testing. (2) Modeling chemical speciation of constituents in cementitious materials, including liquid-solid partitioning and release rates. (3) Evaluating uncertainty in model predictions based on uncertainty in underlying composition, thermodynamic, and transport characteristics. (4) Generating predominance diagrams to evaluate predicted chemical changes as a result of material aging, using the example of exposure to atmospheric conditions. (5) Modeling coupled geochemical speciation and diffusion in a three-layer system consisting of a layer of Saltstone, a concrete barrier, and a layer of soil in contact with air. The simulations show developing concentration fronts over a time period of 1000 years. (6) Modeling sulfate attack and cracking due to ettringite formation; a detailed example for this case is provided in a separate article by the authors (Sarkar et al. 2010). Finally, based on the computed results, the sensitive input parameters for this type of modeling are identified and discussed. The chemical speciation behavior of substances is calculated for a batch system and also in combination with transport within a three-layer system. This includes release from a barrier to the surrounding soil as a function of time.
As input for the simulations, the physical and chemical properties of the materials are used. The test cases used in this demonstration are taken from Reference Cases for Use in the Cementitious Barriers Partnership (Langton et al. 2009). Before it is possible to model the release of substances from stabilized waste or radioactive grout through a cement barrier into the engineered soil barrier or natural soil, the relevant characteristics of such materials must be known. Mechanistic modeling requires additional chemical characteristics beyond the physical properties relevant to transport modeling. The minimum required properties for modeling are given in Section 5.0, 'Modeling the chemical speciation of a material'.

  3. Modeling the dynamic crush of impact mitigating materials

    NASA Astrophysics Data System (ADS)

    Logan, R. W.; McMichael, L. D.

    1995-05-01

    Crushable materials are commonly utilized in the design of structural components to absorb energy and mitigate shock during the dynamic impact of a complex structure, such as an automobile chassis or drum-type shipping container. The development and application of several finite-element material models implemented at various times at LLNL for DYNA3D are discussed. Together, these models account for several of the predominant mechanisms which typically influence the dynamic mechanical behavior of crushable materials. One issue we addressed was that no single existing model would account for the entire gamut of constitutive features which are important for crushable materials. Thus, we describe the implementation and use of an additional material model which attempts to provide a more comprehensive model of the mechanics of crushable material behavior. This model combines features of the pre-existing DYNA models and incorporates some new features as well in an invariant large-strain formulation. In addition to examining the behavior of a unit cell in uniaxial compression, two cases were chosen to evaluate the capabilities and accuracy of the various material models in DYNA. In the first case, a model for foam-filled box beams was developed and compared to test data from a four-point bend test. The model was subsequently used to study its effectiveness in energy absorption in an aluminum-extrusion spaceframe vehicle chassis. The second case examined the response of the AT-400A shipping container and the performance of the overpack material during accident environments selected from 10CFR71 and IAEA regulations.

  4. Scaling Law for Cross-stream Diffusion in Microchannels under Combined Electroosmotic and Pressure Driven Flow.

    PubMed

    Song, Hongjun; Wang, Yi; Pant, Kapil

    2013-01-01

    This paper presents an analytical study of the cross-stream diffusion of an analyte in a rectangular microchannel under combined electroosmotic flow (EOF) and pressure driven flow to investigate the heterogeneous transport behavior and spatially-dependent diffusion scaling law. An analytical model capable of accurately describing 3D steady-state convection-diffusion in microchannels with arbitrary aspect ratios is developed based on the assumption of a thin Electric Double Layer (EDL). The model is verified against high-fidelity numerical simulation in terms of flow velocity and analyte concentration profiles with excellent agreement (<0.5% relative error). An extensive parametric analysis is then undertaken to interrogate the effect of the combined flow velocity field on the transport behavior in both the positive pressure gradient (PPG) and negative pressure gradient (NPG) cases. For the first time, the evolution from the spindle-shaped concentration profile in the NPG case, via the stripe-shaped profile (pure EOF), and finally to the butterfly-shaped profile in the PPG case is obtained using the analytical model, along with a quantitative depiction of the spatially-dependent diffusion layer thickness and scaling law across a wide range of the parameter space.
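The spatially dependent diffusion-layer scaling the abstract quantifies generalizes a familiar plug-flow estimate: an analyte carried downstream at speed u for a distance z has residence time z/u and spreads across the stream by roughly sqrt(2*D*z/u). The sketch below shows only this textbook limiting case (a uniform, EOF-like velocity profile), not the paper's full 3D analytical model; the function name and values are illustrative.

```python
import math

def diffusion_width(D, u, z):
    """Cross-stream diffusion width sigma = sqrt(2 * D * t) with
    residence time t = z / u, i.e. the plug-flow estimate valid for a
    uniform (pure-EOF-like) velocity profile away from the walls."""
    return math.sqrt(2.0 * D * z / u)

# Example: small molecule (D = 1e-9 m^2/s) carried at 1 mm/s.
w_near = diffusion_width(1e-9, 1e-3, 0.01)  # 1 cm downstream
w_far = diffusion_width(1e-9, 1e-3, 0.04)   # 4x farther downstream
```

The sqrt(z) growth is the baseline against which pressure-driven distortions of the velocity profile (the spindle- and butterfly-shaped cases) change the local scaling exponent.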

  5. Scaling Law for Cross-stream Diffusion in Microchannels under Combined Electroosmotic and Pressure Driven Flow

    PubMed Central

    Song, Hongjun; Wang, Yi; Pant, Kapil

    2012-01-01

    This paper presents an analytical study of the cross-stream diffusion of an analyte in a rectangular microchannel under combined electroosmotic flow (EOF) and pressure driven flow to investigate the heterogeneous transport behavior and spatially-dependent diffusion scaling law. An analytical model capable of accurately describing 3D steady-state convection-diffusion in microchannels with arbitrary aspect ratios is developed based on the assumption of a thin Electric Double Layer (EDL). The model is verified against high-fidelity numerical simulation in terms of flow velocity and analyte concentration profiles with excellent agreement (<0.5% relative error). An extensive parametric analysis is then undertaken to interrogate the effect of the combined flow velocity field on the transport behavior in both the positive pressure gradient (PPG) and negative pressure gradient (NPG) cases. For the first time, the evolution from the spindle-shaped concentration profile in the PPG case, via the stripe-shaped profile (pure EOF), and finally to the butterfly-shaped profile in the NPG case is obtained using the analytical model along with a quantitative depiction of the spatially-dependent diffusion layer thickness and scaling law across a wide range of the parameter space. PMID:23554584

  6. Strategic Capability Development in the Higher Education Sector

    ERIC Educational Resources Information Center

    Brown, Paul

    2004-01-01

    The research adopts a case study approach (in higher education) to investigate how strategic capabilities might be developed in an organisation through strategic management development (SMD). SMD is defined as "Management development interventions which are intended to enhance the strategic capability and corporate performance of an…

  7. Numerical Modeling and Experimental Analysis of Scale Horizontal Axis Marine Hydrokinetic (MHK) Turbines

    NASA Astrophysics Data System (ADS)

    Javaherchi, Teymour; Stelzenmuller, Nick; Seydel, Joseph; Aliseda, Alberto

    2013-11-01

    We investigate, through a combination of scale model experiments and numerical simulations, the evolution of the flow field around the rotor and in the wake of Marine Hydrokinetic (MHK) turbines. Understanding the dynamics of this flow field is the key to optimizing the energy conversion of single devices and the arrangement of turbines in commercially viable arrays. This work presents a comparison between numerical and experimental results from two different case studies of scaled horizontal axis MHK turbines (45:1 scale). In the first case study, we investigate the effect of Reynolds number (Re = 40,000 to 100,000) and Tip Speed Ratio (TSR = 5 to 12) variation on the performance and wake structure of a single turbine. In the second case, we study the effect of the turbine downstream spacing (5d to 14d) on the performance and wake development in a coaxial configuration of two turbines. These results provide insights into the dynamics of Horizontal Axis Hydrokinetic Turbines and, by extension, of Horizontal Axis Wind Turbines in close proximity to each other, and highlight the capabilities and limitations of the numerical models. Once validated at laboratory scale, the numerical model can be used to address other aspects of MHK turbines at full scale. Supported by DOE through the National Northwest Marine Renewable Energy Center.

  8. New systematic methodology for incorporating dynamic heat transfer modelling in multi-phase biochemical reactors.

    PubMed

    Fernández-Arévalo, T; Lizarralde, I; Grau, P; Ayesa, E

    2014-09-01

    This paper presents a new modelling methodology for dynamically predicting the heat produced or consumed in the transformations of any biological reactor using Hess's law. Starting from a complete description of model component stoichiometry and formation enthalpies, the proposed methodology successfully integrates the simultaneous calculation of both the conventional mass balances and the enthalpy change of reaction in an expandable multi-phase matrix structure, which facilitates a detailed prediction of the main heat fluxes in biochemical reactors. The methodology has been implemented in a plant-wide modelling framework in order to facilitate the dynamic description of mass and heat throughout the plant. After validation with literature data, two case studies are described as illustrative examples of the capability of the methodology. In the first, a predenitrification-nitrification dynamic process is analysed, with the aim of demonstrating how easily the methodology can be integrated into any system. In the second case study, the simulation of a thermal model for an ATAD shows the potential of the proposed methodology for analysing the effect of ventilation and influent characterization. Copyright © 2014 Elsevier Ltd. All rights reserved.
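    The enthalpy bookkeeping described above rests on Hess's law: the heat of any transformation is the stoichiometry-weighted sum of product formation enthalpies minus that of the reactants. A minimal sketch under that assumption, using standard textbook formation enthalpies for methane combustion (the function name is illustrative, not from the paper):

    ```python
    # Hess's law: dH_rxn = sum(nu_p * dHf_products) - sum(nu_r * dHf_reactants)
    # Standard formation enthalpies at 25 degC in kJ/mol (liquid water).
    DHF = {"CH4": -74.8, "O2": 0.0, "CO2": -393.5, "H2O(l)": -285.8}

    def reaction_enthalpy(reactants, products, dhf=DHF):
        """Enthalpy change of reaction (kJ/mol) from formation enthalpies.
        reactants/products map species -> stoichiometric coefficient."""
        h_in = sum(nu * dhf[sp] for sp, nu in reactants.items())
        h_out = sum(nu * dhf[sp] for sp, nu in products.items())
        return h_out - h_in

    # Methane combustion: CH4 + 2 O2 -> CO2 + 2 H2O(l)
    dh = reaction_enthalpy({"CH4": 1, "O2": 2}, {"CO2": 1, "H2O(l)": 2})
    # dh is about -890.3 kJ/mol (exothermic), the textbook heat of combustion.
    ```

    The same bookkeeping extends row by row to a stoichiometric matrix, which is what lets heat fluxes be computed alongside conventional mass balances.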

  9. 3-dimensional orthodontics visualization system with dental study models and orthopantomograms

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ong, S. H.; Foong, K. W. C.; Dhar, T.

    2005-04-01

    The aim of this study is to develop a system that provides 3-dimensional visualization of orthodontic treatments. Dental plaster models and the corresponding orthopantomogram (dental panoramic tomogram) are first digitized and fed into the system. A semi-automatic segmentation technique is applied to the plaster models to detect the dental arches, tooth interstices and gum margins, which are used to extract individual crown models. A 3-dimensional representation of the roots, generated by deforming generic tooth models against the orthopantomogram using radial basis functions, is attached to the corresponding crowns to enable visualization of complete teeth. An optional algorithm that closes the gaps between deformed roots and actual crowns using multiquadric radial basis functions is also presented, which is capable of generating a smooth mesh representation of complete 3-dimensional teeth. The user interface is carefully designed to achieve a flexible system that is as user-friendly as possible. Manual calibration and correction are possible throughout the data processing steps to compensate for occasional misbehavior of the automatic procedures. By allowing users to move and re-arrange individual teeth (with their roots) on a full dentition, this orthodontic visualization system provides an easy and accurate way of simulating and planning orthodontic treatment. Its capability of presenting 3-dimensional root information with only study models and an orthopantomogram is especially useful for patients who do not undergo CT scanning, which is not a routine procedure in most orthodontic cases.

  10. Simulating Daily and Sub-daily Water Flow in Large, Semi-arid Watershed Using SWAT: A Case Study of Nueces River Basin, Texas

    NASA Astrophysics Data System (ADS)

    Bassam, S.; Ren, J.

    2015-12-01

    Runoff generated during heavy rainfall imposes quick, but often intense, changes in the flow of streams, which increase the chance of flash floods in the vicinity of the streams. Understanding the temporal response of streams to heavy rainfall requires a hydrological model that considers the meteorological, hydrological, and geological components of the streams and their watersheds. SWAT is a physically-based, semi-distributed model that is capable of simulating water flow within watersheds at both long-term (annual and monthly) and short-term (daily and sub-daily) time scales. However, the capability of SWAT for sub-daily water flow modeling within large watersheds has not been studied as much as the long-term and daily time scales. In this study we investigate water flow in a large, semi-arid watershed, the Nueces River Basin (NRB) in South Texas with a drainage area of 16,950 mi2, at daily and sub-daily time scales. The objectives of this study are: (1) simulating the response of streams to heavy, and often quick, rainfall, (2) evaluating SWAT performance in sub-daily modeling of water flow within a large watershed, and (3) examining means for model performance improvement during model calibration and verification based on results of sensitivity and uncertainty analysis. The results of this study can provide important information for water resources planning during flood seasons.

  11. Dengue Baidu Search Index data can improve the prediction of local dengue epidemic: A case study in Guangzhou, China

    PubMed Central

    Liu, Tao; Zhu, Guanghu; Lin, Hualiang; Zhang, Yonghui; He, Jianfeng; Deng, Aiping; Peng, Zhiqiang; Xiao, Jianpeng; Rutherford, Shannon; Xie, Runsheng; Zeng, Weilin; Li, Xing; Ma, Wenjun

    2017-01-01

    Background Dengue fever (DF) in Guangzhou, Guangdong province in China is an important public health issue. The problem was highlighted in 2014 by a large, unprecedented outbreak. In order to respond in a more timely manner and hence better control such potential outbreaks in the future, this study develops an early warning model that integrates internet-based query data into traditional surveillance data. Methodology and principal findings A Dengue Baidu Search Index (DBSI) was collected from the Baidu website for developing a predictive model of dengue fever in combination with meteorological and demographic factors. Generalized additive models (GAM) with or without DBSI were established. The generalized cross validation (GCV) score and deviance explained were applied to measure the fitness of the models, while the intraclass correlation coefficient (ICC) and root mean squared error (RMSE) measured their prediction capability. Our results show that the DBSI with a one-week lag has a positive linear relationship with local DF occurrence, and the model with DBSI (ICC: 0.94 and RMSE: 59.86) has a better prediction capability than the model without DBSI (ICC: 0.72 and RMSE: 203.29). Conclusions Our study suggests that a DBSI combined with traditional disease surveillance and meteorological data can improve the dengue early warning system in Guangzhou. PMID:28263988
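    The accuracy metric and the one-week lag pairing used in this record are simple to reproduce. A minimal sketch, assuming a plain RMSE definition and a lag-alignment helper of my own naming; the toy weekly series below are illustrative, not the study's data:

    ```python
    import math

    def rmse(obs, pred):
        """Root mean squared error between observed and predicted series."""
        return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

    def lag_align(index_series, case_series, lag=1):
        """Pair search-index values with case counts `lag` weeks later."""
        return list(zip(index_series[:-lag], case_series[lag:]))

    cases = [12, 15, 30, 80, 160]    # weekly DF counts (toy data)
    pred_a = [10, 18, 28, 75, 150]   # model with search-index covariate (toy)
    pred_b = [40, 40, 40, 40, 40]    # naive constant baseline (toy)

    # A lower RMSE means a better fit, as in the DBSI vs. no-DBSI comparison.
    assert rmse(cases, pred_a) < rmse(cases, pred_b)

    pairs = lag_align([5, 9, 20, 60], [12, 15, 30, 80], lag=1)
    # pairs couples week-t index with week-(t+1) cases: [(5, 15), (9, 30), (20, 80)]
    ```

    The lag alignment is what lets the week-t search index act as a leading predictor of week-(t+1) case counts in a regression model.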

  12. The efficacy of calibrating hydrologic model using remotely sensed evapotranspiration and soil moisture for streamflow prediction

    NASA Astrophysics Data System (ADS)

    Kunnath-Poovakka, A.; Ryu, D.; Renzullo, L. J.; George, B.

    2016-04-01

    Calibration of spatially distributed hydrologic models is frequently limited by the availability of ground observations. Remotely sensed (RS) hydrologic information provides an alternative source of observations to inform models and extend modelling capability beyond the limits of ground observations. This study examines the capability of RS evapotranspiration (ET) and soil moisture (SM) in calibrating a hydrologic model and its efficacy to improve streamflow predictions. SM retrievals from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and daily ET estimates from the CSIRO MODIS ReScaled potential ET (CMRSET) are used to calibrate a simplified Australian Water Resource Assessment - Landscape model (AWRA-L) for a selection of parameters. The Shuffled Complex Evolution Uncertainty Algorithm (SCE-UA) is employed for parameter estimation at eleven catchments in eastern Australia. A subset of parameters for calibration is selected based on the variance-based Sobol' sensitivity analysis. The efficacy of 15 objective functions for calibration is assessed based on streamflow predictions relative to control cases, and relative merits of each are discussed. Synthetic experiments were conducted to examine the effect of bias in RS ET observations on calibration. The objective function containing the root mean square deviation (RMSD) of ET results in the best streamflow predictions, and its efficacy is superior for catchments with medium to high average runoff. Synthetic experiments revealed that an accurate ET product can improve the streamflow predictions in catchments with low average runoff.

  13. Differential equation models for sharp threshold dynamics.

    PubMed

    Schramm, Harrison C; Dimitrov, Nedialko B

    2014-01-01

    We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
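    The Lanchester case above, where losing a key capability abruptly changes one side's effectiveness, can be sketched as a plain Euler integration with a threshold switch. All coefficients, the threshold, and the degraded value below are hypothetical, chosen only to exercise the switch; this toy stands in for the paper's full treatment:

    ```python
    def lanchester_with_threshold(x=100.0, y=80.0, a=0.01, b=0.01,
                                  y_threshold=60.0, b_degraded=0.002,
                                  dt=0.01, t_end=100.0):
        """Lanchester square-law attrition x' = -b*y, y' = -a*x.
        When y falls below y_threshold (loss of a key capability),
        y's effectiveness against x drops permanently from b to b_degraded."""
        t, switched = 0.0, False
        while t < t_end and y > 0.0:
            if not switched and y < y_threshold:
                b, switched = b_degraded, True  # threshold event changes dynamics
            # Simultaneous Euler step (RHS uses the pre-step values).
            x, y = x - b * y * dt, y - a * x * dt
            t += dt
        return {"x": x, "y": max(y, 0.0), "switched": switched}

    out = lanchester_with_threshold()
    # With these toy coefficients, y crosses the threshold and is then
    # attrited away while x retains most of its strength.
    ```

    Before the switch both sides attrite symmetrically; once the threshold is crossed the governing equations themselves change, which is the kind of global behavior change the differential-equation extension is built to capture.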

  14. A Gaussian Process Based Multi-Person Interaction Model

    NASA Astrophysics Data System (ADS)

    Klinger, T.; Rottensteiner, F.; Heipke, C.

    2016-06-01

    Online multi-person tracking in image sequences is commonly guided by recursive filters, whose predictive models define the expected positions of future states. When a predictive model deviates too much from the true motion of a pedestrian, which is often the case in crowded scenes due to unpredicted accelerations, the data association is prone to fail. In this paper we propose a novel predictive model on the basis of Gaussian Process Regression. The model takes into account the motion of every tracked pedestrian in the scene and the prediction is executed with respect to the velocities of all interrelated persons. As shown by the experiments, the model is capable of yielding more plausible predictions even in the presence of mutual occlusions or missing measurements. The approach is evaluated on a publicly available benchmark and outperforms other state-of-the-art trackers.
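    A Gaussian Process regression predictor of the kind described can be sketched with plain NumPy (RBF kernel, posterior mean only). The 1-D constant-velocity track below is illustrative, not the benchmark data, and this single-track sketch omits the paper's multi-person interaction terms:

    ```python
    import numpy as np

    def rbf(a, b, length=1.0):
        """Squared-exponential kernel matrix between 1-D input arrays a, b."""
        d = a.reshape(-1, 1) - b.reshape(1, -1)
        return np.exp(-0.5 * (d / length) ** 2)

    def gp_predict(t_train, x_train, t_query, length=1.0, noise=1e-8):
        """Posterior mean of a zero-mean GP at t_query given (t_train, x_train)."""
        K = rbf(t_train, t_train, length) + noise * np.eye(len(t_train))
        alpha = np.linalg.solve(K, x_train)   # K^{-1} x without explicit inverse
        return rbf(t_query, t_train, length) @ alpha

    # Pedestrian x-position at four past time steps (toy constant velocity).
    t = np.array([0.0, 1.0, 2.0, 3.0])
    x = np.array([0.0, 1.0, 2.0, 3.0])

    mid = gp_predict(t, x, np.array([1.5]))[0]
    # mid lies between the neighbouring observations at t=1 and t=2, so the
    # regression can fill in a missing measurement plausibly.
    ```

    In a tracker, such a posterior mean replaces the fixed kinematic prediction of a recursive filter, so the predicted state bends toward the actually observed motion pattern.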

  15. Final Report: Assessment of Combined Heat and Power Premium Power Applications in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norwood, Zack; Lipman, Tim; Marnay, Chris

    2008-09-30

    This report analyzes the current economic and environmental performance of combined heat and power (CHP) systems in power-interruption-intolerant commercial facilities. Through a series of three case studies, key trade-offs are analyzed with regard to the provision of black-out ride-through capability with the CHP systems and the resulting ability to avoid the need for at least some diesel backup generator capacity located at the case study sites. Each of the selected sites currently has a CHP or combined heating, cooling, and power (CCHP) system in addition to diesel backup generators. In all cases the CHP/CCHP system has a small fraction of the electrical capacity of the diesel generators. Although none of the selected sites currently have the ability to run the CHP systems as emergency backup power, all could be retrofitted to provide this blackout ride-through capability, and new CHP systems can be installed with this capability. The following three sites/systems were used for this analysis: (1) Sierra Nevada Brewery - Using 1 MW of installed Molten Carbonate Fuel Cells operating on a combination of digester gas (from the beer brewing process) and natural gas, this facility can produce electricity and heat for the brewery and attached bottling plant. The major thermal load on-site is to keep the brewing tanks at appropriate temperatures. (2) NetApp Data Center - Using 1.125 MW of Hess Microgen natural gas fired reciprocating engine-generators, with exhaust gas and jacket water heat recovery attached to over 300 tons of adsorption chillers, this combined cooling and power system provides electricity and cooling to a data center with a 1,200 kW peak electrical load. (3) Kaiser Permanente Hayward Hospital - With 180 kW of Tecogen natural gas fired reciprocating engine-generators, this CHP system generates steam for space heating and hot water for a city hospital.
For all sites, similar assumptions are made about the economic and technological constraints of the power generation system. Using the Distributed Energy Resource Customer Adoption Model (DER-CAM) developed at the Lawrence Berkeley National Laboratory, we model three representative scenarios and find the optimal operation scheduling, yearly energy cost, and energy technology investments for each scenario below: Scenario 1 - Diesel generators and CHP/CCHP equipment as installed in the current facility. Scenario 1 represents a baseline forced investment in currently installed energy equipment. Scenario 2 - Existing CHP equipment installed with blackout ride-through capability to replace approximately the same capacity of diesel generators. In Scenario 2 the cost of the replaced diesel units is saved; however, additional capital cost for the controls and switchgear for blackout ride-through capability is necessary. Scenario 3 - Fully optimized site analysis, allowing DER-CAM to specify the number of diesel and CHP/CCHP units (with blackout ride-through capability) that should be installed, ignoring any constraints on backup generation. Scenario 3 allows DER-CAM to optimize the scheduling and number of generation units from the currently available technologies at a particular site. The results of this analysis, using real data to model the optimal scheduling of hypothetical and actual CHP systems for a brewery, data center, and hospital, lead to some interesting conclusions. First, facilities with high heating loads will typically prove to be the most appropriate for CHP installation from a purely economic standpoint. Second, absorption/adsorption cooling systems may only be economically feasible if these chillers can exceed the current best system efficiency.
At a coefficient of performance (COP) of 0.8, for instance, an adsorption chiller paired with a natural gas generator with waste heat recovery at a facility with large cooling loads, like a data center, will cost no less on a yearly basis than purchasing electricity and natural gas directly from a utility. Third, at marginal additional cost, if the reliability of CHP systems proves to be at least as high as that of diesel generators (which we expect to be the case), the CHP system could replace the diesel generator at little or no additional cost. This is true if the thermal-to-electric (relative) load of those facilities was already high enough to economically justify a CHP system. Last, in terms of greenhouse gas emissions, the modeled CHP and CCHP systems provide some degree of decreased emissions relative to systems with less CHP installed. The emission reduction can be up to 10% in the optimized case (Scenario 3) in the application with the highest relative thermal load, in this case the hospital. Although these results should be qualified because they are only based on the three case studies, the general results and lessons learned are expected to be applicable across a broad range of potential and existing CCHP systems.

  16. Science and applications-driven OSSE platform for terrestrial hydrology using NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Peters-Lidard, C. D.; Harrison, K.; Santanello, J. A.; Bach Kirschbaum, D.

    2014-12-01

    Observing System Simulation Experiments (OSSEs) are often conducted to evaluate the worth of existing data and data yet to be collected from proposed new missions. As missions increasingly require a broader "Earth systems" focus, it is important that the OSSEs capture the potential benefits of the observations on end-use applications. Towards this end, the results from the OSSEs must also be evaluated with a suite of metrics that capture the value, uncertainty, and information content of the observations while factoring in both science and societal impacts. In this presentation, we present the development of an end-to-end and end-use application oriented OSSE platform using the capabilities of the NASA Land Information System (LIS) developed for terrestrial hydrology. Four case studies that demonstrate the capabilities of the system will be presented: (1) A soil moisture OSSE that employs simulated L-band measurements and examines their impacts towards applications such as floods and droughts. The experiment also uses a decision-theory based analysis to assess the economic utility of observations towards improving drought and flood risk estimates, (2) A GPM-relevant study quantifies the impact of improved precipitation retrievals from GPM towards improving landslide forecasts, (3) A case study that examines the utility of passive microwave soil moisture observations towards weather prediction, and (4) OSSEs used for developing science requirements for the GRACE-2 mission. These experiments also demonstrate the value of a comprehensive modeling environment such as LIS for conducting end-to-end OSSEs by linking satellite observations, physical models, data assimilation algorithms and end-use application models in a single integrated framework.

  17. SU-E-CAMPUS-T-04: Statistical Process Control for Patient-Specific QA in Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LAH, J; SHIN, D; Kim, G

    Purpose: To evaluate and improve the reliability of the proton QA process and to provide an optimal customized tolerance level using the statistical process control (SPC) methodology, with the aim of suggesting suitable guidelines for the patient-specific QA process. Methods: We investigated the constancy of the dose output and range to see whether they were within the tolerance level of the daily QA process. This study analyzed the difference between the measured and calculated ranges along the central axis to suggest suitable guidelines for patient-specific QA in proton beams by using process capability indices. In this study, patient QA plans were classified into 6 treatment sites: head and neck (41 cases), spinal cord (29 cases), lung (28 cases), liver (30 cases), pancreas (26 cases), and prostate (24 cases). Results: The deviations for the dose output and range of the daily QA process were ±0.84% and ±0.19%, respectively. Our results show that the patient-specific range measurements are capable at a specification limit of ±2% in all treatment sites except spinal cord cases. In spinal cord cases, comparison of process capability indices (Cp, Cpm, Cpk ≥ 1, but Cpmk ≤ 1) indicated that the process is capable but not centered: the process mean deviates from its target value. The UCL (upper control limit), CL (center line) and LCL (lower control limit) for spinal cord cases were 1.37%, −0.27% and −1.89%, respectively. On the other hand, the range differences in prostate cases were in good agreement between calculated and measured values. The UCL, CL and LCL for prostate cases were 0.57%, −0.11% and −0.78%, respectively. Conclusion: SPC methodology has potential as a useful tool to customize optimal tolerance levels and to suggest suitable guidelines for patient-specific QA in clinical proton beams.
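    The capability indices and control limits cited in this record have standard textbook definitions. A minimal sketch of Cp, Cpk, Cpm and 3-sigma limits; the range-difference sample below is synthetic, not the clinical data:

    ```python
    import math

    def capability(data, lsl, usl, target=0.0):
        """Cp, Cpk, Cpm and 3-sigma control limits for QA deviations (%)."""
        n = len(data)
        mean = sum(data) / n
        sd = math.sqrt(sum((v - mean) ** 2 for v in data) / (n - 1))  # sample SD
        cp = (usl - lsl) / (6 * sd)                    # spread vs. spec width
        cpk = min(usl - mean, mean - lsl) / (3 * sd)   # penalises off-centre mean
        cpm = (usl - lsl) / (6 * math.sqrt(sd ** 2 + (mean - target) ** 2))
        return {"Cp": cp, "Cpk": cpk, "Cpm": cpm,
                "UCL": mean + 3 * sd, "CL": mean, "LCL": mean - 3 * sd}

    # Synthetic measured-minus-calculated range differences (%) vs. a +/-2% spec.
    stats = capability([0.1, -0.3, 0.5, 0.3, -0.1], lsl=-2.0, usl=2.0)
    # Cp > 1 and Cpk > 1 indicate a capable, roughly centred process;
    # Cpk noticeably below Cp signals that the mean has drifted off target,
    # the pattern reported above for the spinal cord cases.
    ```

    The UCL/CL/LCL triple is simply the mean plus or minus three sample standard deviations, which is how the per-site control limits quoted in the abstract are constructed.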

  18. Data-Flow Based Model Analysis

    NASA Technical Reports Server (NTRS)

    Saad, Christian; Bauer, Bernhard

    2010-01-01

    The concept of (meta) modeling combines an intuitive way of formalizing the structure of an application domain with a high expressiveness that makes it suitable for a wide variety of use cases, and has therefore become an integral part of many areas in computer science. While the definition of modeling languages through the use of meta models, e.g. in the Unified Modeling Language (UML), is a well-understood process, their validation and the extraction of behavioral information are still a challenge. In this paper we present a novel approach for dynamic model analysis along with several fields of application. Examining the propagation of information along the edges and nodes of the model graph makes it possible to extend and simplify the definition of semantic constraints in comparison to the capabilities offered by e.g. the Object Constraint Language. Performing a flow-based analysis also enables the simulation of dynamic behavior, thus providing an "abstract interpretation"-like analysis method for the modeling domain.

  19. The HART II International Workshop: An Assessment of the State-of-the-Art in Comprehensive Code Prediction

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2013-01-01

    Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  20. An Assessment of Comprehensive Code Prediction State-of-the-Art Using the HART II International Workshop Data

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2012-01-01

    Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (= CSD, or comprehensive codes) for rotorcraft applications, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  1. Environmental Conditions Associated with Elevated Vibrio parahaemolyticus Concentrations in Great Bay Estuary, New Hampshire

    PubMed Central

    Urquhart, Erin A.; Jones, Stephen H.; Yu, Jong W.; Schuster, Brian M.; Marcinkiewicz, Ashley L.; Whistler, Cheryl A.; Cooper, Vaughn S.

    2016-01-01

    Reports from state health departments and the Centers for Disease Control and Prevention indicate that the annual number of reported human vibriosis cases in New England has increased in the past decade. Concurrently, there has been a shift in both the spatial distribution and seasonal detection of Vibrio spp. throughout the region based on limited monitoring data. To determine environmental factors that may underlie these emerging conditions, this study focuses on a long-term database of Vibrio parahaemolyticus concentrations in oyster samples generated from data collected from the Great Bay Estuary, New Hampshire over a period of seven consecutive years. Oyster samples from two distinct sites were analyzed for V. parahaemolyticus abundance, noting significant relationships with various biotic and abiotic factors measured during the same period of study. We developed a predictive modeling tool capable of estimating the likelihood of V. parahaemolyticus presence in coastal New Hampshire oysters. Results show that the inclusion of chlorophyll a concentration in an empirical model otherwise employing only temperature and salinity variables offers improved predictive capability for modeling the likelihood of V. parahaemolyticus in the Great Bay Estuary. PMID:27144925
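    An empirical likelihood model of the kind described is commonly a logistic function of the environmental predictors. A minimal sketch under that assumption; the coefficients and the function name are hypothetical placeholders, not the fitted values from the study:

    ```python
    import math

    def vp_likelihood(temp_c, salinity_ppt, chl_a, coef=(-8.0, 0.35, 0.05, 0.4)):
        """Logistic likelihood of V. parahaemolyticus presence.
        coef = (intercept, temperature, salinity, chlorophyll-a) -- all
        hypothetical values chosen for illustration only."""
        b0, bt, bs, bc = coef
        z = b0 + bt * temp_c + bs * salinity_ppt + bc * chl_a
        return 1.0 / (1.0 + math.exp(-z))

    warm = vp_likelihood(temp_c=25.0, salinity_ppt=20.0, chl_a=6.0)
    cold = vp_likelihood(temp_c=8.0, salinity_ppt=20.0, chl_a=1.0)
    # Warmer, chlorophyll-rich water yields a higher predicted likelihood,
    # consistent with chlorophyll a adding skill beyond temperature and salinity.
    assert 0.0 < cold < warm < 1.0
    ```

    Adding the chlorophyll term simply extends the linear predictor with one more covariate, which is the structural change the abstract credits with the improved predictive capability.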

  2. A Parametric Study of Unsteady Rotor-Stator Interaction in a Simplified Francis Turbine

    NASA Astrophysics Data System (ADS)

    Wouden, Alex; Cimbala, John; Lewis, Bryan

    2011-11-01

    CFD analysis is becoming a critical stage in the design of hydroturbines. However, its capability to represent unsteady flow interactions between the rotor and stator (which requires a 360-degree, mesh-refined model of the turbine passage) is hindered. For CFD to become a more effective tool in predicting the performance of a hydroturbine, the key interactions between the rotor and stator need to be understood using current numerical methods. As a first step towards evaluating this unsteady behavior without the burden of a computationally expensive domain, the stator and Francis-type rotor blades are reduced to flat plates. Local and global variables are compared using periodic, semi-periodic, and 360-degree geometric models and various turbulence models (k-omega, k-epsilon, and Spalart-Allmaras). The computations take place within the OpenFOAM® environment and utilize a general grid interface (GGI) between the rotor and stator computational domains. The rotor computational domain is capable of dynamic rotation. The results demonstrate some of the strengths and limitations of utilizing CFD for hydroturbine analysis. These case studies will also serve as tutorials to help others learn how to use CFD for turbomachinery. This research is funded by a grant from the DOE.

  3. Advances in molecular quantum chemistry contained in the Q-Chem 4 program package

    NASA Astrophysics Data System (ADS)

    Shao, Yihan; Gan, Zhengting; Epifanovsky, Evgeny; Gilbert, Andrew T. B.; Wormit, Michael; Kussmann, Joerg; Lange, Adrian W.; Behn, Andrew; Deng, Jia; Feng, Xintian; Ghosh, Debashree; Goldey, Matthew; Horn, Paul R.; Jacobson, Leif D.; Kaliman, Ilya; Khaliullin, Rustam Z.; Kuś, Tomasz; Landau, Arie; Liu, Jie; Proynov, Emil I.; Rhee, Young Min; Richard, Ryan M.; Rohrdanz, Mary A.; Steele, Ryan P.; Sundstrom, Eric J.; Woodcock, H. Lee, III; Zimmerman, Paul M.; Zuev, Dmitry; Albrecht, Ben; Alguire, Ethan; Austin, Brian; Beran, Gregory J. O.; Bernard, Yves A.; Berquist, Eric; Brandhorst, Kai; Bravaya, Ksenia B.; Brown, Shawn T.; Casanova, David; Chang, Chun-Min; Chen, Yunqing; Chien, Siu Hung; Closser, Kristina D.; Crittenden, Deborah L.; Diedenhofen, Michael; DiStasio, Robert A., Jr.; Do, Hainam; Dutoi, Anthony D.; Edgar, Richard G.; Fatehi, Shervin; Fusti-Molnar, Laszlo; Ghysels, An; Golubeva-Zadorozhnaya, Anna; Gomes, Joseph; Hanson-Heine, Magnus W. D.; Harbach, Philipp H. P.; Hauser, Andreas W.; Hohenstein, Edward G.; Holden, Zachary C.; Jagau, Thomas-C.; Ji, Hyunjun; Kaduk, Benjamin; Khistyaev, Kirill; Kim, Jaehoon; Kim, Jihan; King, Rollin A.; Klunzinger, Phil; Kosenkov, Dmytro; Kowalczyk, Tim; Krauter, Caroline M.; Lao, Ka Un; Laurent, Adèle D.; Lawler, Keith V.; Levchenko, Sergey V.; Lin, Ching Yeh; Liu, Fenglai; Livshits, Ester; Lochan, Rohini C.; Luenser, Arne; Manohar, Prashant; Manzer, Samuel F.; Mao, Shan-Ping; Mardirossian, Narbe; Marenich, Aleksandr V.; Maurer, Simon A.; Mayhall, Nicholas J.; Neuscamman, Eric; Oana, C. Melania; Olivares-Amaya, Roberto; O'Neill, Darragh P.; Parkhill, John A.; Perrine, Trilisa M.; Peverati, Roberto; Prociuk, Alexander; Rehn, Dirk R.; Rosta, Edina; Russ, Nicholas J.; Sharada, Shaama M.; Sharma, Sandeep; Small, David W.; Sodt, Alexander; Stein, Tamar; Stück, David; Su, Yu-Chuan; Thom, Alex J. 
W.; Tsuchimochi, Takashi; Vanovschi, Vitalii; Vogt, Leslie; Vydrov, Oleg; Wang, Tao; Watson, Mark A.; Wenzel, Jan; White, Alec; Williams, Christopher F.; Yang, Jun; Yeganeh, Sina; Yost, Shane R.; You, Zhi-Qiang; Zhang, Igor Ying; Zhang, Xing; Zhao, Yan; Brooks, Bernard R.; Chan, Garnet K. L.; Chipman, Daniel M.; Cramer, Christopher J.; Goddard, William A., III; Gordon, Mark S.; Hehre, Warren J.; Klamt, Andreas; Schaefer, Henry F., III; Schmidt, Michael W.; Sherrill, C. David; Truhlar, Donald G.; Warshel, Arieh; Xu, Xin; Aspuru-Guzik, Alán; Baer, Roi; Bell, Alexis T.; Besley, Nicholas A.; Chai, Jeng-Da; Dreuw, Andreas; Dunietz, Barry D.; Furlani, Thomas R.; Gwaltney, Steven R.; Hsu, Chao-Ping; Jung, Yousung; Kong, Jing; Lambrecht, Daniel S.; Liang, WanZhen; Ochsenfeld, Christian; Rassolov, Vitaly A.; Slipchenko, Lyudmila V.; Subotnik, Joseph E.; Van Voorhis, Troy; Herbert, John M.; Krylov, Anna I.; Gill, Peter M. W.; Head-Gordon, Martin

    2015-01-01

    A summary of the technical advances that are incorporated in the fourth major release of the Q-Chem quantum chemistry program is provided, covering approximately the last seven years. These include developments in density functional theory methods and algorithms, nuclear magnetic resonance (NMR) property evaluation, coupled cluster and perturbation theories, methods for electronically excited and open-shell species, tools for treating extended environments, algorithms for walking on potential surfaces, analysis tools, energy and electron transfer modelling, parallel computing capabilities, and graphical user interfaces. In addition, a selection of example case studies that illustrate these capabilities is given. These include extensive benchmarks of the comparative accuracy of modern density functionals for bonded and non-bonded interactions, tests of attenuated second order Møller-Plesset (MP2) methods for intermolecular interactions, a variety of parallel performance benchmarks, and tests of the accuracy of implicit solvation models. Some specific chemical examples include calculations on the strongly correlated Cr2 dimer, exploring zeolite-catalysed ethane dehydrogenation, energy decomposition analysis of a charged ter-molecular complex arising from glycerol photoionisation, and natural transition orbitals for a Frenkel exciton state in a nine-unit model of a self-assembling nanotube.

  4. A model based method for recognizing psoas major muscles in torso CT images

    NASA Astrophysics Data System (ADS)

    Kamiya, Naoki; Zhou, Xiangrong; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi

    2010-03-01

    In aging societies, it is important to analyze age-related hypokinesia. The psoas major muscle has important functional roles, such as balance and posture control, which can be assessed through its cross-sectional area (CSA), volume, and thickness. In current clinical practice, however, these values are calculated manually. The purpose of our study is to propose an automated method for recognizing the psoas major muscles in X-ray torso CT images. The proposed recognition process involves three steps: 1) determination of anatomical points such as the origin and insertion of the psoas major muscle, 2) generation of a shape model for the psoas major muscle, and 3) recognition of the psoas major muscles using the shape model. The model was built from a quadratic function fitted to the anatomical center line of the psoas major muscle. The shape model was generated using 20 CT cases and tested on 20 other CT cases; the database comprised 12 male and 8 female cases ranging in age from the 40s to the 80s. The average Jaccard similarity coefficient (JSC) value used in the evaluation was 0.7. Our experimental results indicate that the proposed method is effective for volumetric analysis and could be used for quantitative measurement of the psoas major muscles in CT images.
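    The Jaccard similarity coefficient used in the evaluation above has a compact definition; as an illustrative sketch (the voxel sets below are hypothetical, not from the study's data):

```python
def jaccard(seg, truth):
    """Jaccard similarity coefficient between two voxel sets:
    |A & B| / |A | B| -- 1.0 for perfect overlap, 0.0 for none."""
    a, b = set(seg), set(truth)
    return len(a & b) / len(a | b)

# Hypothetical voxel index sets for an automated and a manual segmentation.
auto = {(1, 1), (1, 2), (2, 1), (2, 2)}
manual = {(1, 2), (2, 1), (2, 2), (3, 2)}
print(round(jaccard(auto, manual), 2))  # 3 shared voxels / 5 total = 0.6
```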

  5. Deep neural networks for direct, featureless learning through observation: The case of two-dimensional spin models

    NASA Astrophysics Data System (ADS)

    Mills, Kyle; Tamblyn, Isaac

    2018-03-01

    We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4×4 Ising model. Using its success at this task, we motivate the study of the larger 8×8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after only seeing a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with equivalent accuracy to the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent; it is able to make predictions with a high degree of accuracy, and do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by looking at the weights learned in a simplified demonstration.
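    The nearest-neighbor Ising Hamiltonian that the network learns is straightforward to evaluate directly; a minimal sketch with periodic boundary conditions and coupling J = 1 (conventions assumed here, not taken from the paper):

```python
def ising_energy(spins):
    """Nearest-neighbour Ising energy E = -J * sum_{<ij>} s_i s_j (J = 1)
    on an L x L lattice with periodic boundary conditions. Each bond is
    counted once by pairing every site with its right and down neighbour."""
    L = len(spins)
    E = 0
    for i in range(L):
        for j in range(L):
            E -= spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
    return E

L = 4
all_up = [[1] * L for _ in range(L)]       # fully aligned configuration
print(ising_energy(all_up))               # ground state: -2 * L * L = -32
```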

  6. A space-time scan statistic for detecting emerging outbreaks.

    PubMed

    Tango, Toshiro; Takahashi, Kunihiko; Kohriyama, Kazuaki

    2011-03-01

    As a major analytical method for outbreak detection, Kulldorff's space-time scan statistic (2001, Journal of the Royal Statistical Society, Series A 164, 61-72) has been implemented in many syndromic surveillance systems. Since, however, it is based on circular windows in space, it has difficulty correctly detecting actual noncircular clusters. Takahashi et al. (2008, International Journal of Health Geographics 7, 14) proposed a flexible space-time scan statistic with the capability of detecting noncircular areas. It seems to us, however, that the detection of the most likely cluster defined in these space-time scan statistics is not the same as the detection of localized emerging disease outbreaks, because the former compares the observed number of cases with the conditional expected number of cases. In this article, we propose a new space-time scan statistic which compares the observed number of cases with the unconditional expected number of cases, takes time-to-time variation of the Poisson mean into account, and implements an outbreak model to capture localized emerging disease outbreaks in a more timely and accurate manner. The proposed models are illustrated with data from weekly surveillance of the number of absentees in primary schools in Kitakyushu-shi, Japan, 2006. © 2010, The International Biometric Society.
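    The core scan-statistic idea of scoring windows by comparing observed with expected counts can be sketched with the standard Poisson log-likelihood ratio; the windows and counts below are hypothetical, and this sketch is not the authors' exact statistic:

```python
from math import log

def poisson_llr(observed, expected):
    """Poisson log-likelihood ratio for one scanning window, comparing
    the observed count with its (unconditional) expectation; returns 0
    when the window shows no excess over expectation."""
    if observed <= expected:
        return 0.0
    return observed * log(observed / expected) - (observed - expected)

# Hypothetical weekly absentee counts per window and baseline expectations.
windows = {"school A": (14, 6.0), "school B": (7, 6.5), "school C": (21, 5.5)}
scores = {w: poisson_llr(o, e) for w, (o, e) in windows.items()}
print(max(scores, key=scores.get))  # the window flagged as most anomalous
```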

  7. Experimental and Analytical Characterization of the Macromechanical Response for Triaxial Braided Composite Materials

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.

    2013-01-01

    Increasingly, carbon composite structures are being used in aerospace applications. Their high-strength, high-stiffness, and low-weight properties make them good candidates for replacing many aerospace structures currently made of aluminum or steel. Recently, many of the aircraft engine manufacturers have developed new commercial jet engines that will use composite fan cases. Instead of using traditional composite layup techniques, these new fan cases will use a triaxially braided pattern, which improves case performance. The impact characteristics of composite materials for jet engine fan case applications have been an important research topic because Federal regulations require that an engine case be able to contain a blade and blade fragments during an engine blade-out event. Once the impact characteristics of these triaxial braided materials become known, computer models can be developed to simulate a jet engine blade-out event, thus reducing cost and time in the development of these composite jet engine cases. The two main problems that have arisen in this area of research are that the properties for these materials have not been fully determined and computationally efficient computer models, which incorporate much of the microscale deformation and failure mechanisms, are not available. The research reported herein addresses some of the deficiencies present in previous research regarding these triaxial braided composite materials. The current research develops new techniques to accurately quantify the material properties of the triaxial braided composite materials. New test methods are developed for the polymer resin composite constituent and representative composite coupons. These methods expand previous research by using novel specimen designs along with using a noncontact measuring system that is also capable of identifying and quantifying many of the microscale failure mechanisms present in the materials. 
Finally, using the data gathered, a new hybrid micromacromechanical computer model is created to simulate the behavior of these composite material systems under static and ballistic impact loading using the test data acquired. The model also quantifies the way in which the fiber/matrix interface affects material response under static and impact loading. The results show that the test methods are capable of accurately quantifying the polymer resin under a variety of strain rates and temperature for three loading conditions. The resin strength and stiffness data show a clear rate and temperature dependence. The data also show the hydrostatic stress effects and hysteresis, all of which can be used by researchers developing composite constitutive models for the resins. The results for the composite data reveal noticeable differences in strength, failure strain, and stiffness in the different material systems presented. The investigations into the microscale failure mechanisms provide information about the nature of the different material system behaviors. Finally, the developed computer model predicts composite static strength and stiffness to within 10 percent of the gathered test data and also agrees with composite impact data, where available.

  8. Analytical skin friction and heat transfer formula for compressible internal flows

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.; Tattar, Marc J.

    1994-01-01

    An analytic, closed-form friction formula for turbulent, internal, compressible, fully developed flow was derived by extending the incompressible law-of-the-wall relation to compressible cases. The model is capable of analyzing heat transfer as a function of constant surface temperatures and surface roughness as well as analyzing adiabatic conditions. The formula reduces to Prandtl's law of friction for adiabatic, smooth, axisymmetric flow. In addition, the formula reduces to the Colebrook equation for incompressible, adiabatic, axisymmetric flow with various roughnesses. Comparisons with available experiments show that the model averages roughly 12.5 percent error for adiabatic flow and 18.5 percent error for flow involving heat transfer.
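    For the incompressible, adiabatic limit mentioned above, the Colebrook equation can be solved numerically by fixed-point iteration; a minimal sketch of that classical relation (not the authors' compressible formula):

```python
from math import log10, sqrt

def colebrook(Re, rel_rough, tol=1e-10):
    """Darcy friction factor f from the Colebrook equation,
        1/sqrt(f) = -2 log10(rel_rough/3.7 + 2.51/(Re*sqrt(f))),
    solved by fixed-point iteration on x = 1/sqrt(f)."""
    x = 1.0 / sqrt(0.02)  # initial guess for 1/sqrt(f)
    while True:
        x_new = -2.0 * log10(rel_rough / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            return 1.0 / x_new ** 2
        x = x_new

f_smooth = colebrook(Re=1e5, rel_rough=0.0)   # smooth pipe
f_rough = colebrook(Re=1e5, rel_rough=1e-3)   # relative roughness eps/D = 0.001
print(round(f_smooth, 4))  # ~0.018 for a smooth pipe at Re = 1e5
```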

  9. Speech transformation system (spectrum and/or excitation) without pitch extraction

    NASA Astrophysics Data System (ADS)

    Seneff, S.

    1980-07-01

    A speech analysis-synthesis system was developed which is capable of independent manipulation of the fundamental frequency and spectral envelope of a speech waveform. The system deconvolved the original speech with the spectral envelope estimate to obtain a model for the excitation; explicit pitch extraction was not required and, as a consequence, the transformed speech was more natural sounding than would be the case if the excitation were modeled as a sequence of pulses. It is shown that the system has applications in the areas of voice modification, baseband excited vocoders, time scale modification, and frequency compression as an aid to the partially deaf.

  10. Complete analysis of steady and transient missile aerodynamic/propulsive/plume flowfield interactions

    NASA Astrophysics Data System (ADS)

    York, B. J.; Sinha, N.; Dash, S. M.; Hosangadi, A.; Kenzakowski, D. C.; Lee, R. A.

    1992-07-01

    The analysis of steady and transient aerodynamic/propulsive/plume flowfield interactions utilizing several state-of-the-art computer codes (PARCH, CRAFT, and SCHAFT) is discussed. These codes have been extended to include advanced turbulence models, generalized thermochemistry, and multiphase nonequilibrium capabilities. Several specialized versions of these codes have been developed for specific applications. This paper presents a brief overview of these codes followed by selected cases demonstrating steady and transient analyses of conventional as well as advanced missile systems. Areas requiring upgrades include turbulence modeling in a highly compressible environment and the treatment of particulates in general. Recent progress in these areas is highlighted.

  11. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2013-02-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfit of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and (iii) makes it possible to infer the relative importance of the input variables, which may help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies (Marina catchment (Singapore) and Canning River (Western Australia)), representing two different morphoclimatic contexts, in comparison with other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in computational requirements when adopted on large datasets. In addition, the input-variable ranking provided can be given a physically meaningful interpretation.
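    The "extremely randomized" split rule that distinguishes Extra-Trees from CART can be sketched in a few lines; this toy single-split (stump) ensemble is only illustrative of the idea, not the full algorithm, and the data are hypothetical:

```python
import random

def random_split(X, y, rng):
    """Extra-Trees-style split: pick a random feature and a random
    cut-point drawn uniformly within that feature's range (no exhaustive
    search over candidate thresholds, which is what makes the method fast)."""
    f = rng.randrange(len(X[0]))
    lo, hi = min(x[f] for x in X), max(x[f] for x in X)
    t = rng.uniform(lo, hi)
    left = [yi for x, yi in zip(X, y) if x[f] <= t]
    right = [yi for x, yi in zip(X, y) if x[f] > t]
    return f, t, left, right

def stump_predict(X, y, x_new, n_trees=200, seed=0):
    """Average the predictions of many one-split randomized trees."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_trees):
        f, t, left, right = random_split(X, y, rng)
        side = left if x_new[f] <= t else right
        preds.append(sum(side) / len(side) if side else sum(y) / len(y))
    return sum(preds) / len(preds)

# Toy "streamflow" data: y follows feature 0 (e.g. rainfall); feature 1 is noise.
X = [[0, 5], [1, 3], [2, 8], [3, 1], [4, 9], [5, 2]]
y = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
print(stump_predict(X, y, [4.5, 7]))  # high rainfall -> high predicted flow
```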

  12. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfit of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) makes it possible to infer the relative importance of the input variables, which may help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in computational requirements when adopted on large datasets. In addition, the input-variable ranking provided can be given a physically meaningful interpretation.

  13. Using WNTR to Model Water Distribution System Resilience ...

    EPA Pesticide Factsheets

    The Water Network Tool for Resilience (WNTR) is a new open source Python package developed by the U.S. Environmental Protection Agency and Sandia National Laboratories to model and evaluate the resilience of water distribution systems. WNTR can be used to simulate a wide range of disruptive events, including earthquakes, contamination incidents, floods, climate change, and fires. The software includes the EPANET solver as well as a WNTR solver with the ability to model pressure-driven demand hydraulics, pipe breaks, component degradation and failure, changes to supply and demand, and cascading failure. Damage to individual components in the network (e.g., pipes, tanks) can be selected probabilistically using fragility curves. WNTR can also simulate different types of resilience-enhancing actions, including scheduled pipe repair or replacement, water conservation efforts, addition of back-up power, and use of contamination warning systems. The software can be used to estimate potential damage in a network, evaluate preparedness, prioritize repair strategies, and identify worst-case scenarios. As a Python package, WNTR takes advantage of many existing Python capabilities, including parallel processing of scenarios and graphics. This presentation will outline the modeling components in WNTR, demonstrate their use, explain how to get started using the code, and invite others to participate in this open source project.

  14. The Seasat SAR Wind and Ocean Wave Monitoring Capabilities: A case study for pass 1339m

    NASA Technical Reports Server (NTRS)

    Beal, R. C.

    1980-01-01

    A well organized low energy 11 sec. swell system off the East Coast of the U.S. was detected with the Seasat Synthetic Aperture Radar and successfully tracked from deep water, across the continental shelf, and into shallow water. In addition, a less organized 7 sec. system was tentatively identified in the imagery. Both systems were independently confirmed with simultaneous wave spectral measurements from a research pier, aircraft laser profilometer data, and Fleet Numerical Spectral Ocean Wave Models.

  15. Ion Beam Analysis of Diffusion in Diamondlike Carbon Films

    NASA Astrophysics Data System (ADS)

    Chaffee, Kevin Paul

    The Van de Graaff accelerator facility at Case Western Reserve University was developed into an analytical research center capable of performing Rutherford Backscattering Spectrometry, Elastic Recoil Detection Analysis for hydrogen profiling, Proton Enhanced Scattering, and ^4 He resonant scattering for ^{16 }O profiling. These techniques were applied to the study of Au, Na^+, Cs^+, and H_2O diffusion in a-C:H films. The results are consistent with the fully constrained network model of the microstructure as described by Angus and Jansen.

  16. Introduction to GRASP - General rotorcraft aeromechanical stability program - A modern approach to rotorcraft modeling

    NASA Technical Reports Server (NTRS)

    Hodges, D. H.; Hopkins, A. S.; Kunz, D. L.; Hinnant, H. E.

    1986-01-01

    The General Rotorcraft Aeromechanical Stability Program (GRASP), which is a hybrid between finite element programs and spacecraft-oriented multibody programs, is described in terms of its design and capabilities. Numerical results from GRASP are presented and compared with the results from an existing, special-purpose coupled rotor/body aeromechanical stability program and with experimental data of Dowell and Traybar (1975 and 1977) for large deflections of an end-loaded cantilevered beam. The agreement is excellent in both cases.

  17. A fuelwood plantation site selection procedure using geographic information system technology: A case study in support of the NASA Global Habitability Program

    NASA Technical Reports Server (NTRS)

    Roller, N. E. G.; Colwell, J. E.; Sellman, A. N.

    1985-01-01

    A study undertaken in support of NASA's Global Habitability Program is described. A demonstration of geographic information system (GIS) technology for site evaluation and selection is given. The objective was to locate potential fuelwood plantations within a 50 km radius of Nairobi, Kenya. A model was developed to evaluate site potential based on capability and suitability criteria and implemented using the Environmental Research Institute of Michigan's geographic information system.

  18. Statistical techniques for the characterization of partially observed epidemics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safta, Cosmin; Ray, Jaideep; Crary, David

    These techniques appear promising for constructing an integrated, automated detect-and-characterize capability for epidemics: working from biosurveillance data, they provide information on the particular ongoing outbreak. Potential uses include crisis management, planning, and resource allocation; the parameter-estimation capability is well suited to supplying the input parameters of an agent-based model, such as index cases, time of infection, and infection rate. Non-communicable diseases are easier to characterize than communicable ones: a small anthrax release can be characterized well with 7-10 days of post-detection data, plague takes longer, and large attacks are very easy to characterize.

  19. 1DTempPro V2: new features for inferring groundwater/surface-water exchange

    USGS Publications Warehouse

    Koch, Franklin W.; Voytek, Emily B.; Day-Lewis, Frederick D.; Healy, Richard W.; Briggs, Martin A.; Lane, John W.; Werkema, Dale D.

    2016-01-01

    A new version of the computer program 1DTempPro extends the original code to include new capabilities for (1) automated parameter estimation, (2) layer heterogeneity, and (3) time-varying specific discharge. The code serves as an interface to the U.S. Geological Survey model VS2DH and supports analysis of vertical one-dimensional temperature profiles under saturated flow conditions to assess groundwater/surface-water exchange and estimate hydraulic conductivity for cases where hydraulic head is known.

  20. TEMPEST code simulations of hydrogen distribution in reactor containment structures. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trent, D.S.; Eyler, L.L.

    The mass transport version of the TEMPEST computer code was used to simulate hydrogen distribution in geometric configurations relevant to reactor containment structures. Predicted results of Battelle-Frankfurt hydrogen distribution tests 1 to 6, and 12 are presented. Agreement between predictions and experimental data is good. Best agreement is obtained using the k-epsilon turbulence model in TEMPEST in flow cases where turbulent diffusion and stable stratification are dominant mechanisms affecting transport. The code's general analysis capabilities are summarized.

  1. Brownian Motion at Lipid Membranes: A Comparison of Hydrodynamic Models Describing and Experiments Quantifying Diffusion within Lipid Bilayers.

    PubMed

    Block, Stephan

    2018-05-22

    The capability of lipid bilayers to exhibit fluid-phase behavior is a fascinating property, which enables, for example, membrane-associated components, such as lipids (domains) and transmembrane proteins, to diffuse within the membrane. These diffusion processes are of paramount importance for cells, as they are for example involved in cell signaling processes or the recycling of membrane components, but also for recently developed analytical approaches, which use differences in the mobility for certain analytical purposes, such as in-membrane purification of membrane proteins or the analysis of multivalent interactions. Here, models describing the Brownian motion of membrane inclusions (lipids, peptides, proteins, and complexes thereof) in model bilayers (giant unilamellar vesicles, black lipid membranes, supported lipid bilayers) are summarized and model predictions are compared with the available experimental data, thereby allowing for evaluating the validity of the introduced models. It will be shown that models describing the diffusion in freestanding (Saffman-Delbrück and Hughes-Pailthorpe-White model) and supported bilayers (the Evans-Sackmann model) are well supported by experiments, though only a few experimental studies have been published so far for the latter case, calling for additional tests to reach the same level of experimental confirmation that is currently available for the case of freestanding bilayers.
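    The Saffman-Delbrück model referenced above gives a closed-form diffusion coefficient for a cylindrical inclusion in a freestanding bilayer; a sketch with illustrative, assumed parameter values (not values from the review):

```python
from math import log, pi

KB = 1.380649e-23      # Boltzmann constant, J/K
GAMMA = 0.5772156649   # Euler-Mascheroni constant

def saffman_delbruck(radius, eta_m=0.1, h=4e-9, eta_w=1e-3, T=298.0):
    """Saffman-Delbrück diffusion coefficient (m^2/s) for a cylindrical
    inclusion of the given radius (m) in a freestanding bilayer:
        D = kT / (4 pi eta_m h) * (ln(L_sd / radius) - gamma),
    with Saffman-Delbrück length L_sd = eta_m * h / (2 * eta_w) for water
    on both sides. Membrane viscosity eta_m, thickness h, etc. are
    illustrative order-of-magnitude assumptions."""
    l_sd = eta_m * h / (2.0 * eta_w)
    return KB * T / (4.0 * pi * eta_m * h) * (log(l_sd / radius) - GAMMA)

# The hallmark of the SD regime: only a weak, logarithmic size dependence.
d_lipid = saffman_delbruck(1e-9)    # ~1 nm radius (lipid-sized)
d_protein = saffman_delbruck(4e-9)  # ~4 nm radius (protein complex)
print(d_lipid / d_protein)          # modest ratio despite 4x radius change
```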

  2. Using Vision Metrology System for Quality Control in Automotive Industries

    NASA Astrophysics Data System (ADS)

    Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.

    2012-07-01

    The need for more accurate measurements at different stages of industrial work, such as design, production, and installation, is the main reason industry has been encouraged to adopt industrial photogrammetry (vision metrology systems). Given the main advantages of photogrammetric methods, such as greater economy, a high level of automation, noncontact measurement, flexibility, and high accuracy, the method competes well with traditional industrial measurement techniques. For industries that manufacture objects from a master physical reference without any mathematical model of it, the producer's main problem is evaluating the production line. The problem is especially difficult when both the reference and the product exist only as physical objects, so they can be compared only by direct measurement. In such cases, producers build fixtures that fit the reference with limited accuracy; in practice, the achievable precision is sometimes no better than millimetres. We used a non-metric, high-resolution digital camera for this investigation, and the case study examined in this paper is an automobile chassis. A stable photogrammetric network was designed for measuring the industrial object (both reference and product), and the differences between the reference and the product were then obtained using bundle adjustment and self-calibration methods. These differences help the producer improve the production workflow and deliver more accurate products. The results demonstrate the high potential of the proposed method in industrial settings and, based on RMSE criteria, its efficiency and reliability: the RMSE achieved for this case study is smaller than 200 microns, confirming the capability of the implemented approach.

  3. Application of genetic algorithm for the simultaneous identification of atmospheric pollution sources

    NASA Astrophysics Data System (ADS)

    Cantelli, A.; D'Orta, F.; Cattini, A.; Sebastianelli, F.; Cedola, L.

    2015-08-01

    A computational model is developed for retrieving the positions and the emission rates of unknown pollution sources, under steady state conditions, starting from measurements of the pollutant concentrations. The approach is based on the minimization of a fitness function employing a genetic algorithm paradigm. The model is tested both on pollutant concentrations generated through a Gaussian model at 25 points in a 3-D test case domain (1000 m × 1000 m × 50 m) and on experimental data such as the Prairie Grass field experiments, in which about 600 receptors were located along five concentric semicircular arcs, and the Fusion Field Trials 2007. The results show that the computational model is capable of efficiently retrieving up to three different unknown sources.
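    The retrieval approach can be sketched as a small genetic algorithm minimizing a fitness function over source position and emission rate; the forward model below is a toy inverse-square stand-in for the Gaussian dispersion model, and all names and parameters are hypothetical:

```python
import random

def forward(src, receptors):
    """Toy stand-in for the Gaussian dispersion model: concentration
    decays with squared distance from the source (x, y, rate)."""
    x, y, q = src
    return [q / (1.0 + (x - rx) ** 2 + (y - ry) ** 2) for rx, ry in receptors]

def fitness(src, receptors, measured):
    """Sum of squared misfits between predicted and measured concentrations."""
    pred = forward(src, receptors)
    return sum((p - m) ** 2 for p, m in zip(pred, measured))

def genetic_search(receptors, measured, pop=60, gens=120, seed=1):
    """Elitist GA: keep the best quarter, refill with crossover + mutation."""
    rng = random.Random(seed)
    P = [(rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(0, 10))
         for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda s: fitness(s, receptors, measured))
        elite = P[: pop // 4]                    # selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = tuple((ai + bi) / 2 + rng.gauss(0, 0.2)   # crossover + mutation
                          for ai, bi in zip(a, b))
            children.append(child)
        P = elite + children
    return min(P, key=lambda s: fitness(s, receptors, measured))

receptors = [(i, j) for i in range(0, 10, 2) for j in range(0, 10, 2)]
true_src = (3.0, 7.0, 5.0)
measured = forward(true_src, receptors)
est = genetic_search(receptors, measured)
print([round(v, 1) for v in est])  # should land near the true source (3, 7, 5)
```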

  4. Predicting language diversity with complex networks

    PubMed Central

    Gubiec, Tomasz

    2018-01-01

    We analyze a model of social interactions with coevolution of the topology and the states of the nodes. This model can be interpreted as a model of language change. We propose different rewiring mechanisms and perform numerical simulations for each. The results obtained are compared with empirical data gathered from two online databases and an anthropological study of the Solomon Islands. We study the behavior of the number of languages for different system sizes and find that only local rewiring, i.e. triadic closure, is capable of reproducing the empirical data in a qualitative manner. Furthermore, we resolve the contradiction between previous models and the Solomon Islands case. Our results demonstrate the importance of the topology of the network and of the rewiring mechanism in the process of language change. PMID:29702699

  5. Parallel ALLSPD-3D: Speeding Up Combustor Analysis Via Parallel Processing

    NASA Technical Reports Server (NTRS)

    Fricker, David M.

    1997-01-01

    The ALLSPD-3D Computational Fluid Dynamics code for reacting flow simulation was run on a set of benchmark test cases to determine its parallel efficiency. These test cases included non-reacting and reacting flow simulations with varying numbers of processors. The tests also explored the effects of scaling the simulation with the number of processors in addition to distributing a constant-size problem over an increasing number of processors. The test cases were run on a cluster of IBM RS/6000 Model 590 workstations with Ethernet and ATM networking, plus a shared-memory SGI Power Challenge L workstation. The results indicate that the network capabilities significantly influence the parallel efficiency, i.e., a shared-memory machine is fastest and ATM networking provides acceptable performance. The limitations of Ethernet greatly hamper the rapid calculation of flows using ALLSPD-3D.
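    The parallel-efficiency metrics underlying such benchmarks are simple to compute; a sketch with hypothetical timings (not the reported ALLSPD-3D numbers):

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Classic strong-scaling metrics: speedup S = T1 / Tp and
    parallel efficiency E = S / p (1.0 means ideal scaling)."""
    s = t_serial / t_parallel
    return s, s / n_procs

# Hypothetical wall-clock times (s) for a fixed-size reacting-flow case.
for p, tp in [(1, 1200.0), (4, 340.0), (8, 200.0)]:
    s, e = speedup_and_efficiency(1200.0, tp, p)
    print(f"{p} procs: speedup {s:.2f}, efficiency {e:.0%}")
```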

  6. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad

    2016-09-01

    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty will propagate into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases automatic fitting, which combines geostatistical principles with optimization techniques, is used to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty into the model fitting. This paper seeks to improve the quality of the fitted model by improving the popular objective function (weighted least squares) used in automatic fitting. Also, since the variogram model function and the number of structures (m) also affect model quality, a program has been written in MATLAB that can produce optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single- and multi-structured fitted models, cross-validation is used, and the best model is presented to the user as the output. Three case studies are presented to check the capability of the proposed objective function and procedure.
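    The simulated-annealing fit can be sketched for a single spherical variogram structure minimizing a weighted least-squares objective; the cooling schedule, step sizes, and synthetic data below are illustrative assumptions, not the paper's settings:

```python
import math, random

def spherical(h, nugget, sill, rng_a):
    """Spherical variogram model with nugget, partial sill, and range."""
    if h >= rng_a:
        return nugget + sill
    r = h / rng_a
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

def wls(params, lags, gammas, weights):
    """Weighted least-squares misfit between model and experimental points."""
    return sum(w * (spherical(h, *params) - g) ** 2
               for h, g, w in zip(lags, gammas, weights))

def anneal(lags, gammas, weights, seed=0, steps=20000):
    """Metropolis-style simulated annealing over (nugget, sill, range)."""
    rng = random.Random(seed)
    cur = [0.5, 1.0, 5.0]            # rough starting guess
    cur_f, T = wls(cur, lags, gammas, weights), 1.0
    for _ in range(steps):
        cand = [max(1e-6, p + rng.gauss(0, 0.05)) for p in cur]
        f = wls(cand, lags, gammas, weights)
        if f < cur_f or rng.random() < math.exp((cur_f - f) / T):
            cur, cur_f = cand, f
        T = max(1e-4, T * 0.9995)    # geometric cooling schedule
    return cur, cur_f

# Synthetic experimental variogram from a known model (nugget 0.2, sill 0.8, range 6).
lags = [1, 2, 3, 4, 5, 6, 7, 8]
gammas = [spherical(h, 0.2, 0.8, 6.0) for h in lags]
weights = [1.0] * len(lags)
params, obj = anneal(lags, gammas, weights)
print([round(p, 2) for p in params], round(obj, 4))
```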

  7. Application of constraint-based satellite mission planning model in forest fire monitoring

    NASA Astrophysics Data System (ADS)

    Guo, Bingjun; Wang, Hongfei; Wu, Peng

    2017-10-01

    In this paper, a constraint-based satellite mission planning model is established on the basis of constraint satisfaction. It includes target, request, observation, satellite, payload, and other elements, linked together by constraints. The optimization goal of the model is to make full use of time and resources and to improve the efficiency of target observation. A greedy algorithm is used to solve the model and produce the observation plan and the data-transmission plan. Two simulation experiments are designed and carried out: routine monitoring of global forest fires and emergency monitoring of forest fires in Australia. The simulation results show that the model and algorithm perform well and that the model has good emergency response capability. With this model, efficient and reasonable plans can be worked out to meet users' needs in complex cases involving multiple payloads, multiple targets, and variable priorities.
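
    The abstract does not spell out the algorithm, but a greedy planner of this kind typically ranks requests and fills the timeline with non-conflicting observation windows. A minimal sketch under those assumptions, with hypothetical requests:

    ```python
    # Hypothetical greedy planner: take requests in descending priority order and
    # accept each one whose visibility window does not overlap an accepted window.
    def greedy_plan(requests):
        """requests: list of (priority, start, end). Returns accepted requests."""
        plan, busy = [], []
        for req in sorted(requests, key=lambda r: -r[0]):
            _, s, e = req
            if all(e <= bs or s >= be for bs, be in busy):
                plan.append(req)
                busy.append((s, e))
        return plan

    requests = [(5, 0, 10), (3, 5, 15), (4, 12, 20), (1, 18, 25)]
    print(greedy_plan(requests))  # [(5, 0, 10), (4, 12, 20)]
    ```

    Variable priorities are handled naturally here: re-sorting on updated priorities (e.g., when an emergency fire-monitoring request arrives) yields a new plan.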

  8. Thermal loading of natural streams

    USGS Publications Warehouse

    Jackman, Alan P.; Yotsukura, Nobuhiro

    1977-01-01

    The impact of thermal loading on the temperature regime of natural streams is investigated by mathematical models, which describe both transport (convection-diffusion) and decay (surface dissipation) of waste heat over 1-hour or shorter time intervals. The models are derived from the principle of conservation of thermal energy for application to one- and two-dimensional spaces. The basic concept in these models is to separate water temperature into two parts, (1) excess temperature due to thermal loading and (2) natural (ambient) temperature. This separation allows excess temperature to be calculated from the models without incoming radiation data. Natural temperature may either be measured in prototypes or calculated from the model. If use is made of the model, however, incoming radiation is required as input data. Comparison of observed and calculated temperatures in seven natural streams shows that the models are capable of predicting transient temperature regimes satisfactorily in most cases. (Woodard-USGS)
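
    The separation described above (total temperature = natural temperature + excess temperature) can be illustrated with a one-dimensional steady-state sketch in which excess heat decays downstream through surface dissipation. All numeric values below are hypothetical and the constant-coefficient exponential form is a simplification of the report's transient models.

    ```python
    import math

    RHO_C = 4.18e6  # volumetric heat capacity of water, J/(m^3 * C)

    def excess_temp(x, te0, K=40.0, width=20.0, discharge=10.0):
        """Excess temperature (C) at distance x (m) downstream of the outfall.

        te0: excess temperature at the outfall, C
        K: surface heat-exchange coefficient, W/(m^2 * C)  (hypothetical value)
        width: stream surface width, m; discharge: stream flow, m^3/s
        """
        return te0 * math.exp(-K * width * x / (RHO_C * discharge))

    # Superposition: total temperature = natural (ambient) + excess.
    t_natural = 15.0
    print(t_natural + excess_temp(50_000.0, 5.0))  # ambient plus decayed excess
    ```

    The practical advantage noted in the abstract follows directly: the excess-temperature term needs no incoming-radiation data, since radiation enters only through the natural-temperature term.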

  9. Advancing Climate Change and Impacts Science Through Climate Informatics

    NASA Astrophysics Data System (ADS)

    Lenhardt, W.; Pouchard, L. C.; King, A. W.; Branstetter, M. L.; Kao, S.; Wang, D.

    2010-12-01

    This poster will outline the work to date on developing a climate informatics capability at Oak Ridge National Laboratory (ORNL). The central proposition of this effort is that the application of informatics and information science to the domain of climate change science is an essential means to bridge the realm of high performance computing (HPC) and domain science. The goal is to facilitate knowledge capture and the creation of new scientific insights. For example, a climate informatics capability will help with the understanding and use of model results in domain sciences that were not originally in the scope. From there, HPC can also benefit from feedback as the new approaches may lead to better parameterization in the models. In this poster we will summarize the challenges associated with climate change science that can benefit from the systematic application of informatics and we will highlight our work to date in creating the climate informatics capability to address these types of challenges. We have identified three areas that are particularly challenging in the context of climate change science: 1) integrating model and observational data across different spatial and temporal scales, 2) model linkages, i.e. climate models linked to other models such as hydrologic models, and 3) model diagnostics. Each of these has a methodological component and an informatics component. Our project under way at ORNL seeks to develop new approaches and tools in the context of linking climate change and water issues. We are basing our work on the following four use cases: 1) Evaluation/test of CCSM4 biases in hydrology (precipitation, soil water, runoff, river discharge) over the Rio Grande Basin. User: climate modeler. 2) Investigation of projected changes in hydrology of Rio Grande Basin using the VIC (Variable Infiltration Capacity Macroscale) Hydrologic Model. User: watershed hydrologist/modeler. 
3) Impact of climate change on agricultural productivity of the Rio Grande Basin. User: climate impact scientist, agricultural economist. 4) Renegotiation of the 1944 “Treaty for the Utilization of Waters of the Colorado and Tijuana Rivers and of the Rio Grande”. User: A US State Department analyst or their counterpart in Mexico.

  10. Genetic programming based quantitative structure-retention relationships for the prediction of Kovats retention indices.

    PubMed

    Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S

    2015-11-13

    The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as the Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in QSRR development in gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that, given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting the KRIs of light hydrocarbons (case study I) and adamantane derivatives (case study II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both case studies have yielded high (>0.9) values of the coefficient of determination (R(2)) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for the training, test and validation set data. The characteristic feature of this study is that it introduces a practical and effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science.
Copyright © 2015 Elsevier B.V. All rights reserved.
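
    The abstract judges models by R², RMSE, and MAPE. For reference, a minimal sketch of those three metrics, evaluated on hypothetical predicted versus observed retention indices (not data from the paper):

    ```python
    import math

    def metrics(y_true, y_pred):
        """Return (R^2, RMSE, MAPE%) for observed vs. predicted values."""
        n = len(y_true)
        mean = sum(y_true) / n
        ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
        ss_tot = sum((t - mean) ** 2 for t in y_true)
        r2 = 1.0 - ss_res / ss_tot
        rmse = math.sqrt(ss_res / n)
        mape = 100.0 / n * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred))
        return r2, rmse, mape

    # Hypothetical Kovats indices: observed vs. model predictions.
    r2, rmse, mape = metrics([600.0, 700.0, 800.0], [590.0, 710.0, 805.0])
    ```

    Reporting all three on training, test, and validation sets, as the paper does, guards against a model that fits the training data but generalizes poorly.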

  11. The SCEC Community Modeling Environment (SCEC/CME) - An Overview of its Architecture and Current Capabilities

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Minster, B.; Moore, R.; Kesselman, C.; SCEC ITR Collaboration

    2004-12-01

    The Southern California Earthquake Center (SCEC), in collaboration with the San Diego Supercomputer Center, the USC Information Sciences Institute, the Incorporated Research Institutions for Seismology, and the U.S. Geological Survey, is developing the Southern California Earthquake Center Community Modeling Environment (CME) under a five-year grant from the National Science Foundation's Information Technology Research (ITR) Program, jointly funded by the Geosciences and Computer and Information Science & Engineering Directorates. The CME system is an integrated geophysical simulation modeling framework that automates the process of selecting, configuring, and executing models of earthquake systems. During the Project's first three years, we have performed fundamental geophysical and information technology research and have also developed substantial system capabilities, software tools, and data collections that can help scientists perform systems-level earthquake science. The CME system provides collaborative tools to facilitate distributed research and development. These collaborative tools are primarily communication tools, providing researchers with access to information in ways that are convenient and useful. The CME system provides collaborators with access to significant computing and storage resources. The computing resources of the Project include in-house servers, Project allocations on the USC High Performance Computing Linux cluster, and allocations on NPACI supercomputers and the TeraGrid. The CME system provides access to SCEC community geophysical models such as the Community Velocity Model, Community Fault Model, Community Crustal Motion Model, and the Community Block Model. The organizations that develop these models often provide access to them directly, so it is not necessary to use the CME system to access them. 
However, in some cases, the CME system supplements the SCEC community models with utility codes that make it easier to use or access these models. In some cases, the CME system also provides alternatives to the SCEC community models. The CME system hosts a collection of community geophysical software codes. These codes include seismic hazard analysis (SHA) programs developed by the SCEC/USGS OpenSHA group. Also, the CME system hosts anelastic wave propagation codes, including Kim Olsen's finite difference code and Carnegie Mellon's Hercules finite element tool chain. The CME system can execute a workflow, that is, a series of geophysical computations using the output of one processing step as the input to a subsequent step. Our workflow capability utilizes grid-based computing software that can submit calculations to a pool of computing resources, as well as data management tools that help us maintain an association between data files and metadata descriptions of those files. The CME system maintains, and provides access to, a collection of valuable geophysical data sets. The current CME Digital Library holdings include a collection of 60 ground motion simulation results calculated by a SCEC/PEER working group and a collection of Green's functions calculated for 33 TriNet broadband receiver sites in the Los Angeles area.

  12. An Integrated Software Package to Enable Predictive Simulation Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang

    The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance-computing (HPC) techniques, but a smooth, well-integrated interplay between HPC applications. This paper presents a new integrated software package that integrates HPC applications and a web-based visualization tool based on a middleware framework. This framework can support the data communication between different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.

  13. Bridging the Gap Between NASA Earth Observations and Decision Makers Through the NASA Develop National Program

    NASA Astrophysics Data System (ADS)

    Remillard, C. M.; Madden, M.; Favors, J.; Childs-Gleason, L.; Ross, K. W.; Rogers, L.; Ruiz, M. L.

    2016-06-01

    The NASA DEVELOP National Program bridges the gap between NASA Earth Science and society by building capacity in both participants and partner organizations that collaborate to conduct projects. These rapid feasibility projects highlight the capabilities of satellite and aerial Earth observations. Immersion of decision and policy makers in these feasibility projects increases awareness of the capabilities of Earth observations and contributes to the tools and resources available to support enhanced decision making. This paper will present the DEVELOP model, best practices, and two case studies, the Colombia Ecological Forecasting project and the Miami-Dade County Ecological Forecasting project, that showcase the successful adoption of tools and methods for decision making. Through over 90 projects each year, DEVELOP is always striving for the innovative, practical, and beneficial use of NASA Earth science data.

  14. Anisotropic composite human skull model and skull fracture validation against temporo-parietal skull fracture.

    PubMed

    Sahoo, Debasis; Deck, Caroline; Yoganandan, Narayan; Willinger, Rémy

    2013-12-01

    A composite material model for the skull, taking damage into account, is implemented in the Strasbourg University finite element head model (SUFEHM) in order to enhance the existing skull mechanical constitutive law. The skull behavior is validated in terms of fracture patterns and contact forces by reconstructing 15 experimental cases. The new SUFEHM skull model is capable of reproducing skull fracture precisely. The composite skull model is validated not only against maximum forces but also, for the first time, against actual force-time curves from PMHS under lateral impact. Skull strain energy is found to be a pertinent parameter for predicting skull fracture, and based on statistical (binary logistic regression) analysis, a 50% risk of skull fracture is observed at a skull strain energy of 544.0 mJ. © 2013 Elsevier Ltd. All rights reserved.
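
    The reported 50% risk point pins down one parameter of such a logistic risk curve: at the 50% point the linear predictor b0 + b1·E is zero, so b0 = -b1·544. A minimal sketch; the slope b1 below is hypothetical, since the abstract reports only the 50% point.

    ```python
    import math

    B1 = 0.01          # hypothetical slope, 1/mJ (not reported in the abstract)
    E50 = 544.0        # strain energy at 50% fracture risk, mJ (from the abstract)
    B0 = -B1 * E50     # chosen so that risk(E50) = 0.5

    def fracture_risk(energy_mj):
        """Binary logistic risk of skull fracture vs. skull strain energy."""
        return 1.0 / (1.0 + math.exp(-(B0 + B1 * energy_mj)))

    print(fracture_risk(544.0))  # 0.5 by construction
    ```

    A steeper (larger) b1 would make the transition from low to high risk sharper around 544 mJ; fitting b1 requires the underlying case data.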

  15. Bioprinting towards Physiologically Relevant Tissue Models for Pharmaceutics.

    PubMed

    Peng, Weijie; Unutmaz, Derya; Ozbolat, Ibrahim T

    2016-09-01

    Improving the ability to predict the efficacy and toxicity of drug candidates earlier in the drug discovery process will speed up the introduction of new drugs into clinics. 3D in vitro systems have significantly advanced the drug screening process as 3D tissue models can closely mimic native tissues and, in some cases, the physiological response to drugs. Among various in vitro systems, bioprinting is a highly promising technology possessing several advantages such as tailored microarchitecture, high-throughput capability, coculture ability, and low risk of cross-contamination. In this opinion article, we discuss the currently available tissue models in pharmaceutics along with their limitations and highlight the possibilities of bioprinting physiologically relevant tissue models, which hold great potential in drug testing, high-throughput screening, and disease modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution in a multi-phase, multi-product reverse supply chain that addresses defects returned to original manufacturers, and develops hybrid algorithms, namely Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. A case study of a multi-phase, multi-product reverse supply chain network demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods.

  17. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper establishes an optimized decision model for production planning and distribution in a multi-phase, multi-product reverse supply chain that addresses defects returned to original manufacturers, and develops hybrid algorithms, namely Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. A case study of a multi-phase, multi-product reverse supply chain network demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057

  18. Advances in Geologic Disposal System Modeling and Shale Reference Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mariner, Paul E.; Stein, Emily R.; Frederick, Jennifer M.

    The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (OFCT) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling (DOE 2011, Table 6). These priorities are directly addressed in the SFWST Generic Disposal Systems Analysis (GDSA) work package, which is charged with developing a disposal system modeling and analysis capability for evaluating disposal system performance for nuclear waste in geologic media (e.g., salt, granite, shale, and deep borehole disposal).

  19. The influence of national policy change on subnational policymaking: Medicaid nursing facility reimbursement in the American states.

    PubMed

    Miller, Edward Alan; Wang, Lili

    2009-06-01

    This study proposes that exogenous shocks emanating from national governments can significantly change health policy processes among subnational units. The relevance of this insight for comparative health policy research is examined in the context of Medicaid nursing facility reimbursement policymaking in the American states. Event history techniques are used to model state adoption of case-mix methods for reimbursing nursing homes under Medicaid from 1980 to 2004. Case-mix adjusts Medicaid nursing home payments for patient acuity, thereby enabling states to pay more for residents with higher care needs and to pay less for residents with lower care needs. The goal is to improve access for more resource intensive Medicaid beneficiaries and to distribute payments more equitably across the providers who serve them. The most noteworthy national policy changes affecting case-mix implementation by state governments were adoption of nursing home quality reform with the Omnibus Budget Reconciliation Act (OBRA) of 1987 and case-mix by Medicare with the Balanced Budget Act (BBA) of 1997. In light of the 1990 and 1999 implementation of OBRA 1987 and the BBA, respectively, five models were estimated, which in addition to covering the entire time period studied (1980-2004) include pre-/post-BBA comparisons (1980-1998, 1999-2004) and pre-/post-OBRA 1987 comparisons (1980-1989, 1990-1998). Results suggest that in contrast to early adoption, which tended to be grounded in the capabilities of innovative states, later adoption tended to take place among less capable states influenced more by the changing federal policy environment. They also highlight the salience of programmatic and fiscal conditions but during the middle of the adoption cycle only. Future research should clarify the ways in which national policy changes influence health policy adoption at the subnational level, both in other nations and across different levels of government.

  20. Fluid-structure interaction in the left ventricle of the human heart coupled with mitral valve

    NASA Astrophysics Data System (ADS)

    Meschini, Valentina; de Tullio, Marco Donato; Querzoli, Giorgio; Verzicco, Roberto

    2016-11-01

    In this paper, Direct Numerical Simulations (DNS), implemented using a fully coupled fluid-structure interaction model for the left ventricle, the mitral valve, and the flowing blood, are combined with laboratory experiments in order to cross-validate the results. A key parameter affecting the flow dynamics is the type of mitral valve: we model two cases, one with a natural mitral valve and another with a prosthetic mechanical one. Our aim is to understand their different effects on the flow inside the left ventricle in order to better investigate the process of valve replacement. We simulate two situations, a healthy left ventricle and a failing one. While in the first case the flow reaches the apex of the left ventricle and washes out the stagnant fluid with both the mechanical and the natural valve, in the second case the disturbance generated by the mechanical leaflets destabilizes the mitral jet, further decreasing its capability to penetrate the ventricular region and potentially giving rise to heart attack or cardiac pathologies in general.

  1. Toward fidelity between specification and implementation

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing

    1994-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
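
    The V&V approach described above, deriving test cases from paths through a state model, can be sketched as follows. The tiny protocol state machine here is hypothetical and stands in for the actual RMP specification, which is far larger.

    ```python
    # Hypothetical protocol state model: (state, event) -> next state.
    TRANSITIONS = {
        ("closed", "join"):        "member",
        ("member", "send"):        "awaiting_acks",
        ("awaiting_acks", "acks"): "member",
        ("member", "leave"):       "closed",
    }

    def run(events, state="closed"):
        """Drive the state model with an event sequence; reject illegal events."""
        for ev in events:
            if (state, ev) not in TRANSITIONS:
                raise ValueError(f"illegal event {ev!r} in state {state!r}")
            state = TRANSITIONS[(state, ev)]
        return state

    # A test case is a transition path plus its expected final state; running the
    # same event sequence against the real implementation checks fidelity.
    assert run(["join", "send", "acks", "leave"]) == "closed"
    ```

    Keeping the model and implementation in fidelity then amounts to re-running the path-derived test cases against each new implementation increment.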

  2. Verification and validation of a reliable multicast protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.

  3. Implementation and validation of a wake model for vortex-surface interactions in low speed forward flight

    NASA Technical Reports Server (NTRS)

    Komerath, Narayanan M.; Schreiber, Olivier A.

    1987-01-01

    The wake model was implemented on a VAX 750 and a MicroVAX II workstation, with online graphics capability provided by the DISSPLA graphics package. The rotor model used by Beddoes was significantly extended to include azimuthal variations due to forward flight and a simplified scheme for locating the critical points where vortex elements are placed. A test case was obtained for validation of the induced-velocity predictions. Comparison of the results indicates that the code requires additional features before satisfactory predictions can be made over the whole rotor disk. Specifically, shed vorticity due to the azimuthal variation of blade loading must be incorporated into the model, and interactions between vortices shed from the four blades of the model rotor must be included. The Scully code for calculating the velocity field is being modified in parallel with these efforts to enable comparison with experimental data. To date, some comparisons with flow visualization data obtained at Georgia Tech have been performed and show good agreement for the isolated rotor case; comparison with time-resolved velocity data obtained at Georgia Tech also shows good agreement. Modifications are being implemented to enable generation of time-averaged results for comparison with NASA data.

  4. A strategy to apply machine learning to small datasets in materials science

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Ling, Chen

    2018-12-01

    There is growing interest in applying machine learning techniques in materials science research. However, although it is recognized that materials datasets are typically smaller, and sometimes more diverse, than those in other fields, the influence of the availability of materials data on the training of machine learning models has not yet been studied, which prevents the establishment of accurate predictive rules from small materials datasets. Here we analyzed the fundamental interplay between the availability of materials data and the predictive capability of machine learning models. Instead of affecting model precision directly, the effect of data size is mediated by the degrees of freedom (DoF) of the model, resulting in an association between precision and DoF. The appearance of this precision-DoF association signals underfitting and is characterized by a large prediction bias, which consequently restricts accurate prediction in unknown domains. We propose incorporating a crude estimation of the property into the feature space to establish ML models from small materials datasets, which increases prediction accuracy without the cost of higher DoF. In three case studies, predicting the band gap of binary semiconductors, lattice thermal conductivity, and the elastic properties of zeolites, the integration of the crude estimation effectively boosted the predictive capability of the machine learning models to state-of-the-art levels, demonstrating the generality of the proposed strategy for constructing accurate machine learning models from small materials datasets.
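
    The core idea, regressing the target on a cheap approximate estimate so the model only learns a small correction, can be sketched as follows. The data and the "crude" rule here are synthetic and hypothetical, and a one-feature least-squares fit stands in for the paper's ML models.

    ```python
    def fit_linear(xs, ys):
        """Ordinary least squares for y = a*x + b with a single feature."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return a, my - a * mx

    # Hypothetical small dataset: true property values and a crude physics-based
    # estimate of the same property (cheap to compute, systematically imperfect).
    true_prop = [1.2, 2.1, 2.9, 4.2, 5.1]
    crude_est = [1.0, 2.0, 3.0, 4.0, 5.0]

    # Regressing the target on the crude estimate means the model only has to
    # learn a small correction, keeping the degrees of freedom (DoF) low.
    a, b = fit_linear(crude_est, true_prop)
    pred = [a * x + b for x in crude_est]
    ```

    With few data points, a low-DoF model anchored on the crude estimate avoids the underfitting-driven bias that a flexible model trained from raw descriptors alone would show.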

  5. An Evolving Model for Capacity Building with Earth Observation Imagery

    NASA Astrophysics Data System (ADS)

    Sylak-Glassman, E. J.

    2015-12-01

    For the first forty years of Earth observation satellite imagery, all imagery was collected by civilian or military governmental satellites. Over this timeframe, countries without Earth observation satellite capabilities had very limited access to Earth observation data or imagery. In response to this limited access, capacity-building efforts were focused on satellite manufacturing. Wood and Weigel (2012) describe the evolution of satellite programs in developing countries with a technology ladder: a country moves up the ladder as it progresses from procuring satellites that come with training services to building satellites locally. While the ladder model may be appropriate if the goal is to develop an autonomous satellite manufacturing capability, in the realm of Earth observation the goal is generally to derive societal benefit from the use of Earth observation-derived information. In this case, the development of Earth observation capacity is more appropriately described by a hub-and-spoke model in which the use of Earth observation imagery is the "hub," and the "spokes" describe the various paths to achieving that capability: building a satellite (either independently or with assistance), purchasing a satellite, participating in a constellation of satellites, and using freely available or purchased satellite imagery. We discuss the capacity-building activities conducted along each of these pathways, such as the "Know-How Transfer and Training" program developed by Surrey Satellite Technology Ltd., Earth observation imagery training courses run by SERVIR in developing countries, and the use of national or regional remote sensing centers (such as those in Morocco, Malaysia, and Kenya) to disseminate imagery and training. 
In addition, we explore the factors that determine through which "spoke" a country arrives at the ability to use Earth observation imagery, and discuss best practices for achieving the capability to use imagery.

  6. A diffuse interface model of grain boundary faceting

    NASA Astrophysics Data System (ADS)

    Abdeljawad, F.; Medlin, D. L.; Zimmerman, J. A.; Hattar, K.; Foiles, S. M.

    2016-06-01

    Interfaces, free or internal, greatly influence the physical properties and stability of materials microstructures. Of particular interest are the processes that occur due to anisotropic interfacial properties. In the case of grain boundaries (GBs) in metals, several experimental observations have revealed that an initially flat GB may facet into hill-and-valley structures with well-defined planes and with corners/edges connecting them. Herein, we present a diffuse interface model that is capable of accounting for strongly anisotropic GB properties and of capturing the formation of hill-and-valley morphologies. The hallmark of our approach is the ability to independently examine the various factors affecting GB faceting and subsequent facet coarsening. More specifically, our formulation incorporates higher-order expansions to account for the excess energy due to facet junctions and their non-local interactions. As a demonstration of the modeling capability, we consider the Σ5 ⟨001⟩ tilt GB in body-centered-cubic iron, where faceting along the {210} and {310} planes was experimentally observed. Atomistic calculations were utilized to determine the inclination-dependent GB energy, which was then used as an input to our model. Linear stability analysis and simulation results highlight the role of junction energy and associated non-local interactions in the resulting facet length scales. Broadly speaking, our modeling approach provides a general framework for examining the microstructural stability of polycrystalline systems with highly anisotropic GBs.

  7. Rapid effective trace-back capability value: a case study of foot-and-mouth in the Texas High Plains.

    PubMed

    Hagerman, Amy D; Ward, Michael P; Anderson, David P; Looney, J Chris; McCarl, Bruce A

    2013-07-01

In this study our aim was to value the benefits of rapid effective trace-back capability, based on a livestock identification system, in the event of a foot-and-mouth disease (FMD) outbreak. We simulated an FMD outbreak in the Texas High Plains, an area of high livestock concentration, beginning in a large feedlot. Disease spread was simulated under different time-dependent animal tracing scenarios. In the specific scenario modeled (incursion of FMD within a large feedlot, detection within 14 days, and 90% effective tracing), simulation suggested that control costs of the outbreak significantly increase if tracing does not occur until day 10 as compared to the baseline of tracing on day 2. In addition, control costs would be significantly increased if tracing effectiveness dropped to 30% as compared to the baseline of 90%. Results suggest potential benefits from rapid effective tracing in terms of reducing government control costs; however, a variety of other scenarios need to be explored before determining in which situations rapid effective trace-back capability is beneficial. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Advances in Scientific Balloon Thermal Modeling

    NASA Technical Reports Server (NTRS)

    Bohaboj, T.; Cathey, H. M., Jr.

    2004-01-01

The National Aeronautics and Space Administration's Balloon Program Office has long acknowledged that the accurate modeling of balloon performance and flight prediction is dependent on how well the balloon is thermally modeled. This ongoing effort is focused on developing accurate balloon thermal models that can be used to quickly predict balloon temperatures and balloon performance. The ability to model parametric changes is also a driver for this effort. This paper will present the most recent advances made in this area. This research effort continues to utilize the "Thermal Desktop" add-on to AutoCAD for the modeling. Recent advances have been made by using this analytical tool. A number of analyses have been completed to test the applicability of this tool to the problem with very positive results. Progressively detailed models have been developed to explore the capabilities of the tool as well as to provide guidance in model formulation. A number of parametric studies have been completed. These studies have varied the shape of the structure, material properties, environmental inputs, and model geometry. These studies concentrated on spherical "proxy models" in the initial development stages and then transitioned to the natural-shape zero pressure and super pressure balloons. An assessment of required model resolution has also been made. Model solutions have been cross-checked with known solutions via hand calculations. The comparison of these cases will also be presented. One goal is to develop analysis guidelines and an approach for modeling balloons for both simple first-order estimates and detailed full models. This paper presents the step-by-step advances made as part of this effort, along with capabilities, limitations, and lessons learned. Also presented are the plans for further thermal modeling work.

  9. An overview of the National Earthquake Information Center acquisition software system, Edge/Continuous Waveform Buffer

    USGS Publications Warehouse

    Patton, John M.; Ketchum, David C.; Guy, Michelle R.

    2015-11-02

    This document provides an overview of the capabilities, design, and use cases of the data acquisition and archiving subsystem at the U.S. Geological Survey National Earthquake Information Center. The Edge and Continuous Waveform Buffer software supports the National Earthquake Information Center’s worldwide earthquake monitoring mission in direct station data acquisition, data import, short- and long-term data archiving, data distribution, query services, and playback, among other capabilities. The software design and architecture can be configured to support acquisition and (or) archiving use cases. The software continues to be developed in order to expand the acquisition, storage, and distribution capabilities.

  10. Comprehensive modeling and control of flexible flapping wing micro air vehicles

    NASA Astrophysics Data System (ADS)

    Nogar, Stephen Michael

Flapping wing micro air vehicles hold significant promise due to the potential for improved aerodynamic efficiency, enhanced maneuverability and hover capability compared to fixed and rotary configurations. However, significant technical challenges exist due to the lightweight, highly integrated nature of the vehicle and coupling between the actuators, flexible wings and control system. Experimental and high fidelity analysis has demonstrated that aeroelastic effects can change the effective kinematics of the wing, reducing vehicle stability. However, many control studies for flapping wing vehicles do not consider these effects, and instead validate the control strategy with simple assumptions, including rigid wings, quasi-steady aerodynamics and no consideration of actuator dynamics. A control evaluation model that includes aeroelastic effects and actuator dynamics is developed. The structural model accounts for geometrically nonlinear behavior using an implicit condensation technique and the aerodynamic loads are found using a time accurate approach that includes quasi-steady, rotational, added mass and unsteady effects. Empirically based parameters in the model are fit using data obtained from a higher fidelity solver. The aeroelastic model and its ingredients are compared to experiments and computations using models of higher fidelity, and indicate reasonable agreement. The developed control evaluation model is implemented in a previously published, baseline controller that maintains stability using an asymmetric wingbeat, known as split-cycle, along with changing the flapping frequency and wing bias. The model-based controller determines the control inputs using a cycle-averaged, linear control design model, which assumes a rigid wing and no actuator dynamics. The introduction of unaccounted for dynamics significantly degrades the ability of the controller to track a reference trajectory, and in some cases destabilizes the vehicle.
This demonstrates the importance of considering coupled aeroelastic and actuator dynamics in closed-loop control of flapping wings. A controller is developed that decouples the normal form of the vehicle dynamics, which accounts for coupling of the forces and moments acting on the vehicle and enables enhanced tuning capabilities. This controller, using the same control design model as the baseline controller, stabilizes the system despite the uncertainty between the control design and evaluation models. The controller is able to stabilize cases with significant wing flexibility and limited actuator capabilities, despite a reduction in control effectiveness. Additionally, to achieve a minimally actuated vehicle, the wing bias mechanism is removed. Using the same control design methodology, increased performance is observed compared to the baseline controller. However, due to the dependence on the split-cycle mechanism to generate a pitching moment instead of wing bias, the controller is more susceptible to instability from wing flexibility and limited actuator capacity. This work highlights the importance of coupled dynamics in the design and control of flapping wing micro air vehicles. Future enhancements to this work should focus on the reduced order structural and aerodynamics models. Applications include using the developed dynamics model to evaluate other kinematics and control schemes, ultimately enabling improved vehicle and control design.

  11. The C-27J Spartan Procurement Program: A Case Study in USAF Sourcing Practices for National Security

    DTIC Science & Technology

    2012-06-15

Figure 2: Mobility System Utilization by MCRS Case ... into the viability of other missions. Mobility Capabilities and Requirements Study 2016: The Mobility Capabilities and Requirements Study 2016 (MCRS-16) ... the second since 9/11, and it was released in February 2010 using the programmed force in the 2009 President's Budget (PB09). The MCRS-16 Executive ...

  12. A comparison of analysis tools for predicting the inelastic cyclic response of cross-ply titanium matrix composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroupa, J.L.; Coker, D.; Neu, R.W.

    1996-12-31

Several micromechanical models that are currently being used for predicting the thermal and mechanical behavior of a cross-ply, [0/90], titanium matrix composite are evaluated. Six computer programs or methods are compared: (1) VISCOPLY; (2) METCAN; (3) FIDEP, an enhanced concentric cylinder model; (4) LISOL, a modified method of cells approach; (5) an elementary approach where the [90] ply is assumed to have the same properties as the matrix; and (6) a finite element method. Comparisons are made for the thermal residual stresses at room temperature resulting from processing, as well as for stresses and strains in two isothermal and two thermomechanical fatigue test cases. For each case, the laminate response of the models is compared to experimental behavior, while the responses of the constituents are compared among the models. The capability of each model to predict frequency effects, inelastic cyclic strain (hysteresis) behavior, and strain ratchetting with cycling is shown. The basis of formulation for the micromechanical models, the constitutive relationships used for the matrix and fiber, and the modeling technique of the [90] ply are all found to be important factors for determining the accurate behavior of the [0/90] composite.

  13. Kinematic models of the upper limb joints for multibody kinematics optimisation: An overview.

    PubMed

    Duprey, Sonia; Naaim, Alexandre; Moissenet, Florent; Begon, Mickaël; Chèze, Laurence

    2017-09-06

Soft tissue artefact (STA), i.e. the motion of the skin, fat and muscles gliding over the underlying bone, may lead to a marker position error reaching up to 8.7 cm in the particular case of the scapula. Multibody kinematics optimisation (MKO) is one of the most efficient approaches used to reduce STA. It consists in minimising the distance between the positions of experimental markers on a subject's skin and the simulated positions of the same markers embedded in a kinematic model. However, the efficiency of MKO relies directly on the chosen kinematic model. This paper proposes an overview of the different upper limb models available in the literature and a discussion of their applicability to MKO. The advantages of each joint model with respect to its biofidelity to functional anatomy are detailed for both the shoulder and forearm areas. The models' capabilities of personalisation and of adaptation to pathological cases are also discussed. Concerning model efficiency in terms of STA reduction in MKO algorithms, a lack of quantitative assessment in the literature is noted. As a priority, future studies should address the evaluation and quantification of STA reduction depending on upper limb joint constraints. Copyright © 2016 Elsevier Ltd. All rights reserved.
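The MKO principle, minimising the distance between measured marker positions and the positions of the same markers embedded in a kinematic model, can be sketched for the simplest possible case. This is a toy illustration under heavy assumptions: a single invented hinge joint, one marker, and a brute-force grid search in place of the proper multibody optimisers used in practice.

```python
# Toy MKO sketch: find the hinge angle whose model-predicted marker position
# best matches noisy measured markers, in the least-squares sense.
# The segment geometry and marker data below are invented for illustration.
import math

def model_marker(angle, length=0.3):
    """Marker on a rigid segment of given length rotating about the origin."""
    return (length * math.cos(angle), length * math.sin(angle))

def mko_hinge(measured_markers):
    """Grid-search the joint angle minimising the summed squared marker distance."""
    best_angle, best_cost = 0.0, float("inf")
    for k in range(3600):  # 0.1-degree grid over a full revolution
        a = k * 2.0 * math.pi / 3600
        mx, my = model_marker(a)
        cost = sum((mx - x) ** 2 + (my - y) ** 2 for x, y in measured_markers)
        if cost < best_cost:
            best_angle, best_cost = a, cost
    return best_angle

# Two noisy (STA-corrupted) observations of a marker whose true angle is ~30 deg:
obs = [(0.262, 0.155), (0.258, 0.146)]
angle = mko_hinge(obs)
print(round(math.degrees(angle), 1))
```

Real MKO solves this over whole kinematic chains with joint constraints, which is exactly why the choice of joint model matters as the abstract argues.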

  14. Capturing the Energy Absorbing Mechanisms of Composite Structures under Crash Loading

    NASA Astrophysics Data System (ADS)

    Wade, Bonnie

As fiber reinforced composite material systems become increasingly utilized in primary aircraft and automotive structures, the need to understand their contribution to the crashworthiness of the structure is of great interest to meet safety certification requirements. The energy absorbing behavior of a composite structure, however, is not easily predicted due to the great complexity of the failure mechanisms that occur within the material. Challenges arise both in the experimental characterization and in the numerical modeling of the material/structure combination. At present, there is no standardized test method to characterize the energy absorbing capability of composite materials to aid crashworthy structural design. In addition, although many commercial finite element analysis codes exist and offer a means to simulate composite failure initiation and propagation, these models are still under development and refinement. As more metallic structures are replaced by composite structures, the need for both experimental guidelines to characterize the energy absorbing capability of a composite structure, as well as guidelines for using numerical tools to simulate composite materials in crash conditions, has become a critical matter. This body of research addresses both the experimental characterization of the energy absorption mechanisms occurring in composite materials during crushing, as well as the numerical simulation of composite materials undergoing crushing. In the experimental investigation, the specific energy absorption (SEA) of a composite material system is measured using a variety of test element geometries, such as corrugated plates and tubes. Results from several crush experiments reveal that SEA is not a constant material property for laminated composites, and varies significantly with the geometry of the test specimen used.
The variation of SEA measured for a single material system requires that crush test data must be generated for a range of different test geometries in order to define the range of its energy absorption capability. Further investigation from the crush tests has led to the development of a direct link between geometric features of the crush specimen and its resulting SEA. Through micrographic analysis, distinct failure modes are shown to be guided by the geometry of the specimen, and subsequently are shown to directly influence energy absorption. A new relationship between geometry, failure mode, and SEA has been developed. This relationship has allowed for the reduction of the element-level crush testing requirement to characterize the composite material energy absorption capability. In the numerical investigation, the LS-DYNA composite material model MAT54 is selected for its suitability to model composite materials beyond failure determination, as required by crush simulation, and its capability to remain within the scope of ultimately using this model for large-scale crash simulation. As a result of this research, this model has been thoroughly investigated in depth for its capacity to simulate composite materials in crush, and results from several simulations of the element-level crush experiments are presented. A modeling strategy has been developed to use MAT54 for crush simulation which involves using the experimental data collected from the coupon- and element-level crush tests to directly calibrate the crush damage parameter in MAT54 such that it may be used in higher-level simulations. In addition, the source code of the material model is modified to improve upon its capability. The modifications include improving the elastic definition such that the elastic response to multi-axial load cases can be accurately portrayed simultaneously in each element, which is a capability not present in other composite material models. 
Modifications made to the failure determination and post-failure model have newly emphasized the post-failure stress degradation scheme rather than the failure criterion which is traditionally considered the most important composite material model definition for crush simulation. The modification efforts have also validated the use of the MAT54 failure criterion and post-failure model for crash modeling when its capabilities and limitations are well understood, and for this reason guidelines for using MAT54 for composite crush simulation are presented. This research has effectively (a) developed and demonstrated a procedure that defines a set of experimental crush results that characterize the energy absorption capability of a composite material system, (b) used the experimental results in the development and refinement of a composite material model for crush simulation, (c) explored modifying the material model to improve its use in crush modeling, and (d) provided experimental and modeling guidelines for composite structures under crush at the element-level in the scope of the Building Block Approach.
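The SEA metric at the heart of the experimental investigation, energy absorbed per unit crushed mass, can be illustrated with a short calculation. This is a hedged sketch, not the study's procedure: the function, the constant-force crush record, and the density and section values are all invented for illustration.

```python
# Illustrative SEA computation from a quasi-static crush test record.
# SEA = (area under the force-displacement curve) / (crushed mass).

def specific_energy_absorption(force_N, disp_m, density_kg_m3, area_m2):
    """Return SEA in J/kg.

    Energy absorbed is integrated with the trapezoidal rule; crushed mass is
    density * cross-sectional area * crush length.
    """
    energy = 0.0
    for i in range(1, len(force_N)):
        energy += 0.5 * (force_N[i] + force_N[i - 1]) * (disp_m[i] - disp_m[i - 1])
    crush_length = disp_m[-1] - disp_m[0]
    crushed_mass = density_kg_m3 * area_m2 * crush_length
    return energy / crushed_mass

# Made-up example: constant 50 kN crush force over 0.05 m of a 1e-4 m^2 section
# of a 1600 kg/m^3 laminate:
disp = [0.0, 0.01, 0.02, 0.03, 0.04, 0.05]
force = [50e3] * len(disp)
sea = specific_energy_absorption(force, disp, 1600.0, 1e-4)
print(round(sea / 1000.0, 1))  # -> 312.5 (kJ/kg)
```

The geometry dependence the study reports enters through the force history and the effective crushed section, which is why a single SEA number cannot characterize a laminate.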

  15. An Assessment of Current Fan Noise Prediction Capability

    NASA Technical Reports Server (NTRS)

    Envia, Edmane; Woodward, Richard P.; Elliott, David M.; Fite, E. Brian; Hughes, Christopher E.; Podboy, Gary G.; Sutliff, Daniel L.

    2008-01-01

In this paper, the results of an extensive assessment exercise carried out to establish the current state of the art for predicting fan noise at NASA are presented. Representative codes in the empirical, analytical, and computational categories were exercised and assessed against a set of benchmark acoustic data obtained from wind tunnel tests of three model scale fans. The chosen codes were ANOPP, representing an empirical capability, RSI, representing an analytical capability, and LINFLUX, representing a computational aeroacoustics capability. The selected benchmark fans cover a wide range of fan pressure ratios and fan tip speeds, and are representative of modern turbofan engine designs. The assessment results indicate that the ANOPP code can predict the fan noise spectrum to within 4 dB of the measurement uncertainty band on a third-octave basis for the low and moderate tip speed fans, except at extreme aft emission angles. The RSI code can predict the fan broadband noise spectrum to within 1.5 dB of the experimental uncertainty band provided the rotor-only contribution is taken into account. The LINFLUX code can predict interaction tone power levels to within experimental uncertainties at low and moderate fan tip speeds, but could deviate by as much as 6.5 dB outside the experimental uncertainty band at the highest tip speeds in some cases.

  16. Shape optimization of three-dimensional stamped and solid automotive components

    NASA Technical Reports Server (NTRS)

    Botkin, M. E.; Yang, R.-J.; Bennett, J. A.

    1987-01-01

The shape optimization of realistic, 3-D automotive components is discussed. The integration of the major parts of the total process (modeling, mesh generation, finite element and sensitivity analysis, and optimization) is stressed. Stamped components and solid components are treated separately. For stamped parts a highly automated capability was developed. The problem description is based upon a parameterized boundary design element concept for the definition of the geometry. Automatic triangulation and adaptive mesh refinement are used to provide an automated analysis capability which requires only boundary data and takes into account the sensitivity of the solution accuracy to boundary shape. For solid components a general extension of the 2-D boundary design element concept has not been achieved. In this case, the parameterized surface shape is provided using a generic modeling concept based upon isoparametric mapping patches, which also serves as the mesh generator. Emphasis is placed upon the coupling of optimization with a commercially available finite element program. To do this it is necessary to modularize the program architecture and obtain shape design sensitivities using the material derivative approach so that only boundary solution data are needed.

  17. Capability of simultaneous Rayleigh LiDAR and O2 airglow measurements in exploring the short period wave characteristics

    NASA Astrophysics Data System (ADS)

    Taori, Alok; Raghunath, Karnam; Jayaraman, Achuthan

We use a combination of simultaneous measurements made with a Rayleigh lidar and an O2 airglow monitor to extend the lidar investigation capability to a higher altitude range. We feed instantaneous O2 airglow temperatures, instead of model values, in at the top altitude of the subsequent integration method of temperature retrieval from the Rayleigh lidar backscattered signals. With this method, the errors in the lidar temperature estimates converge at higher altitudes, indicating better altitude coverage compared to regular methods in which model temperatures are used instead of real-time measurements. This improvement enables the measurement of short-period waves at upper mesospheric altitudes (~90 km). With two case studies, we show that above 60 km the amplitudes of some short-period waves increase drastically, while other short-period waves show either damping or saturation. We argue that with such combined measurements, significant and cost-effective progress can be made in understanding the short-period wave processes that are important for coupling across the different atmospheric regions.
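The retrieval improvement described here, seeding the top of the downward hydrostatic integration with a measured rather than modeled temperature, can be sketched as follows. This is an illustrative implementation of the standard Rayleigh lidar (Hauchecorne-Chanin type) retrieval under simplifying assumptions (constant gravity and mean molecular mass); the function name and constants are ours, not the authors'.

```python
# Top-down temperature retrieval from a relative density profile:
# n(z) k_B T(z) = n(z_top) k_B T(z_top) + M g * integral_z^z_top n dz,
# i.e. hydrostatic balance plus the ideal gas law, integrated downward.

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_AIR = 4.81e-26     # mean molecular mass of air, kg (assumed constant)
G = 9.5              # gravity near the mesopause, m/s^2 (assumed constant)

def retrieve_temperature(n, z, t_top):
    """Integrate downward from the top altitude.

    n[i]  : relative number density at altitude z[i] (ascending order, m)
    t_top : seed temperature at z[-1], e.g. from O2 airglow instead of a model
    """
    t = [0.0] * len(n)
    t[-1] = t_top
    for i in range(len(n) - 2, -1, -1):
        dz = z[i + 1] - z[i]
        integral = 0.5 * (n[i] + n[i + 1]) * dz  # trapezoidal layer integral
        t[i] = (n[i + 1] * t[i + 1] + (M_AIR * G / K_B) * integral) / n[i]
    return t
```

Because the seed error is divided by a rapidly growing density as the integration descends, a better top boundary value makes the estimates converge sooner, which is the effect the abstract reports.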

  18. Overview of Experimental Capabilities - Supersonics

    NASA Technical Reports Server (NTRS)

    Banks, Daniel W.

    2007-01-01

    This viewgraph presentation gives an overview of experimental capabilities applicable to the area of supersonic research. The contents include: 1) EC Objectives; 2) SUP.11: Elements; 3) NRA; 4) Advanced Flight Simulator Flexible Aircraft Simulation Studies; 5) Advanced Flight Simulator Flying Qualities Guideline Development for Flexible Supersonic Transport Aircraft; 6) Advanced Flight Simulator Rigid/Flex Flight Control; 7) Advanced Flight Simulator Rapid Sim Model Exchange; 8) Flight Test Capabilities Advanced In-Flight Infrared (IR) Thermography; 9) Flight Test Capabilities In-Flight Schlieren; 10) Flight Test Capabilities CLIP Flow Calibration; 11) Flight Test Capabilities PFTF Flowfield Survey; 12) Ground Test Capabilities Laser-Induced Thermal Acoustics (LITA); 13) Ground Test Capabilities Doppler Global Velocimetry (DGV); 14) Ground Test Capabilities Doppler Global Velocimetry (DGV); and 15) Ground Test Capabilities EDL Optical Measurement Capability (PIV) for Rigid/Flexible Decelerator Models.

  19. A GIS-based approach for comparative analysis of potential fire risk assessment

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Hu, Lieqiu; Liu, Huiping

    2007-06-01

Urban fires are one of the most important sources of property loss and human casualties, and it is therefore necessary to assess potential fire risk with urban community safety in mind. Two evaluation models are proposed, both integrated with GIS. One is a single-factor model concerning the accessibility of fire passages; the other is a grey clustering approach based on a multifactor system. In the latter model, fourteen factors are introduced and divided into four categories: security management, evacuation facilities, construction resistance, and fire-fighting capability. A case study of the Beijing Normal University campus is presented to describe the potential risk assessment models in detail. A comparative analysis of the two models is carried out to validate their accuracy; the results are approximately consistent with each other. Moreover, modeling with GIS improves the efficiency of potential risk assessment.
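The grey clustering step can be sketched as follows. This is a minimal, hypothetical illustration: the whitenization functions, class boundaries, weights, and the use of only four factor scores are our assumptions, not the study's actual fourteen-factor formulation.

```python
# Grey clustering sketch: each grey class (low/mid/high risk) has a
# whitenization weight function; a unit's clustering coefficient for a class
# is the weighted sum of memberships over its normalized factor scores.

def whiten_low(x):    # membership in the "low risk" grey class
    return max(0.0, min(1.0, (0.4 - x) / 0.4))

def whiten_mid(x):    # triangular membership peaking at 0.5
    return max(0.0, 1.0 - abs(x - 0.5) / 0.3)

def whiten_high(x):   # membership in the "high risk" grey class
    return max(0.0, min(1.0, (x - 0.6) / 0.4))

def grey_cluster(scores, weights):
    """Return (class_name, coefficient) for factor scores normalized to [0, 1]."""
    classes = {"low": whiten_low, "mid": whiten_mid, "high": whiten_high}
    coeffs = {
        name: sum(w * f(s) for s, w in zip(scores, weights))
        for name, f in classes.items()
    }
    return max(coeffs.items(), key=lambda kv: kv[1])

# Four illustrative category scores (e.g. management, evacuation, resistance,
# fire fighting) with weights summing to 1:
label, coeff = grey_cluster([0.8, 0.7, 0.9, 0.6], [0.3, 0.2, 0.3, 0.2])
print(label)  # -> high
```

The unit is assigned to the class with the largest clustering coefficient; in a GIS this label can then be mapped per building or block.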

  20. Active and Passive 3D Vector Radiative Transfer with Preferentially-Aligned Ice Particles

    NASA Astrophysics Data System (ADS)

    Adams, I. S.; Munchak, S. J.; Pelissier, C.; Kuo, K. S.; Heymsfield, G. M.

    2017-12-01

    To support the observation of clouds and precipitation using combinations of radars and radiometers, a forward model capable of representing diverse sensing geometries for active and passive instruments is necessary for correctly interpreting and consistently combining multi-sensor measurements from ground-based, airborne, and spaceborne platforms. As such, the Atmospheric Radiative Transfer Simulator (ARTS) uses Monte Carlo integration to produce radar reflectivities and radiometric brightness temperatures for three-dimensional cloud and precipitation input fields. This radiative transfer framework is capable of efficiently sampling Gaussian antenna beams and fully accounting for multiple scattering. By relying on common ray-tracing tools, gaseous absorption models, and scattering properties, the model reproduces accurate and consistent radar and radiometer observables. While such a framework is an important component for simulating remote sensing observables, the key driver for self-consistent radiative transfer calculations of clouds and precipitation is scattering data. Research over the past decade has demonstrated that spheroidal models of frozen hydrometeors cannot accurately reproduce all necessary scattering properties at all desired frequencies. The discrete dipole approximation offers flexibility in calculating scattering for arbitrary particle geometries, but at great computational expense. When considering scattering for certain pristine ice particles, the Extended Boundary Condition Method, or T-Matrix, is much more computationally efficient; however, convergence for T-Matrix calculations fails at large size parameters and high aspect ratios. To address these deficiencies, we implemented the Invariant Imbedding T-Matrix Method (IITM). A brief overview of ARTS and IITM will be given, including details for handling preferentially-aligned hydrometeors. 
Examples highlighting the performance of the model for simulating space-based and airborne measurements will be offered, and some case studies showing the response to particle type and orientation will be presented. Simulations of polarized radar (Z, LDR, ZDR) and radiometer (Stokes I and Q) quantities will be used to demonstrate the capabilities of the model.

  1. Application of a multi-block CFD code to investigate the impact of geometry modeling on centrifugal compressor flow field predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hathaway, M.D.; Wood, J.R.

    1997-10-01

CFD codes capable of utilizing multi-block grids provide the capability to analyze the complete geometry of centrifugal compressors. Attendant with this increased capability is potentially increased grid setup time and more computational overhead, with the resultant increase in wall clock time to obtain a solution. If the increase in difficulty of obtaining a solution significantly improves the solution over that obtained by modeling the features of the tip clearance flow or the typical bluntness of a centrifugal compressor's trailing edge, then the additional burden is worthwhile. However, if the additional information obtained is of marginal use, then modeling of certain features of the geometry may provide reasonable solutions for designers to make comparative choices when pursuing a new design. In this spirit a sequence of grids was generated to study the relative importance of modeling versus detailed gridding of the tip gap and blunt trailing edge regions of the NASA large low-speed centrifugal compressor, for which considerable detailed internal laser anemometry data are available for comparison. The results indicate: (1) There is no significant difference in predicted tip clearance mass flow rate whether the tip gap is gridded or modeled. (2) Gridding rather than modeling the trailing edge results in better predictions of some flow details downstream of the impeller, but otherwise appears to offer no great benefits. (3) The pitchwise variation of absolute flow angle decreases rapidly up to 8% impeller radius ratio and much more slowly thereafter. Although some improvements in prediction of flow field details are realized as a result of analyzing the actual geometry, there is no clear consensus that any of the grids investigated produced superior results in every case when compared to the measurements.
However, if a multi-block code is available, it should be used, as it has the propensity for enabling better predictions than a single-block code.

  2. High Performance Hydrometeorological Modeling, Land Data Assimilation and Parameter Estimation with the Land Information System at NASA/GSFC

    NASA Astrophysics Data System (ADS)

    Peters-Lidard, C. D.; Kumar, S. V.; Santanello, J. A.; Tian, Y.; Rodell, M.; Mocko, D.; Reichle, R.

    2008-12-01

The Land Information System (LIS; http://lis.gsfc.nasa.gov; Kumar et al., 2006; Peters-Lidard et al., 2007) is a flexible land surface modeling framework that has been developed with the goal of integrating satellite- and ground-based observational data products and advanced land surface modeling techniques to produce optimal fields of land surface states and fluxes. The LIS software was the co-winner of NASA's 2005 Software of the Year award. LIS facilitates the integration of observations from Earth-observing systems and predictions and forecasts from Earth System and Earth science models into the decision-making processes of partnering agency and national organizations. Due to its flexible software design, LIS can serve both as a Problem Solving Environment (PSE) for hydrologic research to enable accurate global water and energy cycle predictions, and as a Decision Support System (DSS) to generate useful information for application areas including disaster management, water resources management, agricultural management, numerical weather prediction, air quality and military mobility assessment. LIS has evolved from two earlier efforts - the North American Land Data Assimilation System (NLDAS; Mitchell et al. 2004) and the Global Land Data Assimilation System (GLDAS; Rodell et al. 2004) - that focused primarily on improving numerical weather prediction skill by improving the characterization of land surface conditions. Both of these systems now use specific configurations of the LIS software in their current implementations. LIS not only consolidates the capabilities of these two systems, but also enables a much larger variety of configurations with respect to horizontal spatial resolution, input datasets, and choice of land surface model through 'plugins'. In addition to these capabilities, LIS has also been demonstrated for parameter estimation (Peters-Lidard et al., 2008; Santanello et al., 2007) and data assimilation (Kumar et al., 2008).
Examples and case studies demonstrating the capabilities and impacts of LIS for hydrometeorological modeling, land data assimilation and parameter estimation will be presented.

  3. Integrated spatiotemporal characterization of dust sources and outbreaks in Central and East Asia

    NASA Astrophysics Data System (ADS)

    Darmenova, Kremena T.

    The potential of atmospheric dust aerosols to modify the Earth's environment and climate has been recognized for some time. However, predicting the diverse impact of dust has several significant challenges. One is to quantify the complex spatial and temporal variability of dust burden in the atmosphere. Another is to quantify the fraction of dust originating from human-made sources. This thesis focuses on the spatiotemporal characterization of sources and dust outbreaks in Central and East Asia by integrating ground-based data, satellite multisensor observations, and modeling. A new regional dust modeling system capable of operating over a span of scales was developed. The modeling system consists of a dust module DuMo, which incorporates several dust emission schemes of different complexity, and the PSU/NCAR mesoscale model MM5, which offers a variety of physical parameterizations and flexible nesting capability. The modeling system was used to perform for the first time a comprehensive study of the timing, duration, and intensity of individual dust events in Central and East Asia. Determining the uncertainties caused by the choice of model physics, especially the boundary layer parameterization, and the dust production scheme was the focus of our study. Implications to assessments of the anthropogenic dust fraction in these regions were also addressed. Focusing on Spring 2001, an analysis of routine surface meteorological observations and satellite multi-sensor data was carried out in conjunction with modeling to determine the extent to which integrated data set can be used to characterize the spatiotemporal distribution of dust plumes at a range of temporal scales, addressing the active dust sources in China and Mongolia, mid-range transport and trans-Pacific, long-range transport of dust outbreaks on a case-by-case basis. 
This work demonstrates that adequate and consistent characterization of individual dust events is central to establishing a reliable climatology, ultimately leading to improved assessments of dust impacts on the environment and climate. This will also help to identify the appropriate temporal and spatial scales for adequate intercomparison between model results and observational data as well as for developing an integrated analysis methodology for dust studies.

  4. An improved wrapper-based feature selection method for machinery fault diagnosis

    PubMed Central

    2017-01-01

    A major issue in machinery fault diagnosis using vibration signals is that it is over-reliant on personnel knowledge and experience in interpreting the signal. Thus, machine learning has been adapted for machinery fault diagnosis. The quantity and quality of the input features, however, influence the fault classification performance. Feature selection plays a vital role in selecting the most representative feature subset for the machine learning algorithm. However, in the wrapper-based feature selection (WFS) method, a trade-off between the capability to select the best feature subset and the computational effort required is inevitable. This paper proposes an improved WFS technique, which is then integrated with a support vector machine (SVM) classifier as a complete fault diagnosis system for a rolling element bearing case study. The bearing vibration dataset made available by the Case Western Reserve University Bearing Data Centre was processed using the proposed WFS, and its performance has been analysed and discussed. The results reveal that the proposed WFS secures the best feature subset with lower computational effort by eliminating the redundancy of re-evaluation. The proposed WFS has therefore been found to be capable of carrying out feature selection tasks efficiently. PMID:29261689
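    A wrapper-based search of the kind described above can be sketched in a few lines: a greedy forward search in which each candidate feature subset is scored by the accuracy of the wrapped classifier itself. The tiny dataset and the leave-one-out nearest-centroid classifier below are illustrative stand-ins for the paper's bearing vibration features and SVM; all names and numbers are assumptions, not the authors' implementation.

```python
# Minimal sketch of wrapper-based feature selection (WFS): greedy forward
# search, scoring each candidate subset by the wrapped classifier's accuracy.

def nearest_centroid_accuracy(X, y, subset):
    """Leave-one-out accuracy of a nearest-centroid classifier on `subset`."""
    correct = 0
    for i in range(len(X)):
        centroids = {}
        for label in set(y):
            rows = [X[j] for j in range(len(X)) if y[j] == label and j != i]
            centroids[label] = [sum(r[f] for r in rows) / len(rows)
                                for f in subset]
        dists = {label: sum((X[i][f] - c[k]) ** 2
                            for k, f in enumerate(subset))
                 for label, c in centroids.items()}
        if min(dists, key=dists.get) == y[i]:
            correct += 1
    return correct / len(X)

def forward_wfs(X, y, n_select):
    """Greedy forward wrapper search over feature indices."""
    selected, remaining = [], list(range(len(X[0])))
    while len(selected) < n_select:
        best = max(remaining,
                   key=lambda f: nearest_centroid_accuracy(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: feature 1 separates the classes; features 0 and 2 are noise.
X = [[0.9, 0.1, 0.5], [0.8, 0.2, 0.4], [0.7, 0.9, 0.6],
     [0.9, 1.0, 0.5], [0.8, 0.15, 0.55], [0.75, 0.95, 0.45]]
y = [0, 0, 1, 1, 0, 1]

print(forward_wfs(X, y, n_select=1))  # → [1], the informative feature
```

The "redundancy of re-evaluation" the paper eliminates would correspond here to caching scores of already-evaluated subsets rather than recomputing them at each greedy step.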

  5. Toxic release consequence analysis tool (TORCAT) for inherently safer design plant.

    PubMed

    Shariff, Azmi Mohd; Zaini, Dzulkarnain

    2010-10-15

    Many major accidents involving toxic releases in the past have caused numerous fatalities, such as the tragedy of the MIC release in Bhopal, India (1984). One approach is to use the inherently safer design technique, which applies inherent safety principles to eliminate or minimize accidents rather than control the hazard. This technique is best implemented in the preliminary design stage, where the consequences of a toxic release can be evaluated and necessary design improvements can be implemented to eliminate or minimize accidents to as low as reasonably practicable (ALARP) without resorting to costly protective systems. However, no commercial tool with such capability is currently available. This paper reports preliminary findings on the development of a prototype tool for consequence analysis and design improvement via the inherent safety principle, which integrates a process design simulator with a toxic release consequence analysis model. Consequence analyses based on worst-case scenarios during the process flowsheeting stage were conducted as case studies. The preliminary findings show that the toxic release consequence analysis tool (TORCAT) is capable of eliminating or minimizing potential toxic release accidents by adopting the inherent safety principle early in the preliminary design stage. 2010 Elsevier B.V. All rights reserved.

  6. Thermal modeling of grinding for process optimization and durability improvements

    NASA Astrophysics Data System (ADS)

    Hanna, Ihab M.

    Both thermal and mechanical aspects of the grinding process are investigated in detail in an effort to predict grinding-induced residual stresses. An existing thermal model is used as a foundation for computing heat partitions and temperatures in surface grinding. By numerically processing data from IR temperature measurements of the grinding zone, characterizations are made of the grinding zone heat flux. It is concluded that the typical heat flux profile in the grinding zone is triangular in shape, supporting this often-used assumption found in the literature. Further analyses of the computed heat flux profiles have revealed that actual grinding zone contact lengths exceed geometric contact lengths by an average of 57% for the cases considered. By integrating the resulting heat flux profiles, workpiece energy partitions are computed for several cases of dry conventional grinding of hardened steel. The average workpiece energy partition for the cases considered was 37%. In an effort to more accurately predict grinding zone temperatures and heat fluxes, refinements are made to the existing thermal model. These include consideration of contact length extensions due to local elastic deformations, variation of the assumed contact area ratio as a function of grinding process parameters, consideration of the coolant's latent heat of vaporization and its effect on heat transfer beyond the coolant boiling point, and incorporation of coolant-workpiece convective heat flux effects outside the grinding zone. The model refinements accounting for contact length extensions and process-dependent contact area ratios yield excellent agreement with IR temperature measurements over a wide range of grinding conditions. By accounting for latent heat of vaporization effects, grinding zone temperature profiles are shown to be capable of reproducing measured profiles found in the literature for cases on the verge of thermal surge conditions.
Computed peak grinding zone temperatures for the aggressive grinding examples given are 30-50% lower than those computed using the existing thermal model formulation. By accounting for convective heat transfer effects outside the grinding zone, it is shown that while surface temperatures in the wake of the grinding zone may be significantly affected under highly convective conditions, computed residual stresses are less sensitive to such conditions. Numerical models are used to evaluate both thermally and mechanically induced stress fields in an elastic workpiece, while finite element modeling is used to evaluate residual stresses for workpieces with elastic-plastic material properties. Modeling of mechanical interactions at the local grit-workpiece length scale is used to reproduce the often-measured effect of compressive surface residual stress followed by a subsurface tensile peak. The model is shown to be capable of reproducing trends found in the literature of surface residual stresses that are compressive for low-temperature grinding conditions, with surface stresses increasing linearly and becoming tensile with increasing temperatures. Further modifications to the finite element model are made to allow for transiently varying inputs for more complicated grinding processes of industrial components such as automotive cam lobes.

  7. Numerical model (switchable/dual model) of the human head for rigid body and finite elements applications.

    PubMed

    Tabacu, Stefan

    2015-01-01

    In this paper, a methodology for the development and validation of a numerical model of the human head using generic procedures is presented. All required steps, starting with model generation, model validation, and applications, will be discussed. The proposed model may be considered a dual one due to its capability to switch from a deformable to a rigid body according to the application's requirements. The first step is to generate the numerical model of the human head using geometry files or medical images. The stiffness and damping required for the elastic connection used in the rigid body model are identified by performing a natural frequency analysis. The presented applications for model validation are related to impact analysis. The first case is based on Nahum's experiments (Nahum and Smith 1970): pressure data are evaluated and a pressure map is generated using the results from discrete elements. For the second case, the relative displacement between the brain and the skull is evaluated according to Hardy's experiments (Hardy WH, Foster CD, Mason MJ, Yang KH, King A, Tashman S. 2001. Investigation of head injury mechanisms using neutral density technology and high-speed biplanar X-ray. Stapp Car Crash J. 45:337-368, SAE Paper 2001-22-0016). The main objective is to validate the rigid model as a quick and versatile tool for acquiring the input data for specific brain analyses.

  8. Flow adjustment inside homogeneous canopies after a leading edge – An analytical approach backed by LES

    DOE PAGES

    Kroniger, Konstantin; Banerjee, Tirtha; De Roo, Frederik; ...

    2017-10-06

    A two-dimensional analytical model for describing the mean flow behavior inside a vegetation canopy after a leading edge in neutral conditions was developed and tested by means of large eddy simulations (LES) employing the LES code PALM. The analytical model is developed for the region directly after the canopy edge, the adjustment region, where one-dimensional canopy models fail due to the sharp change in roughness. The derivation of this adjustment region model is based on an analytic solution of the two-dimensional Reynolds-averaged Navier–Stokes equations in neutral conditions for a canopy with constant plant area density (PAD). The main assumptions for solving the governing equations are the separability of the velocity components with respect to the spatial variables and the neglect of the Reynolds stress gradients. These two assumptions are verified by means of LES. To determine the emerging model parameters, a simultaneous fitting scheme was applied to the velocity and pressure data of a reference LES simulation. Furthermore, a sensitivity analysis of the adjustment region model, equipped with the previously calculated parameters, was performed by varying the three relevant lengths, the canopy height (h), the canopy length, and the adjustment length (Lc), in additional LES. Although the model parameters are, in general, functions of h/Lc, it was found that the model is capable of predicting the flow quantities in various cases when using constant parameters. Subsequently, the adjustment region model is combined with the one-dimensional model of Massman, which is applicable to the interior of the canopy, to attain an analytical model capable of describing the mean flow for the full canopy domain. Finally, the model is tested against an analytical model based on a linearization approach.

  10. A Highly Capable Year 6 Student's Response to a Challenging Mathematical Task

    ERIC Educational Resources Information Center

    Livy, Sharyn; Holmes, Marilyn; Ingram, Naomi; Linsell, Chris; Sullivan, Peter

    2016-01-01

    Highly capable mathematics students are not usually considered strugglers. This paper reports on a case study of a Year 6 student, Debbie, her response to a lesson, and her learning involving a challenging mathematical task. Debbie, usually a highly capable student, struggled to complete a challenging mathematical task by herself, but as the…

  11. Representative Structural Element - A New Paradigm for Multi-Scale Structural Modeling

    DTIC Science & Technology

    2016-07-05

    developed by NASA Glenn Research Center based on Aboudi's micromechanics theories [5] that provides a wide range of capabilities for modeling ... to use appropriate models for related problems based on the capability of corresponding approaches. Moreover, the analyses will give a general ... interface of heterogeneous materials but also help engineers to use appropriate models for related problems.

  12. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    NASA Astrophysics Data System (ADS)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

    Using data from 237 Jiangsu logistics firms, this paper empirically studies the relationships among organizational learning capability, business model innovation, and strategic flexibility. The results show the following: organizational learning capability has a positive impact on business model innovation performance; strategic flexibility plays a mediating role in the relationship between organizational learning capability and business model innovation; and the interaction among strategic flexibility, explorative learning, and exploitative learning plays a significant role in both radical and incremental business model innovation.

  13. Impact of using scatterometer and altimeter data on storm surge forecasting

    NASA Astrophysics Data System (ADS)

    Bajo, Marco; De Biasio, Francesco; Umgiesser, Georg; Vignudelli, Stefano; Zecchetto, Stefano

    2017-05-01

    Satellite data are rarely used in storm surge models because of the lack of established methodologies. Nevertheless, they can provide useful information on surface wind and sea level, which can potentially improve the forecast. In this paper, satellite wind data are used to correct the bias of wind originating from a global atmospheric model, while satellite sea level data are used to improve the initial conditions of the model simulations. In a first step, the capability of global winds (biased and unbiased) to adequately force a storm surge model is assessed against that of a high-resolution local wind. Then, the added value of direct assimilation of satellite altimeter data in the storm surge model is tested. Eleven storm surge events, recorded in Venice from 2008 to 2012, are simulated using different configurations of wind forcing and altimeter data assimilation. Focusing on the maximum surge peak, results show that the relative error, averaged over the eleven cases considered, decreases from 13% to 7% when using both the unbiased wind and assimilating the altimeter data, while, if the high-resolution local wind is used to force the hydrodynamic model, the altimeter data assimilation reduces the error from 9% to 6%. Yet, the overall capability to reproduce the surge in the first day of forecast, measured by the correlation and the rms error, improves only with the use of the unbiased global wind and not with the use of the high-resolution local wind and altimeter data assimilation.

  14. Latino Families' Experiences with Autism Services: Disparities, Capabilities, and Occupational Justice

    PubMed Central

    Angell, Amber M.; Frank, Gelya; Solomon, Olga

    2016-01-01

    Rationale This article examines six cases of publicly-funded Applied Behavior Analysis (ABA) therapy for Latino children with autism spectrum disorder (ASD) in order to contribute to thinking about occupational justice. Objective We consider in each case 1) how the families' experiences can be understood occupationally; 2) how ABA affected the functionings and capabilities of the children and their families; and 3) how the parents' accounts relate to occupational justice. Methodology This is an ethnographic study of six Latino families of children with ASD in Los Angeles County. Findings All families were offered ABA for their children, but five families experienced occupational challenges leading them to insist on modifications of ABA or opt out of the service. Conclusion Applying the capabilities approach can help to operationalize the concept of occupational justice as a tool to evaluate social policy across cases. PMID:27585604

  15. On the Conditioning of Machine-Learning-Assisted Turbulence Modeling

    NASA Astrophysics Data System (ADS)

    Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng

    2017-11-01

    Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model the Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By demonstrating this improvement in the prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the demand for predictive capability of turbulence models in real applications.

  16. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    NASA Astrophysics Data System (ADS)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized, and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system, taking into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization, considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system, by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space), based on the developed surrogate model, that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space.
The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements, such as maximizing the dimple height while minimizing the dimple lower surface area.
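    The core of the capability-space idea above can be sketched briefly: a surrogate model maps a key control characteristic to a key performance indicator, and Monte Carlo sampling of the input's stochastic variation estimates the process fallout rate against a requirement. The quadratic surrogate and every number below are illustrative assumptions, not the paper's MARS model or experimental data.

```python
# Hedged sketch: estimate process fallout rate from a surrogate model under
# stochastic input variation (the essence of a process capability space).
import random

def surrogate_height(power_w):
    """Hypothetical surrogate: dimple height (mm) as a quadratic in laser power (W)."""
    return -0.5 + 0.004 * power_w - 0.0000018 * power_w ** 2

random.seed(0)
nominal_power, power_sd = 900.0, 25.0  # assumed process variation of the input
spec_min = 1.60                        # mm, hypothetical lower spec limit

# Monte Carlo: propagate input variation through the surrogate.
samples = [random.gauss(nominal_power, power_sd) for _ in range(100_000)]
fallout = sum(surrogate_height(p) < spec_min for p in samples) / len(samples)
print("estimated fallout rate: %.4f" % fallout)
```

Sweeping `nominal_power` over a grid and keeping only settings whose fallout rate stays below a target would trace out the feasible region of the capability space, which is then the domain for parameter optimization.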

  17. TARANIS XGRE and IDEE detection capability of terrestrial gamma-ray flashes and associated electron beams

    NASA Astrophysics Data System (ADS)

    Sarria, David; Lebrun, Francois; Blelly, Pierre-Louis; Chipaux, Remi; Laurent, Philippe; Sauvaud, Jean-Andre; Prech, Lubomir; Devoto, Pierre; Pailot, Damien; Baronick, Jean-Pierre; Lindsey-Clark, Miles

    2017-07-01

    With a launch expected in 2018, the TARANIS microsatellite is dedicated to the study of transient phenomena observed in association with thunderstorms. On board the spacecraft, XGRE and IDEE are two instruments dedicated to studying terrestrial gamma-ray flashes (TGFs) and associated terrestrial electron beams (TEBs). XGRE can detect electrons (energy range: 1 to 10 MeV) and X- and gamma-rays (energy range: 20 keV to 10 MeV) with a very high counting capability (about 10 million counts per second) and the ability to discriminate one type of particle from another. The IDEE instrument is focused on electrons in the 80 keV to 4 MeV energy range, with the ability to estimate their pitch angles. Monte Carlo simulations of the TARANIS instruments, using a preliminary model of the spacecraft, allow sensitive area estimates for both instruments. This leads to an averaged effective area of 425 cm2 for XGRE, used to detect X- and gamma-rays from TGFs, and the combination of XGRE and IDEE gives an average effective area of 255 cm2 which can be used to detect electrons/positrons from TEBs. We then compare these performances to RHESSI, AGILE and Fermi GBM, using data extracted from literature for the TGF case and with the help of Monte Carlo simulations of their mass models for the TEB case. Combining this data with the help of the MC-PEPTITA Monte Carlo simulations of TGF propagation in the atmosphere, we build a self-consistent model of the TGF and TEB detection rates of RHESSI, AGILE and Fermi. It can then be used to estimate that TARANIS should detect about 200 TGFs yr-1 and 25 TEBs yr-1.

  18. Magnetopause Standoff Position Changes and Geosynchronous Orbit Crossings: Models and Observations

    NASA Astrophysics Data System (ADS)

    Collado-Vega, Y. M.; Rastaetter, L.; Sibeck, D. G.

    2017-12-01

    The Earth's magnetopause is the boundary that largely separates the solar wind from the Earth's magnetosphere. Its location has been studied and estimated via simulation models, observational data, and empirical models. This research aims to study changes in the magnetopause standoff location under different solar wind conditions using a combination of all these methods. We will use the Run-On-Request capabilities within the MHD models available from the Community Coordinated Modeling Center (CCMC) at NASA Goddard Space Flight Center, specifically the BATS-R-US (SWMF), OpenGGCM, LFM, and GUMICS models. The magnetopause standoff position prediction and response time to solar wind changes will then be compared to results from available empirical models (e.g. Shue et al. 1998) and to magnetopause crossing observations from the THEMIS, Cluster, Geotail, and MMS missions. We will also use times of extreme solar wind conditions during which magnetopause crossings have been observed by the GOES satellites. Rigorous analysis and comparison of observations and empirical models is critical in determining magnetosphere dynamics for model validation. This research also goes hand in hand with the efforts of the working group at the CCMC/LWS International Forum for Space Weather Capabilities Assessment workshop, which aims to analyze different events to define metrics for model-data comparison. Preliminary results of this research show that there are some discrepancies between the MHD models' standoff positions of the dayside magnetopause for the same solar wind conditions, which include an increase in solar wind dynamic pressure and a step function in the IMF Bz component. In cases of nominal solar wind conditions, the models mostly agree with the observational data from the different satellite missions.

  19. Coupled Stochastic Time-Inverted Lagrangian Transport/Weather Forecast and Research/Vegetation Photosynthesis and Respiration Model. Part II; Simulations of Tower-Based and Airborne CO2 Measurements

    NASA Technical Reports Server (NTRS)

    Eluszkiewicz, Janusz; Nehrkorn, Thomas; Wofsy, Steven C.; Matross, Daniel; Gerbig, Christoph; Lin, John C.; Freitas, Saulo; Longo, Marcos; Andrews, Arlyn E.; Peters, Wouter

    2007-01-01

    This paper evaluates simulations of atmospheric CO2 measured in 2004 at continental surface and airborne receptors, intended to test the capability to use data with high temporal and spatial resolution for analyses of carbon sources and sinks at regional and continental scales. The simulations were performed using the Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by the Weather Forecast and Research (WRF) model, and linked to surface fluxes from the satellite-driven Vegetation Photosynthesis and Respiration Model (VPRM). The simulations provide detailed representations of hourly CO2 tower data and reproduce the shapes of airborne vertical profiles with high fidelity. WRF meteorology gives superior model performance compared with standard meteorological products, and the impact of including WRF convective mass fluxes in the STILT trajectory calculations is significant in individual cases. Important biases in the simulation are associated with the nighttime CO2 build-up and subsequent morning transition to convective conditions, and with errors in the advected lateral boundary condition. Comparison of STILT simulations driven by the WRF model against those driven by the Brazilian variant of the Regional Atmospheric Modeling System (BRAMS) shows that model-to-model differences are smaller than those between an individual transport model and observations, pointing to systematic errors in the simulated transport. Future developments in the WRF model's data assimilation capabilities, basic research into the fundamental aspects of trajectory calculations, and intercomparison studies involving other transport models are possible avenues for reducing these errors. Overall, STILT/WRF/VPRM offers a powerful tool for continental- and regional-scale carbon flux estimates.

  20. Dichotomy between the band and hopping transport in organic crystals: insights from experiments.

    PubMed

    Yavuz, I

    2017-10-04

    The molecular understanding of charge transport in organic crystals has often been entangled with identifying its true dynamical origin. While in two distinct cases the complete delocalization and localization of charge carriers are associated with band-like and hopping-like transport, respectively, their possible coalescence poses some mystery. Moreover, the existing models are still controversial at ambient temperatures. Here, we review the issues in charge-transport theories of organic materials and then provide an overview of prominent transport models. We explored ∼60 organic crystals, the single-crystal hole/electron mobilities of which have been predicted by band-like and hopping-like transport models, separately. Our comparative results show that at room temperature neither class of models is exclusively capable of accurately predicting mobilities over a very broad range. Hopping-like models predict experimental mobilities well around μ ∼ 1 cm2 V-1 s-1 but systematically diverge at high mobilities. Similarly, band-like models perform well at μ > ∼50 cm2 V-1 s-1 but systematically diverge at lower mobilities. These results suggest the development of a unique and robust room-temperature transport model incorporating a mixture of these two extreme cases, whose relative importance is associated with their predominant regions. We deduce that while band models are beneficial for rationally designing high-mobility organic semiconductors, hopping models are better suited to elucidating the charge transport of most organic semiconductors.

  1. Meta-analysis in Stata using gllamm.

    PubMed

    Bagos, Pantelis G

    2015-12-01

    There are several user-written programs for performing meta-analysis in Stata (Stata Statistical Software; College Station, TX: StataCorp LP). These include metan, metareg, mvmeta, and glst. However, there are several cases for which these programs do not suffice. For instance, there is no software for performing univariate meta-analysis with correlated estimates, for multilevel or hierarchical meta-analysis, or for meta-analysis of longitudinal data. In this work, we show with practical applications that many disparate models, including but not limited to the ones mentioned earlier, can be fitted using gllamm. The software is very versatile and can handle a wide variety of models with applications in a wide range of disciplines. The method presented here takes advantage of these modeling capabilities and makes use of appropriate transformations, based on the Cholesky decomposition of the inverse of the covariance matrix (known as generalized least squares), in order to handle correlated data. The models described earlier can be thought of as special instances of a general linear mixed-model formulation, but to the author's knowledge, a general exposition that incorporates all the available models for meta-analysis as special cases, along with instructions to fit them in Stata, has not been presented so far. Source code is available at http://www.compgen.org/tools/gllamm. Copyright © 2015 John Wiley & Sons, Ltd.
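    The GLS transformation the abstract refers to can be illustrated in a few lines outside Stata: the inverse covariance matrix of correlated effect estimates is Cholesky-factored, the data are whitened to unit covariance, and ordinary least squares on the transformed data yields the pooled estimate. The two-study example below, with all numbers invented for illustration, is a minimal sketch of the idea, not the gllamm implementation.

```python
# GLS via Cholesky whitening for two correlated effect estimates (toy data).
import math

y = [0.30, 0.45]                 # two correlated effect estimates
V = [[0.04, 0.01],
     [0.01, 0.05]]               # their covariance matrix

# Invert the 2x2 covariance matrix: C = V^{-1}.
det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
C = [[ V[1][1] / det, -V[0][1] / det],
     [-V[1][0] / det,  V[0][0] / det]]

# Cholesky factor L of C = L L^T (explicit 2x2 case).
l11 = math.sqrt(C[0][0])
l21 = C[1][0] / l11
l22 = math.sqrt(C[1][1] - l21 ** 2)

# Whitened data: y* = L^T y; whitened design X* = L^T 1 (intercept-only model).
y_star = [l11 * y[0] + l21 * y[1], l22 * y[1]]
x_star = [l11 + l21, l22]

# OLS on the whitened data equals the GLS pooled estimate and its variance.
beta = sum(xs * ys for xs, ys in zip(x_star, y_star)) / sum(xs * xs for xs in x_star)
var = 1.0 / sum(xs * xs for xs in x_star)
print("pooled estimate: %.4f (SE %.4f)" % (beta, math.sqrt(var)))
```

Because the whitening satisfies X*'X* = 1'V⁻¹1 and X*'y* = 1'V⁻¹y, this reproduces the familiar inverse-variance-weighted GLS estimator exactly.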

  2. Lagrangian-averaged model for magnetohydrodynamic turbulence and the absence of bottlenecks.

    PubMed

    Pietarila Graham, Jonathan; Mininni, Pablo D; Pouquet, Annick

    2009-07-01

    We demonstrate that, for the case of quasiequipartition between the velocity and the magnetic field, the Lagrangian-averaged magnetohydrodynamics (LAMHD) alpha model reproduces well both the large-scale and the small-scale properties of turbulent flows; in particular, it displays no increased (superfilter) bottleneck effect with its ensuing enhanced energy spectrum at the onset of the subfilter scales. This is in contrast to the case of the neutral fluid, in which the Lagrangian-averaged Navier-Stokes alpha model is somewhat limited in its applications because of the formation of spatial regions with no internal degrees of freedom and subsequent contamination of superfilter-scale spectral properties. We argue that, as the Lorentz force breaks the conservation of circulation and enables spectrally nonlocal energy transfer (associated with Alfvén waves), it is responsible for the absence of a viscous bottleneck in magnetohydrodynamics (MHD), as compared to the fluid case. As LAMHD preserves Alfvén waves and the circulation properties of MHD, there is also no (superfilter) bottleneck found in LAMHD, making this method capable of large reductions in required numerical degrees of freedom; specifically, we find a reduction factor of approximately 200 when compared to a direct numerical simulation on a large grid of 1536^3 points at the same Reynolds number.

  3. Return on Investment (ROI) Framework Case Study: CTH.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corro, Janna L.

    CTH is an Eulerian code developed at Sandia National Laboratories capable of modeling the hydrodynamic response of explosives, liquids, gases, and solids. The code solves complex multi-dimensional problems characterized by large deformations and strong shocks that are composed of various material configurations. CTH includes models for material strength, fracture, porosity, and high explosive detonation and initiation. The code is an acronym for a complex series of names relating to its origin. A full explanation can be seen in Appendix A. The software breaks penetration simulations into millions of grid-like “cells”. As a modeled projectile impacts and penetrates a target, progressively smaller blocks of cells are placed around the projectile, which show in detail deformations and breakups. Additionally, the code is uniquely suited to modeling blunt impact and blast loading leading to human body injury.

  4. Modeling new production in upwelling centers - A case study of modeling new production from remotely sensed temperature and color

    NASA Technical Reports Server (NTRS)

    Dugdale, Richard C.; Wilkerson, Frances P.; Morel, Andre; Bricaud, Annick

    1989-01-01

    A method has been developed for estimating new production in upwelling systems from remotely sensed surface temperatures. A shift-up model predicts the rate of adaptation of nitrate uptake. The time base for the production cycle is obtained from a knowledge of surface heating rates and differences in temperature between the point of upwelling and each pixel. Nitrate concentrations are obtained from temperature-nitrate regression equations. The model was developed for the northwest Africa upwelling region, where shipboard measurements of new production were available. It can be employed in two modes, the first using only surface temperatures, and the second in which CZCS color data are incorporated. The major advance offered by this method is the capability to estimate new production on spatial and time scales inaccessible with shipboard approaches.
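    The two look-ups at the heart of the method, a temperature-nitrate regression for each pixel and a time base inferred from surface heating, can be sketched as follows. All coefficients below are hypothetical placeholders for illustration, not the values fitted for the northwest Africa region:

```python
# Hypothetical illustration of the two per-pixel look-ups the abstract describes.
# The regression coefficients and heating rate are made-up placeholders.
def nitrate_from_sst(sst_c, intercept=30.0, slope=-1.5):
    """Linear temperature-nitrate regression (µmol/L), clipped at zero."""
    return max(0.0, intercept + slope * sst_c)

def days_since_upwelling(sst_c, sst_source_c, heating_rate_c_per_day=0.2):
    """Time base: elapsed warming of a parcel relative to the upwelling source."""
    return (sst_c - sst_source_c) / heating_rate_c_per_day

no3 = nitrate_from_sst(16.0)            # nitrate estimate for a 16 °C pixel
t = days_since_upwelling(16.0, 15.0)    # days since the parcel upwelled
```

The estimated nitrate and elapsed time would then drive the shift-up model of nitrate uptake for each pixel.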

  5. Combining Computational Fluid Dynamics and Agent-Based Modeling: A New Approach to Evacuation Planning

    PubMed Central

    Epstein, Joshua M.; Pankajakshan, Ramesh; Hammond, Ross A.

    2011-01-01

    We introduce a novel hybrid of two fields—Computational Fluid Dynamics (CFD) and Agent-Based Modeling (ABM)—as a powerful new technique for urban evacuation planning. CFD is a predominant technique for modeling airborne transport of contaminants, while ABM is a powerful approach for modeling social dynamics in populations of adaptive individuals. The hybrid CFD-ABM method is capable of simulating how large, spatially-distributed populations might respond to a physically realistic contaminant plume. We demonstrate the overall feasibility of CFD-ABM evacuation design, using the case of a hypothetical aerosol release in Los Angeles to explore potential effectiveness of various policy regimes. We conclude by arguing that this new approach can be powerfully applied to arbitrary population centers, offering an unprecedented preparedness and catastrophic event response tool. PMID:21687788

  6. Exploring Integration in Action: Competencies as Building Blocks of Expertise.

    PubMed

    Mylopoulos, Maria; Borschel, Debaroti Tina; O'Brien, Tara; Martimianakis, Sofia; Woods, Nicole N

    2017-12-01

    Competency frameworks such as the CanMEDS roles and the ACGME core competencies may lead to the implicit assumption that physicians can learn and practice individual competencies in isolation. In contrast, models of adaptive expertise suggest that the integration of competencies reflects the capabilities of an expert physician. Thus, educational programming aimed at teaching discrete roles or competencies might overlook expert physician capabilities that are central to patient care. To develop expertise, learning opportunities must reflect expert capabilities. To better understand the relationship between competency-based medical education and expert development, the authors sought to explore how integrated competencies are enacted during patient care by postgraduate medical trainees. Using a cognitive ethnographic approach, in 2014-2015 the authors conducted observations and, to refine and elaborate these observations, ad hoc informal interviews with 13 postgraduate trainee participants. Data collection resulted in 92 hours of observation, 26 patient case portraits, and a total of 220 pages of field notes for analysis. Through analysis, the authors identified and examined moments when postgraduate trainees appeared to be simultaneously enacting multiple competencies. The authors identified two key expert capabilities in moments of integrated competence: finding complexity and being patient-centered. They described two mechanisms for these forms of integration: valuing the patient's narrative of their illness, and integrated understanding. Understanding integrated competencies as the building blocks of expert capabilities, along with recognizing the importance of mechanisms that support integration, offers an opportunity to use existing competency-based frameworks to understand and teach adaptive expertise.

  7. Coupling socioeconomic and lake systems for sustainability: a conceptual analysis using Lake St. Clair region as a case study.

    PubMed

    Mavrommati, Georgia; Baustian, Melissa M; Dreelin, Erin A

    2014-04-01

    Applying sustainability at an operational level requires understanding the linkages between socioeconomic and natural systems. We identified linkages in a case study of the Lake St. Clair (LSC) region, part of the Laurentian Great Lakes system. Our research phases included: (1) investigating and revising existing coupled human and natural systems frameworks to develop a framework for this case study; (2) testing and refining the framework by hosting a 1-day stakeholder workshop and (3) creating a causal loop diagram (CLD) to illustrate the relationships among the systems' key components. With stakeholder assistance, we identified four interrelated pathways that include water use and discharge, land use, tourism and shipping that impact the ecological condition of LSC. The interrelationships between the pathways of water use and tourism are further illustrated by a CLD with several feedback loops. We suggest that this holistic approach can be applied to other case studies and inspire the development of dynamic models capable of informing decision making for sustainability.

  8. Experimental study of the intraventricular filling vortex in diastolic dysfunction

    NASA Astrophysics Data System (ADS)

    Santhanakrishnan, Arvind; Samaee, Milad; Nelsen, Nicholas

    2016-11-01

    Heart failure with normal ejection fraction (HFNEF) is a clinical syndrome that is prevalent in over half of heart failure patients. HFNEF patients typically show diastolic dysfunction, caused by a decrease in relaxation capability of the left ventricular (LV) muscle tissue and/or an increase in LV chamber stiffness. Numerous studies using non-invasive medical imaging have shown that an intraventricular filling vortex is formed in the LV during diastole. We conducted 2D particle image velocimetry and hemodynamics measurements on a left heart simulator to investigate diastolic flow under increasing LV wall stiffness, LV wall thickness and heart rate (HR) conditions. Flexible-walled, optically clear LV physical models cast from silicone were fitted within a fluid-filled acrylic chamber. Pulsatile flow within the LV model was generated using a piston pump, and 2-component Windkessel elements were used to tune the least stiff (baseline) LV model to physiological conditions. The results show that peak circulation of the intraventricular filling vortex is diminished in conditions of diastolic dysfunction as compared to the baseline case. Increasing HR exacerbated this reduction in filling-vortex circulation across all cases.
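    For reference, vortex circulation is typically extracted from 2D PIV data by integrating the out-of-plane vorticity over the measurement region. The sketch below is a generic illustration of that computation, not the authors' processing pipeline:

```python
import numpy as np

# Generic circulation estimate from a planar PIV velocity field:
# Gamma = integral of omega_z over area, approximated as sum(omega_z) * dx * dy.
def circulation(u, v, dx, dy):
    """u, v: 2D velocity component arrays on a uniform grid; returns Gamma."""
    dvdx = np.gradient(v, dx, axis=1)   # d(v)/dx
    dudy = np.gradient(u, dy, axis=0)   # d(u)/dy
    omega_z = dvdx - dudy               # out-of-plane vorticity
    return np.sum(omega_z) * dx * dy
```

For solid-body rotation at angular rate Ω, the vorticity is uniformly 2Ω, which gives a quick sanity check on the implementation.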

  9. A numerical model of two-phase flow at the micro-scale using the volume-of-fluid method

    NASA Astrophysics Data System (ADS)

    Shams, Mosayeb; Raeini, Ali Q.; Blunt, Martin J.; Bijeljic, Branko

    2018-03-01

    This study presents a simple and robust numerical scheme to model two-phase flow in porous media where capillary forces dominate over viscous effects. The volume-of-fluid method is employed to capture the fluid-fluid interface whose dynamics is explicitly described based on a finite volume discretization of the Navier-Stokes equations. Interfacial forces are calculated directly on reconstructed interface elements such that the total curvature is preserved. The computed interfacial forces are explicitly added to the Navier-Stokes equations using a sharp formulation which effectively eliminates spurious currents. The stability and accuracy of the implemented scheme is validated on several two- and three-dimensional test cases, which indicate the capability of the method to model two-phase flow processes at the micro-scale. In particular we show how the co-current flow of two viscous fluids leads to greatly enhanced flow conductance for the wetting phase in corners of the pore space, compared to a case where the non-wetting phase is an inviscid gas.

  10. Transforming Roving-Rolling Explorer (TRREx) for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Edwin, Lionel Ernest

    All planetary surface exploration missions thus far have employed traditional rovers with a rocker-bogie suspension. These rovers can navigate moderately rough and flat terrain, but are not designed to traverse rugged terrain with steep slopes. The fact is, however, that many scientifically interesting missions require exploration platforms with capabilities for navigating such types of chaotic terrain. This issue motivates the development of new kinds of rovers that take advantage of the latest advances in robotic technologies to traverse rugged terrain efficiently. This dissertation proposes and analyses one such rover concept called the Transforming Roving-Rolling Explorer (TRREx) that is principally aimed at addressing the above issue. Biologically inspired by the way the armadillo curls up into a ball when threatened, and the way the golden wheel spider uses the dynamic advantages of a sphere to roll down hills when escaping danger, the novel TRREx rover can traverse like a traditional 6-wheeled rover over conventional terrain, but can also transform itself into a sphere, when necessary, to travel down steep inclines, or navigate rough terrain. This work presents the proposed design architecture and capabilities followed by the development of mathematical models and experiments that facilitate the mobility analysis of the TRREx in the rolling mode. The ability of the rover to self-propel in the rolling mode in the absence of a negative gradient increases its versatility and concept value. Therefore, a dynamic model of a planar version of the problem is first used to investigate the feasibility and value of such self-propelled locomotion - 'actuated rolling'. Construction and testing of a prototype Planar/Cylindrical TRREx that is capable of demonstrating actuated rolling is presented, and the results from the planar dynamic model are experimentally validated. 
This planar model is then built upon to develop a mathematical model of the spherical TRREx in the rolling mode, i.e. when the rover is a sphere and can steer itself through actuations that shift its center of mass to achieve the desired direction of roll. Case studies that demonstrate the capabilities of the rover in rolling mode and parametric analyses that investigate the dependence of the rover's mobility on its design are presented. This work highlights the contribution of the spherical rolling mode to the enhanced mobility of the TRREx rover and how it could enable challenging surface exploration missions in the future. It represents an important step toward developing a rover capable of traversing a variety of terrains that are impassable by the current fleet of rover designs, and thus has the potential to revolutionize planetary surface exploration.

  11. Implementing an error disclosure coaching model: A multicenter case study.

    PubMed

    White, Andrew A; Brock, Douglas M; McCotter, Patricia I; Shannon, Sarah E; Gallagher, Thomas H

    2017-01-01

    National guidelines call for health care organizations to provide around-the-clock coaching for medical error disclosure. However, frontline clinicians may not always seek risk managers for coaching. As part of a demonstration project designed to improve patient safety and reduce malpractice liability, we trained multidisciplinary disclosure coaches at 8 health care organizations in Washington State. The training was highly rated by participants, although not all emerged confident in their coaching skill. This multisite intervention can serve as a model for other organizations looking to enhance existing disclosure capabilities. Success likely requires cultural change and repeated practice opportunities for coaches. © 2017 American Society for Healthcare Risk Management of the American Hospital Association.

  12. An Overview of Unsteady Pressure Measurements in the Transonic Dynamics Tunnel

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Edwards, John W.; Bennett, Robert M.

    2000-01-01

    The NASA Langley Transonic Dynamics Tunnel has served as a unique national facility for aeroelastic testing for over forty years. A significant portion of this testing has been to measure unsteady pressures on models undergoing flutter, forced oscillations, or buffet. These tests have ranged from early launch vehicle buffet to flutter of a generic high-speed transport. This paper will highlight some of the test techniques, model design approaches, and the many unsteady pressure tests conducted in the TDT. The objectives of these tests and the results from the data acquired will be summarized for each case, and a brief discussion of ongoing research involving unsteady pressure measurements and new TDT capabilities will be presented.

  13. Human-scale interaction for virtual model displays: a clear case for real tools

    NASA Astrophysics Data System (ADS)

    Williams, George C.; McDowall, Ian E.; Bolas, Mark T.

    1998-04-01

    We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch- sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and interaction techniques it supports. Briefly, we look at the human factors issues and engineering challenges presented by this tool and, in general, by the class of hand-held user interfaces that are see-through.

  14. Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Toniolo, Matthew D.; Tartabini, Paul V.; Pamadi, Bandu N.; Hotchko, Nathaniel

    2008-01-01

    This paper discusses a generalized approach to the multi-body separation problems in a launch vehicle staging environment based on constraint force methodology and its implementation into the Program to Optimize Simulated Trajectories II (POST2), a widely used trajectory design and optimization tool. This development facilitates the inclusion of stage separation analysis into POST2 for seamless end-to-end simulations of launch vehicle trajectories, thus simplifying the overall implementation and providing a range of modeling and optimization capabilities that are standard features in POST2. Analysis and results are presented for two test cases that validate the constraint force equation methodology in a stand-alone mode and its implementation in POST2.
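    The constraint-force idea underlying such methods can be illustrated on a toy problem. The sketch below is a generic Lagrange-multiplier formulation (not the POST2 constraint force equation implementation): for M·q̈ = F + Jᵀλ with a holonomic constraint c(q) = 0, differentiating the constraint twice gives J·q̈ + J̇·q̇ = 0, which determines the multipliers λ:

```python
import numpy as np

# Generic constraint-force computation via Lagrange multipliers (illustrative,
# not the POST2/CFE implementation): solve for lam so the accelerations
# satisfy the twice-differentiated constraint J @ qdd + Jdot_qdot = 0.
def constraint_forces(M, F, J, Jdot_qdot):
    Minv_F = np.linalg.solve(M, F)
    Minv_JT = np.linalg.solve(M, J.T)
    lam = -np.linalg.solve(J @ Minv_JT, J @ Minv_F + Jdot_qdot)
    qdd = np.linalg.solve(M, F + J.T @ lam)  # constraint force is J.T @ lam
    return lam, qdd

# Toy case: unit point mass on a rigid unit rod (planar pendulum), momentarily
# at rest at 45 degrees below horizontal; gravity g = 9.81.
m, g = 1.0, 9.81
q = np.array([np.sqrt(0.5), -np.sqrt(0.5)])  # position on the unit circle
M = m * np.eye(2)
F = np.array([0.0, -m * g])
J = q.reshape(1, 2)          # gradient of c(q) = 0.5*(x^2 + y^2 - 1)
Jdot_qdot = np.array([0.0])  # qdot = 0 at this instant
lam, qdd = constraint_forces(M, F, J, Jdot_qdot)
```

The resulting acceleration remains tangent to the constraint surface, which is exactly the property a staging simulation needs while bodies are still joined.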

  15. Subgroup Benchmark Calculations for the Intra-Pellet Nonuniform Temperature Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Jung, Yeon Sang; Liu, Yuxuan

    A benchmark suite has been developed by Seoul National University (SNU) for intra-pellet nonuniform temperature distribution cases, based on practical temperature profiles corresponding to thermal power levels. Although a new subgroup capability for nonuniform temperature distributions was implemented in MPACT, no validation calculations had been performed for the new capability. This study focuses on benchmarking the new capability through a code-to-code comparison. Two continuous-energy Monte Carlo codes, McCARD and CE-KENO, are used to obtain reference solutions, and the MPACT results are compared to those of SNU's nTRACER code, which uses a similar cross-section library and subgroup method to obtain self-shielded cross sections.

  16. Statechart Analysis with Symbolic PathFinder

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.

    2012-01-01

    We report here on our ongoing work that addresses automated analysis and test case generation for software systems modeled using multiple Statechart formalisms. The work is motivated by large programs, such as NASA Exploration, that involve multiple systems interacting via safety-critical protocols and designed with different Statechart variants. To verify these safety-critical systems, we have developed Polyglot, a framework for modeling and analysis of model-based software written using different Statechart formalisms. Polyglot uses a common intermediate representation with customizable Statechart semantics and leverages the analysis and test generation capabilities of the Symbolic PathFinder tool. Polyglot is used as follows: First, the structure of the Statechart model (expressed in Matlab Stateflow or Rational Rhapsody) is translated into a common intermediate representation (IR). The IR is then translated into Java code that represents the structure of the model. The semantics are provided as "pluggable" modules.

  17. Thinking Together: Modeling Clinical Decision-Support as a Sociotechnical System

    PubMed Central

    Hussain, Mustafa I.; Reynolds, Tera L.; Mousavi, Fatemeh E.; Chen, Yunan; Zheng, Kai

    2017-01-01

    Computerized clinical decision-support (CDS) systems are members of larger sociotechnical systems, composed of human and automated actors who send, receive, and manipulate artifacts. Sociotechnical consideration is rare in the literature. This makes it difficult to comparatively evaluate the success of CDS implementations, and it may also indicate that sociotechnical context receives inadequate consideration in practice. To facilitate sociotechnical consideration, we developed the Thinking Together model, a flexible diagrammatical means of representing CDS systems as sociotechnical systems. To develop this model, we examined the literature with the lens of Distributed Cognition (DCog) theory. We then present two case studies of vastly different CDSSs, one almost fully automated and the other with minimal automation, to illustrate the flexibility of the Thinking Together model. We show that this model, informed by DCog and the CDS literature, is capable of supporting both research, by enabling comparative evaluation, and practice, by facilitating explicit sociotechnical planning and communication. PMID:29854164

  18. Addition of equilibrium air to an upwind Navier-Stokes code and other first steps toward a more generalized flow solver

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1991-01-01

    An upwind three-dimensional volume Navier-Stokes code is modified to facilitate modeling of complex geometries and flow fields represented by proposed National Aerospace Plane concepts. Code enhancements include an equilibrium air model, a generalized equilibrium gas model and several schemes to simplify treatment of complex geometric configurations. The code is also restructured for inclusion of an arbitrary number of independent and dependent variables. This latter capability is intended for eventual use to incorporate nonequilibrium/chemistry gas models, more sophisticated turbulence and transition models, or other physical phenomena which will require inclusion of additional variables and/or governing equations. Comparisons of computed results with experimental data and results obtained using other methods are presented for code validation purposes. Good correlation is obtained for all of the test cases considered, indicating the success of the current effort.

  19. Monolithic superelastic rods with variable flexural stiffness for spinal fusion: modeling of the processing-properties relationship.

    PubMed

    Facchinello, Yann; Brailovski, Vladimir; Petit, Yvan; Mac-Thiong, Jean-Marc

    2014-11-01

    The concept of a monolithic Ti-Ni spinal rod with variable flexural stiffness is proposed to reduce the risks associated with spinal fusion. The variable stiffness is conferred to the rod using the Joule-heating local annealing technique. The annealing temperature and the distributions of mechanical properties resulting from this thermal treatment are numerically modeled and experimentally measured. To illustrate the possible applications of such a modeling approach, two case studies are presented: (a) optimization of the Joule-heating strategy to reduce annealing time, and (b) modulation of the rod's overall flexural stiffness using partial annealing. A numerical model of a human spine coupled with the model of the variable flexural stiffness spinal rod developed in this work can ultimately be used to maximize the stabilization capability of spinal instrumentation, while simultaneously decreasing the risks associated with spinal fusion. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  20. Fluidized bed combustor modeling

    NASA Technical Reports Server (NTRS)

    Horio, M.; Rengarajan, P.; Krishnan, R.; Wen, C. Y.

    1977-01-01

    A general mathematical model for the prediction of performance of a fluidized bed coal combustor (FBC) is developed. The basic elements of the model consist of: (1) hydrodynamics of gas and solids in the combustor; (2) description of gas and solids contacting pattern; (3) kinetics of combustion; and (4) absorption of SO2 by limestone in the bed. The model is capable of calculating the combustion efficiency, axial bed temperature profile, carbon hold-up in the bed, oxygen and SO2 concentrations in the bubble and emulsion phases, sulfur retention efficiency and particulate carry over by elutriation. The effects of bed geometry, excess air, location of heat transfer coils in the bed, calcium to sulfur ratio in the feeds, etc. are examined. The calculated results are compared with experimental data. Agreement between the calculated results and the observed data is satisfactory in most cases. Recommendations to enhance the accuracy of prediction of the model are suggested.

  1. Requirements for multi-level systems pharmacology models to reach end-usage: the case of type 2 diabetes.

    PubMed

    Nyman, Elin; Rozendaal, Yvonne J W; Helmlinger, Gabriel; Hamrén, Bengt; Kjellsson, Maria C; Strålfors, Peter; van Riel, Natal A W; Gennemark, Peter; Cedersund, Gunnar

    2016-04-06

    We are currently in the middle of a major shift in biomedical research: unprecedented and rapidly growing amounts of data may be obtained today, from in vitro, in vivo and clinical studies, at molecular, physiological and clinical levels. To make use of these large-scale, multi-level datasets, corresponding multi-level mathematical models are needed, i.e. models that simultaneously capture multiple layers of the biological, physiological and disease-level organization (also referred to as quantitative systems pharmacology-QSP-models). However, today's multi-level models are not yet embedded in end-usage applications, neither in drug research and development nor in the clinic. Given the expectations and claims made historically, this seemingly slow adoption may seem surprising. Therefore, we herein consider a specific example-type 2 diabetes-and critically review the current status and identify key remaining steps for these models to become mainstream in the future. This overview reveals how, today, we may use models to ask scientific questions concerning, e.g., the cellular origin of insulin resistance, and how this translates to the whole-body level and short-term meal responses. However, before these multi-level models can become truly useful, they need to be linked with the capabilities of other important existing models, in order to make them 'personalized' (e.g. specific to certain patient phenotypes) and capable of describing long-term disease progression. To be useful in drug development, it is also critical that the developed models and their underlying data and assumptions are easily accessible. For clinical end-usage, in addition, model links to decision-support systems combined with the engagement of other disciplines are needed to create user-friendly and cost-efficient software packages.

  2. Modification of the MML turbulence model for adverse pressure gradient flows. M.S. Thesis - Akron Univ., 1993

    NASA Technical Reports Server (NTRS)

    Conley, Julianne M.

    1994-01-01

    Computational fluid dynamics is being used increasingly to predict flows for aerospace propulsion applications, yet there is still a need for an easy-to-use, computationally inexpensive turbulence model capable of accurately predicting a wide range of turbulent flows. The Baldwin-Lomax model is the most widely used algebraic model, even though it has known difficulties calculating flows with strong adverse pressure gradients and large regions of separation. The modified mixing length model (MML) was developed specifically to handle the separation which occurs on airfoils and has given significantly better results than the Baldwin-Lomax model. The success of these calculations warrants further evaluation and development of MML. The objective of this work was to evaluate the performance of MML for zero and adverse pressure gradient flows, and modify it as needed. The Proteus Navier-Stokes code was used for this study, and all results were compared with experimental data and with calculations made using the Baldwin-Lomax algebraic model, which is currently available in Proteus. The MML model was first evaluated for zero pressure gradient flow over a flat plate, then modified to produce the proper boundary layer growth. Additional modifications, based on experimental data for three adverse pressure gradient flows, were also implemented. The adapted model, called MMLPG (modified mixing length model for pressure gradient flows), was then evaluated for a typical propulsion flow problem, flow through a transonic diffuser. Three cases were examined: flow with no shock, a weak shock and a strong shock. The results of these calculations indicate that the objectives of this study have been met. Overall, MMLPG is capable of accurately predicting the adverse pressure gradient flows examined in this study, giving generally better agreement with experimental data than the Baldwin-Lomax model.
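    For context, algebraic models of this family descend from Prandtl's mixing-length relation, in which the eddy viscosity is ν_t = l²·|du/dy| with the inner-layer length l = κy damped near the wall. The sketch below uses the standard textbook constants (κ = 0.41, A⁺ = 26 with van Driest damping) and is not the MML/MMLPG formulation itself:

```python
import math

# Classical Prandtl mixing-length eddy viscosity with van Driest wall damping.
# Constants are the standard textbook values, not the MML/MMLPG calibration.
def eddy_viscosity(y, dudy, u_tau, nu, kappa=0.41, a_plus=26.0):
    """y: wall distance; dudy: velocity gradient; u_tau: friction velocity;
    nu: kinematic viscosity. Returns the eddy (turbulent) viscosity nu_t."""
    y_plus = y * u_tau / nu
    l = kappa * y * (1.0 - math.exp(-y_plus / a_plus))  # damped mixing length
    return l * l * abs(dudy)
```

Far from the wall the damping factor approaches one and ν_t → (κy)²·|du/dy|; at the wall it vanishes, as required.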

  3. Method to predict external store carriage characteristics at transonic speeds

    NASA Technical Reports Server (NTRS)

    Rosen, Bruce S.

    1988-01-01

    Development of a computational method for prediction of external store carriage characteristics at transonic speeds is described. The geometric flexibility required for treatment of pylon-mounted stores is achieved by computing finite difference solutions on a five-level embedded grid arrangement. A completely automated grid generation procedure facilitates applications. Store modeling capability consists of bodies of revolution with multiple fore and aft fins. A body-conforming grid improves the accuracy of the computed store body flow field. A nonlinear relaxation scheme developed specifically for modified transonic small disturbance flow equations enhances the method's numerical stability and accuracy. As a result, treatment of lower aspect ratio, more highly swept and tapered wings is possible. A limited supersonic freestream capability is also provided. Pressure, load distribution, and force/moment correlations show good agreement with experimental data for several test cases. A detailed computer program description for the Transonic Store Carriage Loads Prediction (TSCLP) Code is included.

  4. A Measurement Framework for Team Level Assessment of Innovation Capability in Early Requirements Engineering

    NASA Astrophysics Data System (ADS)

    Regnell, Björn; Höst, Martin; Nilsson, Fredrik; Bengtsson, Henrik

    When developing software-intensive products for a market-place it is important for a development organisation to create innovative features for coming releases in order to achieve advantage over competitors. This paper focuses on assessment of innovation capability at team level in relation to the requirements engineering that is taking place before the actual product development projects are decided, when new business models, technology opportunities and intellectual property rights are created and investigated through e.g. prototyping and concept development. The result is a measurement framework focusing on four areas: innovation elicitation, selection, impact and ways-of-working. For each area, candidate measurements were derived from interviews to be used as inspiration in the development of a tailored measurement program. The framework is based on interviews with participants of a software team with specific innovation responsibilities and validated through cross-case analysis and feedback from practitioners.

  5. High fidelity studies of exploding foil initiator bridges, Part 2: Experimental results

    NASA Astrophysics Data System (ADS)

    Neal, William; Bowden, Mike

    2017-01-01

    Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage, and in the case of EFIs, flyer velocity. Experimental methods have correspondingly generally been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA MHD, it is now possible to simulate these components in three dimensions and predict a greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately verified. In this second paper of a three part study, data is presented from a flexible foil EFI header experiment. This study has shown that there is significant bridge expansion before time of peak voltage and that heating within the bridge material is spatially affected by the microstructure of the metal foil.

  6. Investigation of upwind, multigrid, multiblock numerical schemes for three dimensional flows. Volume 1: Runge-Kutta methods for a thin layer Navier-Stokes solver

    NASA Technical Reports Server (NTRS)

    Cannizzaro, Frank E.; Ash, Robert L.

    1992-01-01

    A state-of-the-art computer code has been developed that incorporates a modified Runge-Kutta time integration scheme, upwind numerical techniques, multigrid acceleration, and multi-block capabilities (RUMM). A three-dimensional thin-layer formulation of the Navier-Stokes equations is employed. For turbulent flow cases, the Baldwin-Lomax algebraic turbulence model is used. Two different upwind techniques are available: van Leer's flux-vector splitting and Roe's flux-difference splitting. Full approximation multi-grid plus implicit residual and corrector smoothing were implemented to enhance the rate of convergence. Multi-block capabilities were developed to provide geometric flexibility. This feature allows the developed computer code to accommodate any grid topology or grid configuration with multiple topologies. The results shown in this dissertation were chosen to validate the computer code and display its geometric flexibility, which is provided by the multi-block structure.

  7. Performance Analysis of Direct-Sequence Code-Division Multiple-Access Communications with Asymmetric Quadrature Phase-Shift-Keying Modulation

    NASA Technical Reports Server (NTRS)

    Wang, C.-W.; Stark, W.

    2005-01-01

    This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.
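
    The Gaussian approximation the abstract evaluates treats the multiple-access interference as extra white noise. A minimal sketch of the classical form of that approximation, for BPSK DS-CDMA with random signature sequences (used here for illustration; the paper's AQPSK-specific analysis is more involved):

```python
from math import sqrt
from statistics import NormalDist

def ber_ga(num_users, proc_gain, ebn0_db):
    """Gaussian-approximation BER for BPSK DS-CDMA with random
    sequences: MAI from K-1 interferers contributes variance
    (K-1)/(3N) to the normalized noise, giving Pb = Q(sqrt(SINR))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sinr = 1.0 / ((num_users - 1) / (3.0 * proc_gain) + 1.0 / (2.0 * ebn0))
    return NormalDist().cdf(-sqrt(sinr))  # Q(x) = Phi(-x)

print(ber_ga(1, 31, 8))   # single user: reduces to the AWGN BPSK bound
print(ber_ga(10, 31, 8))  # nine interferers noticeably degrade the BER
```

    In the AQPSK setting of the paper, the in-phase and quadrature bit streams see different effective amplitudes, which is what produces the unequal error protection; a symmetric formula like the one above corresponds to the uniform-QPSK limit.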

  8. Smart Operations in Distributed Energy Resources System

    NASA Astrophysics Data System (ADS)

    Wei, Li; Jie, Shu; Zhang-XianYong; Qing, Zhou

    Smart grid capabilities are being proposed to help solve the challenges of system operations: trade-offs between energy and environmental needs must be constantly negotiated, while a reliable supply of electricity requires even greater assurance as threats of disruption rise. This paper mainly explores models for a distributed energy resources system (DG, storage, and load), and also reviews the evolving nature of electricity markets that deal with this complexity, with a change of emphasis toward using signals from these markets to affect power system control. Smart grid capabilities will also impact reliable operations, while cyber security issues must be addressed as a culture change that influences all system design, implementation, and maintenance. Lastly, the paper explores significant questions for further research and the need for a simulation environment that supports such investigation and informs deployments to mitigate operational issues as they arise.

  9. Data Storage and sharing for the long tail of science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B.; Pouchard, L.; Smith, P. M.

    Research data infrastructure such as storage must now accommodate new requirements resulting from trends in research data management that require researchers to store their data for the long term and make it available to other researchers. We propose Data Depot, a system and service at Purdue University that provides capabilities for shared space within a group, shared applications, flexible access patterns, and ease of transfer. We evaluate Depot as a solution for storing and sharing multi-terabyte datasets produced in the long tail of science, with a use case in soundscape ecology studies from the Human-Environment Modeling and Analysis Laboratory. We observe that with the capabilities enabled by Data Depot, researchers can easily deploy fine-grained data access control, manage data transfer and sharing, and integrate their workflows into a High Performance Computing environment.

  10. ALGEBRA: ALgorithm for the heterogeneous dosimetry based on GEANT4 for BRAchytherapy.

    PubMed

    Afsharpour, H; Landry, G; D'Amours, M; Enger, S; Reniers, B; Poon, E; Carrier, J-F; Verhaegen, F; Beaulieu, L

    2012-06-07

    Task group 43 (TG43)-based dosimetry algorithms are efficient for brachytherapy dose calculation in water. However, human tissues have chemical compositions and densities different from those of water. Moreover, the mutual shielding effect of seeds on each other (interseed attenuation) is neglected in TG43-based dosimetry platforms. The scientific community has expressed the need for an accurate dosimetry platform in brachytherapy. The purpose of this paper is to present ALGEBRA, a Monte Carlo platform for dosimetry in brachytherapy which is sufficiently fast and accurate for clinical and research purposes. ALGEBRA is based on the GEANT4 Monte Carlo code and is capable of handling the DICOM RT standard to recreate a virtual model of the treated site. Here, the performance of ALGEBRA is presented for the special case of LDR brachytherapy in permanent prostate and breast seed implants. However, the algorithm is also capable of handling other treatments such as HDR brachytherapy.
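
    For reference, the TG43 formalism that such Monte Carlo platforms move beyond factorizes the dose rate around a single seed in water as

```latex
\dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\; g_L(r)\; F(r,\theta)
```

    where S_K is the air-kerma strength, Lambda the dose-rate constant, G_L the line-source geometry function, g_L the radial dose function, and F the 2D anisotropy function, with the reference point at r_0 = 1 cm and theta_0 = pi/2. Every factor is measured or computed in water, which is precisely why tissue heterogeneity and interseed attenuation fall outside the formalism.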

  11. Development of a Suite of Analytical Tools for Energy and Water Infrastructure Knowledge Discovery

    NASA Astrophysics Data System (ADS)

    Morton, A.; Piburn, J.; Stewart, R.; Chandola, V.

    2017-12-01

    Energy and water generation and delivery systems are inherently interconnected. With demand for energy growing, the energy sector is experiencing increasing competition for water. With increasing population and changing environmental, socioeconomic, and demographic scenarios, new technology and investment decisions must be made for optimized and sustainable energy-water resource management. This also requires novel scientific insights into the complex interdependencies of energy-water infrastructures across multiple space and time scales. To address this need, we have developed a suite of analytical tools to support an integrated, data-driven modeling, analysis, and visualization capability for understanding, designing, and developing efficient local and regional practices related to the energy-water nexus. This work reviews the analytical capabilities available, along with a series of case studies designed to demonstrate the potential of these tools for illuminating energy-water nexus solutions and supporting strategic (federal) policy decisions.

  12. IVAN: Intelligent Van for the Distribution of Pharmaceutical Drugs

    PubMed Central

    Moreno, Asier; Angulo, Ignacio; Perallos, Asier; Landaluce, Hugo; Zuazola, Ignacio Julio García; Azpilicueta, Leire; Astrain, José Javier; Falcone, Francisco; Villadangos, Jesús

    2012-01-01

    This paper describes a telematic system based on an intelligent van which is capable of tracing pharmaceutical drugs over delivery routes from a warehouse to pharmacies, without altering carriers' daily conventional tasks. The intelligent van understands its environment, taking into account its location, the assets, and the predefined delivery route, with the capability of reporting incidents to carriers in the event of deviations from the established distribution plan. It is a non-intrusive solution which represents a successful experience of using smart environments and an optimized Radio Frequency Identification (RFID) embedded system in a viable way to resolve a real industrial need in the pharmaceutical industry. The combination of deterministic modeling of the indoor vehicle, the implementation of an ad-hoc radiating element, and an agile software platform within an overall system architecture leads to a competitive, flexible and scalable solution. PMID:22778659

  13. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    PubMed

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
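
    As a simplified illustration of how such networks compute, the classical projection neural network (one of the existing optimization networks the abstract compares against) can be simulated by Euler integration; its equilibria are the KKT points of a bound-constrained problem. A sketch for a convex quadratic (the paper's network extends this idea to pseudoconvex objectives and equality constraints):

```python
import numpy as np

def projection_network(grad_f, lower, upper, x0, alpha=0.1, dt=0.01, steps=5000):
    """Euler simulation of the projection neural network dynamics
    dx/dt = -x + P_Omega(x - alpha * grad_f(x)),
    where P_Omega clips the state onto the box [lower, upper]."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        proj = np.clip(x - alpha * grad_f(x), lower, upper)
        x = x + dt * (proj - x)
    return x

# minimize (x0 - 2)^2 + (x1 + 1)^2 subject to 0 <= x <= 1:
# the unconstrained minimizer (2, -1) is clipped to the box corner (1, 0)
grad = lambda x: 2.0 * (x - np.array([2.0, -1.0]))
print(projection_network(grad, 0.0, 1.0, [0.5, 0.5]))  # approx [1. 0.]
```

    The "designed parameters larger than derived lower bounds" condition in the abstract plays the role that the step scaling alpha plays here: it must be chosen so the dynamics actually descend to an optimum.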

  14. On testing two major cumulus parameterization schemes using the CSU Regional Atmospheric Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.

    1993-10-01

    One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection over an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved in the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes to reproduce the growth and life cycle of a convective system can be evaluated. These 2-D tests form the basis for further 3-D realistic simulations with model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) used in Kao and Ogura (1987) and the Kuo scheme (Kuo, 1974) used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).

  15. Deep Space Network capabilities for receiving weak probe signals

    NASA Technical Reports Server (NTRS)

    Asmar, Sami; Johnston, Doug; Preston, Robert

    2004-01-01

    This paper describes the Deep Space Network's capability for receiving weak probe signals and highlights cases of critical communications for the Mars rovers and Saturn Orbit Insertion, as well as preparations for radio tracking of the Huygens probe at (non-DSN) radio telescopes.

  16. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as optimization criteria should be able to locate a similar, unknown optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients, with the log-likelihood estimator being slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
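
    The comparison can be reproduced in miniature: for a Gaussian predictive distribution the CRPS has a closed form (Gneiting and Raftery), so both criteria can be minimized numerically on synthetic data. A sketch, assuming well-specified Gaussian observations (constant mean and spread rather than a full regression, for brevity):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.5, size=5000)  # synthetic, correctly Gaussian "observations"

def mean_crps(params, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2),
    averaged over the observations."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (y - mu) / sigma
    crps = sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))
    return crps.mean()

def neg_loglik(params, y):
    mu, log_sigma = params
    return -norm.logpdf(y, mu, np.exp(log_sigma)).mean()

fit_crps = minimize(mean_crps, x0=[0.0, 0.0], args=(y,)).x
fit_ml = minimize(neg_loglik, x0=[0.0, 0.0], args=(y,)).x
print(fit_crps, fit_ml)  # both recover mu near 2.0 and log-sigma near log(1.5)
```

    With a correct distributional assumption the two fits agree, mirroring the synthetic case study; repeating the experiment with skewed observations reproduces the kind of discrepancy the abstract attributes to misspecification.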

  17. ProTSAV: A protein tertiary structure analysis and validation server.

    PubMed

    Singh, Ankita; Kaushik, Rahul; Mishra, Avinash; Shanker, Asheesh; Jayaram, B

    2016-01-01

    Quality assessment of predicted model structures of proteins is as important as protein tertiary structure prediction itself. A highly efficient quality assessment of predicted model structures directs further research on function. Here we present a new server, ProTSAV, capable of evaluating predicted model structures based on several popular online servers and standalone tools. ProTSAV furnishes the user with a single quality score in the case of an individual protein structure, along with a graphical representation and ranking in the case of multiple protein structure assessment. The server is validated on ~64,446 protein structures, including experimental structures from RCSB and predicted model structures for CASP targets and from public decoy sets. ProTSAV succeeds in predicting the quality of protein structures with a specificity of 100% and a sensitivity of 98% on experimentally solved structures, and achieves a specificity of 88% and a sensitivity of 91% on predicted protein structures of CASP11 targets under 2 Å. The server overcomes the limitations of any single server/method and is seen to be robust in helping in quality assessment. ProTSAV is freely available at http://www.scfbio-iitd.res.in/software/proteomics/protsav.jsp. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Proceedings of the First All-USGS Modeling Conference, November 14-17, 2005

    USGS Publications Warehouse

    Frondorf, Anne

    2007-01-01

    Preface: The First All-USGS Modeling Conference was held November 14-17, 2005, in Port Angeles, Washington. U.S. Geological Survey (USGS) participants at the conference came from USGS headquarters and all USGS regions and represented all four science disciplines-Biology, Geography, Geology, and Water. The conference centered on selected oral case study presentations and posters on current USGS scientific modeling capabilities and activities. Abstracts for these case study presentations and posters are presented here. On behalf of all the participants of the First All-USGS Modeling Conference, we appreciate the support of Dee Ann Nelson and the staff of the Olympic Park Institute in providing the conference facilities; Dr. Jerry Freilich and Dr. Brian Winter of the National Park Service, Olympic National Park, for organizing and leading the conference field trip; and Debra Becker and Amy Newman, USGS Western Fisheries Research Center, Seattle, Washington, and Tammy Hansel, USGS Geospatial Information Office, Reston, Virginia, for providing technical support for the conference. The organizing committee for the conference included Jenifer Bracewell, Jacoby Carter, Jeff Duda, Anne Frondorf, Linda Gundersen, Tom Gunther, Pat Jellison, Rama Kotra, George Leavesley, and Doug Muchoney.

  19. The Possibilities for University-Based Public-Good Professional Education: A Case-Study from South Africa Based on the "Capability Approach"

    ERIC Educational Resources Information Center

    McLean, Monica; Walker, Melanie

    2012-01-01

    The education of professionals oriented to poverty reduction and the public good is the focus of the article. Sen's "capability approach" is used to conceptualise university-based professional education as a process of developing public-good professional capabilities. The main output of a research project on professional education in…

  20. Steel Shear Walls, Behavior, Modeling and Design

    NASA Astrophysics Data System (ADS)

    Astaneh-Asl, Abolhassan

    2008-07-01

    In recent years steel shear walls have become one of the more efficient lateral-load-resisting systems in tall buildings. The basic steel shear wall system consists of a steel plate welded to boundary steel columns and boundary steel beams. In some cases the boundary columns have been concrete-filled steel tubes. Seismic behavior of steel shear wall systems, during actual earthquakes and in laboratory cyclic tests, indicates that the systems are quite ductile and can be designed economically to have sufficient stiffness, strength, ductility, and energy dissipation capacity to resist the seismic effects of strong earthquakes. This paper, after summarizing past research, presents the results of two tests of an innovative steel shear wall system in which the boundary elements are concrete-filled tubes. A review of currently available analytical models of steel shear walls is then provided, with a discussion of the capabilities and limitations of each model. We have observed that the tension-only "strip model", which forms the basis of the current AISC seismic design provisions for steel shear walls, is not capable of predicting the behavior of steel shear walls with a length-to-thickness ratio less than about 600, which is the range most common in buildings. The main reason for this shortcoming is that the AISC provisions ignore the compression field in the shear wall, which can be significant in typical walls. The AISC method is also not capable of incorporating stresses in the shear wall due to overturning moments. A more rational seismic design procedure for shear walls, proposed in 2000 by the author, is summarized in the paper. The design method, based on procedures used for the design of steel plate girders, takes into account both tension and compression stress fields and is applicable to all values of the length-to-thickness ratio of steel shear walls. The method is also capable of including the effect of overturning moments and any normal forces that might act on the steel shear wall.
