DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyack, B.E.; Dhir, V.K.; Gieseke, J.A.
1992-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. The newest version of MELCOR is Version 1.8.1, July 1991. MELCOR development has reached the point that the United States Nuclear Regulatory Commission sponsored a broad technical review by recognized experts to determine or confirm the technical adequacy of the code for the serious and complex analyses it is expected to perform. For this purpose, an eight-member MELCOR Peer Review Committee was organized. The Committee has completed its review of the MELCOR code: the review process and findings of the MELCOR Peer Review Committee are documented in this report. The Committee has determined that recommendations in five areas are appropriate: (1) MELCOR numerics, (2) models missing from MELCOR Version 1.8.1, (3) existing MELCOR models needing revision, (4) the need for expanded MELCOR assessment, and (5) documentation.
Containment Sodium Chemistry Models in MELCOR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Humphries, Larry L.; Denman, Matthew R
To meet regulatory needs for the future development of sodium fast reactors, including licensing requirements, Sandia National Laboratories is modernizing MELCOR, a severe accident analysis computer code developed for the U.S. Nuclear Regulatory Commission (NRC). Specifically, Sandia is modernizing MELCOR to include the capability to model sodium reactors. However, Sandia's modernization effort primarily focuses on the containment response aspects of sodium reactor accidents. Sandia began modernizing MELCOR in 2013 to allow sodium, rather than the water of conventional light water reactors, as the working coolant. In the past three years, Sandia has been implementing the sodium chemistry containment models of CONTAIN-LMR, a legacy NRC code, into MELCOR. These chemistry models include spray fire, pool fire, and atmosphere chemistry models. Only the first two of these chemistry models have been implemented to date, though all are intended to be incorporated into MELCOR. A new package called "NAC" has been created to manage the sodium chemistry models more efficiently. In 2017 Sandia began validating the implemented models in MELCOR by simulating available experiments. The CONTAIN-LMR sodium models also include sodium atmosphere chemistry and sodium-concrete interaction models. This paper presents the sodium property models, the implemented models, implementation issues, and a path towards validation against existing experimental data.
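As a rough point of reference for the energy scale these fire models address, the sketch below estimates the heat release rate of a sodium pool fire from standard-table thermochemistry, assuming complete oxidation to Na2O; the burning flux and pool area are hypothetical illustrative inputs, not CONTAIN-LMR or MELCOR parameters.

```python
# Rough, illustrative energy-release estimate for a sodium pool fire, assuming
# complete oxidation to Na2O. The heat of formation (~ -414 kJ/mol Na2O) is a
# standard-table value; the burning flux and area below are hypothetical.

M_NA = 0.02299          # kg/mol, sodium molar mass
DH_NA2O = -414.2e3      # J/mol, standard enthalpy of formation of Na2O

def pool_fire_power(area_m2: float, burn_flux_kg_m2_s: float) -> float:
    """Heat release rate [W] for a sodium pool burning to Na2O."""
    heat_per_kg_na = -DH_NA2O / (2.0 * M_NA)   # ~9.0 MJ per kg of Na burned
    return area_m2 * burn_flux_kg_m2_s * heat_per_kg_na

if __name__ == "__main__":
    # Hypothetical 5 m^2 pool burning at 0.01 kg/m^2/s -> roughly 0.45 MW
    print(f"{pool_fire_power(5.0, 0.01) / 1e6:.2f} MW")
```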
MELCOR model for an experimental 17x17 spent fuel PWR assembly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoni, Jeffrey
2010-11-01
A MELCOR model has been developed to simulate a pressurized water reactor (PWR) 17 x 17 assembly in a spent fuel pool rack cell undergoing severe accident conditions. To the extent possible, the MELCOR model reflects the actual geometry, materials, and masses present in the experimental arrangement for the Sandia Fuel Project (SFP). The report presents an overview of the SFP experimental arrangement, the MELCOR model specifications, demonstration calculation results, and the input model listing.
MELCOR computer code manuals: Primer and user's guides, Version 1.8.3, September 1994. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.
1995-03-01
MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the US Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users' Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.
MELCOR/CONTAIN LMR Implementation Report - FY16 Progress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Humphries, Larry L.
2016-11-01
This report describes the progress of implementing the CONTAIN-LMR sodium physics and chemistry models in MELCOR 2.1. Over the past three years, the implementation has included the addition of sodium equations of state and sodium properties from two different sources. The first source is based on previous work at Idaho National Laboratory, which modified MELCOR to include a liquid lithium equation of state as a working fluid for nuclear fusion safety research. The second source uses properties generated for the SIMMER code. The implemented modeling has been tested, and results are reported in this document. In addition, CONTAIN-LMR was derived from an early version of the CONTAIN code, so many physical models developed since that version are not available in it. Therefore, CONTAIN 2 has been updated with the sodium models from CONTAIN-LMR as CONTAIN2-LMR, which may be used for code-to-code comparison with CONTAIN-LMR and MELCOR once the sodium chemistry models from CONTAIN-LMR have been completed. Both the spray fire and pool fire chemistry routines from CONTAIN-LMR have been integrated into MELCOR 2.1, and debugging and testing are in progress. Because MELCOR only models the equation of state for the liquid and gas phases of the coolant, a modeling gap still exists for experiments or accident conditions in which the ambient temperature is below the freezing point of sodium. An alternative method is under investigation to overcome this gap. We are no longer working on a separate branch from the main branch of MELCOR 2.1, since the major modeling work in MELCOR 2.1 has been completed. At the current stage, the newly implemented sodium chemistry models will be part of the main MELCOR release version (MELCOR 2.2). This report discusses the accomplishments and issues relating to the implementation, and also reports on the planned completion of all remaining tasks in FY2017, including implementation of the atmospheric chemistry model and the sodium-concrete interaction model.
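To give a concrete sense of the kind of property correlation such an equation-of-state implementation relies on, the sketch below evaluates a liquid sodium density fit taken from the open literature (the Fink and Leibowitz correlation, ANL/RE-95/2); it is shown only as an illustration and is not claimed to be the exact form used in MELCOR, SIMMER, or the fusion safety database.

```python
# Illustrative sketch (not the MELCOR/SIMMER source): liquid sodium density
# from the open-literature Fink & Leibowitz correlation (ANL/RE-95/2),
# shown only to indicate the kind of property fit an EOS package needs.

def sodium_liquid_density(T_kelvin: float) -> float:
    """Approximate liquid sodium density [kg/m^3] between the melting point
    (~371 K) and the critical point."""
    Tc = 2503.7  # critical temperature [K]
    f = 1.0 - T_kelvin / Tc
    return 219.0 + 275.32 * f + 511.58 * f ** 0.5

if __name__ == "__main__":
    for T in (400.0, 700.0, 1000.0, 1154.0):  # 1154 K ~ normal boiling point
        print(f"T = {T:7.1f} K  rho = {sodium_liquid_density(T):6.1f} kg/m^3")
```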
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Dhongik S; Jo, HangJin; Corradini, Michael L
2017-04-01
Condensation of steam vapor is an important mode of energy removal from the reactor containment. The presence of noncondensable gas complicates the process and makes it difficult to model. MELCOR, one of the more widely used system codes for containment analyses, uses the heat and mass transfer analogy to model condensation heat transfer. To investigate previously reported nodalization dependence in the natural convection flow regime, the MELCOR condensation model as well as other models are studied. The nodalization-dependence issue is resolved by using a physical length from the actual geometry, rather than the node size of each control volume, as the characteristic length scale for MELCOR containment analyses. At the transition to the turbulent natural convection regime, the McAdams correlation for convective heat transfer produces a better prediction than the original MELCOR model. The McAdams correlation is implemented in MELCOR, and the prediction is validated against a set of experiments on a scaled AP600 containment. MELCOR with the implemented model produces improved predictions. For steam molar fractions in the gas mixture greater than about 0.58, the predictions are within the uncertainty margin of the measurements. The simulation results still underestimate the heat transfer from the gas-steam mixture, implying that conservative predictions are provided.
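For orientation, the McAdams correlation referenced above is commonly written as Nu = 0.13(Gr·Pr)^(1/3) for turbulent natural convection. The sketch below shows that form together with an illustrative laminar branch; the property values, transition threshold, and laminar correlation are assumptions for illustration and are not the authors' MELCOR implementation.

```python
# Minimal sketch of the turbulent natural-convection correlation attributed to
# McAdams, Nu = 0.13 * (Gr * Pr)**(1/3), as referenced in the abstract.
# The laminar branch, transition threshold, and property values below are
# illustrative assumptions, not the authors' MELCOR implementation.

def nusselt_natural_convection(Gr: float, Pr: float) -> float:
    """Average Nusselt number for natural convection on a vertical surface."""
    Ra = Gr * Pr
    if Ra < 1e9:                      # laminar branch (illustrative)
        return 0.59 * Ra ** 0.25
    return 0.13 * Ra ** (1.0 / 3.0)   # McAdams turbulent branch

def h_conv(Gr: float, Pr: float, k_gas: float, L_char: float) -> float:
    """Convective heat transfer coefficient [W/m^2/K]. Note that on the
    turbulent branch Nu ~ Ra**(1/3) ~ L_char, so h becomes nearly independent
    of the characteristic length, consistent with reduced nodalization
    sensitivity."""
    return nusselt_natural_convection(Gr, Pr) * k_gas / L_char

if __name__ == "__main__":
    print(h_conv(Gr=5e10, Pr=0.7, k_gas=0.025, L_char=3.0))  # ~3.5 W/m^2/K
```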
Development of a MELCOR Sodium Chemistry (NAC) Package - FY17 Progress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Humphries, Larry L.
This report describes the status of the development of the MELCOR Sodium Chemistry (NAC) package. This development is based on the CONTAIN-LMR sodium physics and chemistry models to be implemented in MELCOR. In the past three years, sodium equations of state for use as a working fluid, drawn from nuclear fusion safety research and from the SIMMER code, have been implemented into MELCOR. The chemistry models from the CONTAIN-LMR code, such as the spray and pool fire models, have also been implemented into MELCOR. This report describes the implemented models and the issues encountered. Model descriptions and input descriptions are provided. Development testing of the spray and pool fire models is described, including the code-to-code comparison with CONTAIN-LMR. The report ends with an expected timeline for the remaining models to be implemented, such as the atmosphere chemistry and sodium-concrete interaction models, and for the experimental validation tests.
Quicklook overview of model changes in Melcor 2.2: Rev 6342 to Rev 9496
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L.
2017-05-01
MELCOR 2.2 is a significant official release of the MELCOR code with many new models and model improvements. This report provides the code user with a quick review and characterization of new models added, changes to existing models, the effect of code changes during this code development cycle (rev 6342 to rev 9496), and a preview of validation results with this code version. More detailed information is found in the code Subversion logs as well as the User Guide and Reference Manuals.
MELCOR simulations of the severe accident at Fukushima Daiichi Unit 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoni, Jeffrey; Gauntt, Randall; Kalinich, Donald
2014-05-01
In response to the accident at the Fukushima Daiichi nuclear power station in Japan, the U.S. Nuclear Regulatory Commission and U.S. Department of Energy agreed to jointly sponsor an accident reconstruction study as a means of assessing the severe accident modeling capability of the MELCOR code. Objectives of the project included reconstruction of the accident progressions using computer models and accident data, and validation of the MELCOR code and the Fukushima models against plant data. A MELCOR 2.1 model of the Fukushima Daiichi Unit 3 reactor is developed using plant-specific information and accident-specific boundary conditions, which involve considerable uncertainty due to the inherent nature of severe accidents. Publicly available thermal-hydraulic data and radioactivity release estimates have evolved significantly since the accidents. Such data are expected to continually change as the reactors are decommissioned and more measurements are performed. As a result, the MELCOR simulations in this work primarily use boundary conditions that are based on available plant data as of May 2012.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbajo, J.J.
1995-12-31
This study compares results obtained with two U.S. Nuclear Regulatory Commission (NRC)-sponsored codes, MELCOR version 1.8.3 (1.8PQ) and SCDAP/RELAP5 Mod3.1 release C, for the same transient - a low-pressure, short-term station blackout accident at the Browns Ferry nuclear plant. This work is part of MELCOR assessment activities to compare core damage progression calculations of MELCOR against SCDAP/RELAP5 since the two codes model core damage progression very differently.
MELCOR/CONTAIN LMR Implementation Report. FY14 Progress
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L; Louie, David L.Y.
2014-10-01
This report describes the preliminary implementation of sodium thermophysical properties and the design documentation for the sodium models of CONTAIN-LMR to be implemented into MELCOR 2.1. In the past year, the implementation included sodium properties from two different sources. The first source is based on previous work at Idaho National Laboratory, which modified MELCOR to include a liquid lithium equation of state as a working fluid for nuclear fusion safety research. To minimize the impact on MELCOR, the fusion safety database (FSD) was implemented so that it is invoked when detected in the data input file. The FSD methodology has been adopted for the current work, but it may be subject to modification as the project continues. The second source uses properties generated for the SIMMER code. Preliminary testing and results from this implementation of sodium properties are given. This year, the design document for the CONTAIN-LMR sodium models, such as the two-condensable option, sodium spray fire, and sodium pool fire, is being developed. This design document is intended to serve as a guide for the MELCOR implementation. In addition, the CONTAIN-LMR code was based on an earlier version of the CONTAIN code, so many physical models developed since that version are not captured by it. Although CONTAIN 2, which represents the latest development of CONTAIN, contains some sodium-specific models, these are not complete; therefore, CONTAIN 2 should be updated with all of the sodium models from CONTAIN-LMR and used as a comparison code for MELCOR. This implementation should be completed early next year, while the sodium models from CONTAIN-LMR are being integrated into MELCOR. For testing, CONTAIN decks have been developed for verification and validation use.
MELCOR/CONTAIN LMR Implementation Report-Progress FY15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L.; Louie, David L.Y.
2016-01-01
This report describes the progress of the CONTAIN-LMR sodium physics and chemistry models to be implemented into MELCOR 2.1, as well as the progress in implementing these models into CONTAIN 2. In the past two years, the implementation included the addition of sodium equations of state and sodium properties from two different sources. The first source is based on previous work at Idaho National Laboratory, which modified MELCOR to include a liquid lithium equation of state as a working fluid for nuclear fusion safety research. The second source uses properties generated for the SIMMER code. Testing and results from this implementation of sodium properties are given. In addition, the CONTAIN-LMR code was derived from an early version of the CONTAIN code, and many physical models developed since that version are not captured by it. Therefore, CONTAIN 2 is being updated with the sodium models in CONTAIN-LMR in order to facilitate verification of these models against the MELCOR code. Although CONTAIN 2, which represents the latest development of CONTAIN, now contains many of the sodium-specific models, this work is not complete due to challenges from the lower-cell architecture in CONTAIN 2, which differs from that of CONTAIN-LMR. This implementation should be completed in the coming year, while the sodium models from CONTAIN-LMR are being integrated into MELCOR. For testing, CONTAIN decks have been developed for verification and validation use. For the implementation of the sodium models into MELCOR, a separate sodium model branch was created. Because of extensive development in the mainstream MELCOR 2.1 code and the requirement to merge the latest code version into this branch, the integration of the sodium models was redirected to implement the sodium chemistry models first. This change led to delays in the actual implementation. To aid the future implementation of sodium models, a new sodium chemistry package was created; the implementation of the sodium chemistry is therefore discussed in this report.
NSRD-10: Leak Path Factor Guidance Using MELCOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, David; Humphries, Larry L.
Estimating the source term from a U.S. Department of Energy (DOE) nuclear facility requires that analysts know how to apply the simulation tools used, such as the MELCOR code, particularly for a complicated facility that may include an air ventilation system and other active systems that can influence the environmental pathway of the released materials. DOE has designated MELCOR 1.8.5, an unsupported version, as a DOE ToolBox code in its Central Registry, which includes a leak-path-factor guidance report written in 2004 that did not include experimental validation data. Continuing to use this MELCOR version would require additional verification and validation, which may not be feasible from a project cost standpoint. Instead, the recent version of MELCOR should be used. Without developer support and experimental data validation, it is difficult to convince regulators that the calculated source term from a DOE facility is accurate and defensible. This research replaces the obsolete version in the 2004 DOE leak path factor guidance report with MELCOR 2.1 (the latest version of MELCOR, with continuing model development and user support) and includes applicable experimental data from the reactor safety arena and from the experimental data used in DOE-HDBK-3010. This research provides best-practice values used in MELCOR 2.1 specifically for leak path factor determination. With these enhancements, the revised leak-path-factor guidance report should provide confidence to the DOE safety analyst using MELCOR as a source-term determination tool for mitigated accident evaluations.
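For readers unfamiliar with where the leak path factor enters, the sketch below writes out the five-factor source-term formula from DOE-HDBK-3010 (source term = MAR × DR × ARF × RF × LPF); the numerical values are hypothetical placeholders, not values from the guidance report or from a MELCOR calculation.

```python
# Sketch of the DOE-HDBK-3010 five-factor source-term formula in which the
# leak path factor (LPF) enters; the numerical values below are hypothetical
# placeholders, not values from the guidance report or MELCOR.

def source_term(MAR: float, DR: float, ARF: float, RF: float, LPF: float) -> float:
    """Respirable source term released to the environment [same units as MAR]."""
    return MAR * DR * ARF * RF * LPF

if __name__ == "__main__":
    st = source_term(
        MAR=10.0,    # material at risk, e.g. grams of powder (hypothetical)
        DR=1.0,      # damage ratio
        ARF=1e-3,    # airborne release fraction
        RF=0.1,      # respirable fraction
        LPF=0.05,    # leak path factor, e.g. from a MELCOR ventilation model
    )
    print(f"source term = {st:.2e} g")
```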
Ex-Vessel Core Melt Modeling Comparison between MELTSPREAD-CORQUENCH and MELCOR 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R.; Farmer, Mitchell; Francis, Matthew W.
System-level code analyses by both United States and international researchers predict major core melting, bottom head failure, and corium-concrete interaction for Fukushima Daiichi Unit 1 (1F1). Although system codes such as MELCOR and MAAP are capable of capturing a wide range of accident phenomena, they currently do not contain detailed models for evaluating some ex-vessel core melt behavior. However, specialized codes containing more detailed modeling are available for melt spreading, such as MELTSPREAD, as well as for long-term molten corium-concrete interaction (MCCI) and debris coolability, such as CORQUENCH. In a preceding study, Enhanced Ex-Vessel Analysis for Fukushima Daiichi Unit 1: Melt Spreading and Core-Concrete Interaction Analyses with MELTSPREAD and CORQUENCH, the MELTSPREAD-CORQUENCH codes predicted that the 1F1 core melt readily cooled, in contrast to predictions by MELCOR. The user community has taken notice and is in the process of updating their system codes, specifically MAAP and MELCOR, to improve and reduce conservatism in their ex-vessel core melt models. This report investigates why the MELCOR v2.1 code, compared to the MELTSPREAD and CORQUENCH 3.03 codes, yields differing predictions of ex-vessel melt progression. To accomplish this, the differences in the treatment of the ex-vessel melt with respect to melt spreading and long-term coolability are examined. The differences in modeling approaches are summarized, and a comparison of example code predictions is provided.
Insights Gained from Forensic Analysis with MELCOR of the Fukushima-Daiichi Accidents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Nathan C.; Gauntt, Randall O.
Since the accidents at Fukushima-Daiichi, Sandia National Laboratories has been modeling these accident scenarios using the severe accident analysis code MELCOR. MELCOR is a widely used computer code developed at Sandia National Laboratories since ~1982 for the U.S. Nuclear Regulatory Commission. Insights from the modeling of these accidents are being used to better inform future code development and potentially improve accident management. To date, the need to better capture in-vessel thermal-hydraulics and ex-vessel melt coolability and concrete interactions has led to the implementation of new models. The most recent analyses, presented in this paper, have been in support of the Organization for Economic Cooperation and Development Nuclear Energy Agency's (OECD/NEA) Benchmark Study of the Accident at the Fukushima Daiichi Nuclear Power Station (BSAF) Project. The goal of this project is to accurately capture the source term from all three releases and then model the atmospheric dispersion. In order to do this, a forensic approach is used in which available plant data and release timings inform the modeled MELCOR accident scenario. For example, containment failures, core slumping events, and lower head failure timings are all enforced parameters in these analyses. This approach is fundamentally different from the blind code assessment analysis often used in standard problem exercises. The timings of these events are informed by representative spikes or decreases in plant data. The combination of improvements to the MELCOR source code resulting from previous accident analyses and this forensic approach has allowed Sandia to generate representative and plausible source terms for all three accidents at Fukushima Daiichi out to three weeks after the accident, capturing both early and late releases. In particular, using the source terms developed with MELCOR as input to the MACCS code, which models atmospheric dispersion and deposition, we are able to reasonably capture the deposition of radionuclides to the northwest of the reactor site.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Rosa, Felice
2006-07-01
Within the framework of the Severe Accident Network of Excellence Project (SARNET), funded by the European Union 6th FISA (Fission Safety) Programme, one of the main tasks is the development and validation of the European Accident Source Term Evaluation Code (ASTEC). One of the reference codes used to compare ASTEC results, coming from experimental and reactor plant applications, is MELCOR. ENEA is a SARNET member and also an ASTEC and MELCOR user. During the first 18 months of this project, we performed a series of MELCOR and ASTEC calculations for a French PWR 900 MWe and for the accident sequence 'Loss of Steam Generator (SG) Feedwater' (known as the H2 sequence in the French classification). H2 is an accident sequence substantially equivalent to a station blackout scenario, like a TMLB accident, with the only difference that in the H2 sequence the scram is forced to occur with a delay of 28 seconds. The main events during the accident sequence are a loss of normal and auxiliary SG feedwater (0 s), followed by a scram when the water level in the SG is equal to or less than 0.7 m (after 28 seconds). There is also a main coolant pump trip when ΔTsat < 10 °C, a total opening of the three relief valves when Tric (maximum core outlet temperature) is above 603 K (330 °C), and accumulator isolation when the primary pressure goes below 1.5 MPa (15 bar). Among many other points, it is worth noting that this was the first time that a MELCOR 1.8.5 input deck was available for a French PWR 900. The main ENEA effort in this period was devoted to preparing the MELCOR input deck using code version 1.8.5 (build QZ, Oct 2000, with the latest patch 185003, Oct 2001). The input deck, completely new, was prepared taking into account the structure, data, and conditions found in the ASTEC input decks. The main goal of the work presented in this paper is to show where and when MELCOR provides good enough results and why, in some cases mainly related to its specific models (candling, corium pool behaviour, etc.), the results were less good. Future work will include preparation of an input deck for the new MELCOR 1.8.6 and a code-to-code comparison with ASTEC v1.2 rev. 1. (author)
Recent MELCOR and VICTORIA Fission Product Research at the NRC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bixler, N.E.; Cole, R.K.; Gauntt, R.O.
1999-01-21
The MELCOR and VICTORIA severe accident analysis codes, which were developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission, are designed to estimate fission product releases during nuclear reactor accidents in light water reactors. MELCOR is an integrated plant-assessment code that models the key phenomena in adequate detail for risk-assessment purposes. VICTORIA is a more specialized fission-product code that provides detailed modeling of chemical reactions and aerosol processes under the high-temperature conditions encountered in the reactor coolant system during a severe reactor accident. This paper focuses on recent enhancements and assessments of the two codes in the area of fission product chemistry modeling. Recently, a model for iodine chemistry in aqueous pools in the containment building was incorporated into the MELCOR code. The model calculates dissolution of iodine into the pool and releases of organic and inorganic iodine vapors from the pool into the containment atmosphere. The main purpose of this model is to evaluate the effect of long-term revolatilization of dissolved iodine. Inputs to the model include the dose rate in the pool, the amount of chloride-containing polymer, such as Hypalon, and the amount of buffering agents in the containment. Model predictions are compared against the Radioiodine Test Facility (RTF) experiments conducted by Atomic Energy of Canada Limited (AECL), specifically International Standard Problem 41. Improvements to VICTORIA's chemical reaction models were implemented as a result of recommendations from a peer review of VICTORIA that was completed last year. Specifically, an option is now included to model aerosols and deposited fission products as three condensed phases in addition to the original option of a single condensed phase. The three-condensed-phase model results in somewhat higher predicted fission product volatilities than does the single-condensed-phase model. Modeling of UO2 thermochemistry was also improved, resulting in better prediction of vaporization of uranium from fuel, which can react with released fission products to affect their volatility. This model also improves the prediction of fission product release rates from fuel. Finally, recent comparisons of MELCOR and VICTORIA with International Standard Problem 40 (STORM) data are presented. These comparisons focus on predicted thermophoretic deposition, which is the dominant deposition mechanism. Sensitivity studies were performed with the codes to examine experimental and modeling uncertainties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.
Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) of the Fukushima Daiichi Unit 1 (1F1) accident progression with the MELCOR code. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). That study focused on reconstructing the accident progression, as postulated by the limited plant data. This work focused on evaluating uncertainty in core damage progression behavior and its effect on key figures of merit (e.g., hydrogen production, reactor damage state, fraction of intact fuel, vessel lower head failure). The primary intent of this study was to characterize the range of predicted damage states in the 1F1 reactor considering state-of-knowledge uncertainties associated with MELCOR modeling of core damage progression, and to generate information that may be useful in informing the decommissioning activities that will be employed to defuel the damaged reactors at the Fukushima Daiichi Nuclear Power Plant. Additionally, core damage progression variability inherent in MELCOR modeling numerics is investigated.
Recent Updates to the MELCOR 1.8.2 Code for ITER Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merrill, Brad J
This report documents recent changes made to the MELCOR 1.8.2 computer code for application to the International Thermonuclear Experimental Reactor (ITER), as required by ITER Task Agreement ITA 81-18. There are four areas of change documented by this report. The first area is the addition to this code of a model for transporting HTO. The second area is the updating of the material oxidation correlations to match those specified in the ITER Safety Analysis Data List (SADL). The third area replaces a modification to an aerosol transport subroutine that specified the nominal aerosol density internally with one that now allows the user to specify this density through user input. The fourth area corrects an error that existed in an air condensation subroutine of previous versions of this modified MELCOR code. The appendices of this report contain FORTRAN listings of the coding for these modifications.
Insight from Fukushima Daiichi Unit 3 Investigations using MELCOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R.; Francis, Matthew W.; Ott, Larry J.
During the emergency response period of the accidents that took place at Fukushima Daiichi in March of 2011, researchers at Oak Ridge National Laboratory (ORNL) conducted a number of studies using the MELCOR code to help understand what was occurring and what had occurred. During the post-accident period, the Department of Energy (DOE) and the US Nuclear Regulatory Commission (NRC) jointly sponsored a study of the Fukushima Daiichi accident with collaboration among Oak Ridge, Sandia, and Idaho national laboratories. The purpose of the study was to compile relevant data, reconstruct the accident progression using computer codes, assess the codes' predictive capabilities, and identify future data needs. The current paper summarizes some of the early MELCOR simulations and analyses conducted at ORNL of the Fukushima Daiichi Unit 3 accident. Extended analysis and discussion of the Unit 3 accident is also presented, taking into account new knowledge and modeling refinements made since the joint DOE/NRC study.
MELCOR Analysis of OSU Multi-Application Small Light Water Reactor (MASLWR) Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, Dhongik S.; Jo, HangJin; Fu, Wen
2017-05-23
A multi-application small light water reactor (MASLWR) conceptual design was developed by Oregon State University (OSU) with emphasis on passive safety systems. The passive containment safety system employs condensation and natural circulation to achieve the necessary heat removal from the containment in case of postulated accidents. Containment condensation experiments at the MASLWR test facility at OSU are modeled and analyzed with MELCOR, a system-level reactor accident analysis computer code. The analysis assesses its ability to predict condensation heat transfer in the presence of noncondensable gas for accidents where high-energy steam is released into the containment. This work demonstrates MELCOR's ability to predict the pressure-temperature response of the scaled containment. Our analysis indicates that the heat removal rates are underestimated in the experiment due to the limited locations of the thermocouples and applies corrections to these measurements by conducting integral energy analyses along with CFD simulation for confirmation. Furthermore, the corrected heat removal rate measurements and the MELCOR predictions on the heat removal rate from the containment show good agreement with the experimental data.
Heat up and failure of BWR upper internals during a severe accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R.
2017-02-21
In boiling water reactors, the shroud dome, separators, and dryers above the core are made of approximately 100,000 kg of stainless steel. During a severe accident in which the coolant boils away and exothermic oxidation of zirconium occurs, gases (steam and hydrogen) are superheated in the core region and pass through the upper internals. In this scenario, the upper internals can also be heated by thermal radiation from the hot degrading core. Historically, models of the upper internals have been relatively simple in severe accident codes. The upper internals are typically modeled in MELCOR as two lumped volumes with simplified heat transfer characteristics and no structural integrity considerations, and with limited ability to oxidize, melt, and relocate. The potential for and the subsequent impact of the upper internals to heat up, oxidize, fail, and relocate during a severe accident was investigated. A higher fidelity representation of the shroud dome, steam separators, and steam dryers was developed in MELCOR v1.8.6 by extending the core region upwards. The MELCOR modeling effort entailed adding 45 additional core cells and control volumes, 98 flow paths, and numerous control functions. The model accounts for the mechanical loading and structural integrity, oxidation, melting, flow area blockage, and relocation of the various components. Consistent with a previous study, the results indicate that the upper internals can reach high temperatures during a severe accident, sufficient to lose their structural integrity and relocate. Finally, the additional 100 metric tons of stainless steel debris influences the subsequent in-vessel and ex-vessel accident progression.
The Fukushima Daiichi Accident Study Information Portal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shawn St. Germain; Curtis Smith; David Schwieder
This paper presents a description of the Fukushima Daiichi Accident Study Information Portal. The Information Portal was created by the Idaho National Laboratory as part of a joint NRC and DOE project to assess the severe accident modeling capability of the MELCOR analysis code. The Fukushima Daiichi Accident Study Information Portal was created to collect, store, retrieve, and validate information and data for use in reconstructing the Fukushima Daiichi accident. In addition to supporting the MELCOR simulations, the Portal will be the main DOE repository for all data, studies, and reports related to the accident at the Fukushima Daiichi nuclear power station. The data is stored in a secured (password-protected and encrypted) repository that is searchable and accessible to researchers at diverse locations.
Final Report on ITER Task Agreement 81-08
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard L. Moore
As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles
2014-03-01
A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities, such as hydrogen generation or release of fission products, under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter in results was observed when plotted against any of the uncertain parameters, with no parameter manifesting a dominant effect on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, in order to further reduce predicted hydrogen uncertainty, it would be necessary to reduce all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled, and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point-value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
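As a minimal illustration of the sampling step described above, the sketch below draws a 40-by-10 Latin Hypercube sample on the unit hypercube with plain NumPy. Mapping each column through its specified likelihood distribution and writing one MELCOR input deck per row is omitted; the function name and seed are hypothetical.

```python
# Minimal Latin Hypercube Sampling sketch (plain NumPy): draw 40 samples of 10
# uncertain parameters on the unit hypercube. Transforming each column through
# its likelihood distribution and generating one MELCOR deck per row is left
# out; this only illustrates the stratified sampling step itself.

import numpy as np

def latin_hypercube(n_samples: int, n_params: int, rng: np.random.Generator) -> np.ndarray:
    """Return an (n_samples, n_params) array of LHS points in [0, 1)."""
    # One uniform draw inside each of n_samples equal-width strata...
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))) / n_samples
    # ...then an independent random permutation of the strata for each parameter.
    for j in range(n_params):
        u[:, j] = rng.permutation(u[:, j])
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    samples = latin_hypercube(n_samples=40, n_params=10, rng=rng)
    print(samples.shape, samples.min(), samples.max())
```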
Study of steam condensation at sub-atmospheric pressure: setting a basic research using MELCOR code
NASA Astrophysics Data System (ADS)
Manfredini, A.; Mazzini, M.
2017-11-01
One of the most serious accidents that can occur in the experimental nuclear fusion reactor ITER is the break of one of the headers of the refrigeration system of the first wall of the Tokamak. This results in a water-steam mixture discharge into the vacuum vessel (VV), with consequent pressurization of this container. To prevent the pressure in the VV from exceeding 150 kPa absolute, a system discharges the steam into a suppression pool at an absolute pressure of 4.2 kPa. The computer codes used to analyze such an incident (e.g., RELAP5 or MELCOR) are not validated experimentally for such conditions. Therefore, we planned a basic research program in order to obtain experimental data useful for validating the heat transfer correlations used in these codes. After a thorough literature search on this topic, ACTA, in collaboration with the staff of ITER, defined the experimental matrix and performed the design of the experimental apparatus. For the thermal-hydraulic design of the experiments, we executed a series of calculations with MELCOR. This code, however, was used in an unconventional mode, with the development of models suited respectively to low and high steam flow-rate tests. The article concludes with a discussion of the placement of the experimental data within the map of the phenomenon characteristics, showing the importance of the new knowledge acquired, particularly in the case of chugging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denman, Matthew R.; Brooks, Dusty Marie
Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) of the Fukushima Daiichi Unit 1 (1F1) accident progression with the MELCOR code. Volume I of the 1F1 UA discusses the physical modeling details and time history results of the UA. Volume II of the 1F1 UA discusses the statistical viewpoint. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). The goal of this work was to perform a focused evaluation of uncertainty in core damage progression behavior and its effect on key figures of merit (e.g., hydrogen production, fraction of intact fuel, vessel lower head failure) and, in doing so, assess the applicability of traditional sensitivity analysis techniques.
Fukushima Daiichi Unit 1 Ex-Vessel Prediction: Core-Concrete Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R.; Farmer, Mitchell T.; Francis, Matthew W.
2016-10-31
Lower head failure and corium-concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for the analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, in this paper an analysis was carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially dependent melt conditions and extent of spreading during relocation from the vessel. The results of the MELTSPREAD analysis are reported in a companion paper. This information was used as input for the long-term debris coolability analysis with CORQUENCH. For the MELCOR-based melt pour scenario, CORQUENCH predicted the melt would readily cool within 2.5 h after the pour, and the sumps would experience limited ablation (approximately 18 cm) under water-flooded conditions. Finally, for the MAAP-based melt pour scenarios, CORQUENCH predicted that the melt would cool in approximately 22.5 h, and the sumps would experience approximately 65 cm of concrete ablation under water-flooded conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindgren, Eric Richard; Durbin, Samuel G
2007-04-01
The objective of this project was to provide basic thermal-hydraulic data associated with a spent fuel pool (SFP) complete loss-of-coolant accident. The accident conditions of interest for the SFP were simulated in a full-scale prototypic fashion (electrically heated, prototypic assemblies in a prototypic SFP rack) so that the experimental results closely represent actual fuel assembly responses. A major impetus for this work was to facilitate code validation (primarily MELCOR) and reduce questions associated with interpretation of the experimental results. It was necessary to simulate a cluster of assemblies to represent a higher-decay (younger) assembly surrounded by older, lower-power assemblies. Specifically, this program provided data and analysis confirming: (1) MELCOR modeling of inter-assembly radiant heat transfer, (2) flow resistance modeling and the natural convective flow induced in a fuel assembly as it heats up in air, (3) the potential for and nature of thermal transient (i.e., Zircaloy fire) propagation, and (4) mitigation strategies concerning fuel assembly management.
MELCOR Model of the Spent Fuel Pool of Fukushima Dai-ichi Unit 4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbajo, Juan J
2012-01-01
Unit 4 of the Fukushima Dai-ichi Nuclear Power Plant suffered a hydrogen explosion at 6:00 am on March 15, 2011, exactly 3.64 days after the earthquake hit the plant and the off-site power was lost. The earthquake occurred on March 11 at 2:47 pm. Since the reactor of this Unit 4 was defueled on November 29, 2010, and all its fuel was stored in the spent fuel pool (SFP4), it was first believed that the explosion was caused by hydrogen generated by the spent fuel, in particular, by the recently discharged core. The hypothetical scenario was: power was lost, coolingmore » to the SFP4 water was lost, pool water heated/boiled, water level decreased, fuel was uncovered, hot Zircaloy reacted with steam, hydrogen was generated and accumulated above the pool, and the explosion occurred. Recent analyses of the radioisotopes present in the water of the SFP4 and underwater video indicated that this scenario did not occur - the fuel in this pool was not damaged and was never uncovered the hydrogen of the explosion was apparently generated in Unit 3 and transported through exhaust ducts that shared the same chimney with Unit 4. This paper will try to answer the following questions: Could that hypothetical scenario in the SFP4 had occurred? Could the spent fuel in the SPF4 generate enough hydrogen to produce the explosion that occurred 3.64 days after the earthquake? Given the magnitude of the explosion, it was estimated that at least 150 kg of hydrogen had to be generated. As part of the investigations of this accident, MELCOR models of the SFP4 were prepared and a series of calculations were completed. The latest version of MELCOR, version 2.1 (Ref. 1), was employed in these calculations. The spent fuel pool option for BWR fuel was selected in MELCOR. The MELCOR model of the SFP4 consists of a total of 1535 fuel assemblies out of which 548 assemblies are from the core defueled on Nov. 29, 2010, 783 assemblies are older assemblies, and 204 are new/fresh assemblies. The total decay heat of the fuel in the pool was, at the time of the accident, 2.284 MWt, of which 1.872 MWt were from the 548 assemblies of the last core discharged and 0.412 MWt were from the older 783 assemblies. These decay heat values were calculated at Oak Ridge National Laboratory using the ORIGEN2.2 code (Ref. 2) - they agree with values reported elsewhere (Ref. 3). The pool dimensions are 9.9 m x 12.2 m x 11.8 m (height), and with the water level at 11.5 m, the pool volume is 1389 m3, of which only 1240 m3 is water, as some volume is taken by the fuel and by the fuel racks. The initial water temperature of the SFP4 was assumed to be 301 K. The fuel racks are made of an aluminum alloy but are modeled in MELCOR with stainless steel and B4C. MELCOR calculations were completed for different initial water levels: 11.5 m (pool almost full, water is only 0.3 m below the top rim), 4.4577 m (top of the racks), 4.2 m, and 4.026 m (top of the active fuel). A calculation was also completed for a rapid loss of water due to a leak at the bottom of the pool, with the fuel rapidly uncovered and oxidized in air. Results of these calculations are shown in the enclosed Table I. The calculation with the initial water level at 11.5 m (full pool) takes 11 days for the water to boil down to the top of the fuel racks, 11.5 days for the fuel to be uncovered, 14.65 days to generate 150 kg of hydrogen and 19 days for the pool to be completely dry. 
The calculation with the initial water level at 4.4577 m takes 1.1 days to uncover the fuel and 4.17 days to generate 150 kg of hydrogen. The calculation with the initial water level at 4.02 m takes 3.63 days to generate 150 kg of hydrogen; this is exactly the time when the actual explosion occurred in Unit 4. Finally, fuel oxidation in air, after the pool drained in 20 minutes, generates only 10 kg of hydrogen; this is because very little steam is available and Zircaloy (Zr) oxidation with the oxygen of the air does not generate hydrogen. MELCOR-calculated water levels and hydrogen generated in the SFP4 as a function of time for initial water levels of 4.457 m, 4.2 m and 4.02 m are shown in Figs. 1 and 2. Water levels increase at the beginning due to the expansion of the water during the heat-up from 301 K to 373 K. Boiling occurs after the water temperature reaches 373 K. The total amount of hydrogen generated is ~2000 kg; this amount includes hydrogen generated from Zr, which is the largest amount (~1580 kg), from stainless steel (~360 kg), and from B4C (~60 kg). In theory, it is possible to generate up to 3.4 kg of hydrogen per assembly (from oxidation of Zr in the fuel cladding and box), or a total of 4,525 kg from the hot 1331 assemblies stored in the SFP4. The hydrogen generated from oxidation of steel and B4C will be additional. So the answers to the questions are YES: according to these MELCOR calculations, enough hydrogen (150 kg) could be generated in the SFP4 3.64 days after the earthquake to produce ...
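The figures quoted in the preceding abstract can be checked for rough consistency with a simple adiabatic heat-up and boil-off estimate. The sketch below is an illustrative back-of-envelope calculation, not the MELCOR model; the water properties are assumed round numbers, and only the decay heat, water inventory, temperatures, and per-assembly hydrogen potential are taken from the abstract.

```python
# Back-of-envelope check of the SFP4 figures quoted above (decay heat,
# water inventory, temperatures, per-assembly hydrogen potential).  The
# water properties are assumed round numbers; this is not the MELCOR model.

CP_WATER = 4186.0         # J/(kg*K), liquid water specific heat (assumed)
H_FG = 2.26e6             # J/kg, latent heat of vaporization (assumed)
RHO_WATER = 1000.0        # kg/m^3 (assumed; density change on heat-up ignored)

decay_heat = 2.284e6      # W, total SFP4 decay heat at the time of the accident
water_volume = 1240.0     # m^3, water volume with the pool full
T0, T_SAT = 301.0, 373.0  # K, initial and saturation temperatures

m_water = RHO_WATER * water_volume
t_heatup = m_water * CP_WATER * (T_SAT - T0) / decay_heat   # s to reach boiling
t_boiloff = m_water * H_FG / decay_heat                     # s to boil it all away
print(f"heat-up to boiling : {t_heatup / 86400:.1f} days")
print(f"boil the full pool : {t_boiloff / 86400:.1f} days")

# Hydrogen potential quoted in the abstract: 3.4 kg H2 per hot assembly.
print(f"max H2 from 1331 hot assemblies: {3.4 * 1331:.0f} kg")
```

The simple estimate (roughly 2 days to reach boiling plus roughly 14 days to boil the full inventory) is of the same order as the 19-day dry-out time MELCOR computes for the full pool; the MELCOR calculation additionally tracks level swell, structure heat capacity, and heat losses.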
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects of each of the selected uncertain parameters on consequences, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions. (auth)
Fukushima Daiichi Unit 1 Ex-Vessel Prediction: Core Concrete Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R; Farmer, Mitchell; Francis, Matthew W
Lower head failure and corium-concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, an analysis was carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially dependent melt conditions and extent of spreading during relocation from the vessel. The results of the MELTSPREAD analysis are reported in a companion paper. This information was used as input for the long-term debris coolability analysis with CORQUENCH.
Fukushima Daiichi Unit 1 ex-vessel prediction: Core melt spreading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, M. T.; Robb, K. R.; Francis, M. W.
Lower head failure and corium-concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, an analysis has been carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially-dependent melt conditions and extent of spreading during relocation from the vessel. Lastly, this information was then used as input for the long-term debris coolability analysis with CORQUENCH that is reported in a companion paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Kyle W.; Gauntt, Randall O.; Cardoni, Jeffrey N.
2013-11-01
Data, a brief description of key boundary conditions, and results of Sandia National Laboratories’ ongoing MELCOR analysis of the Fukushima Unit 2 accident are given for the reactor core isolation cooling (RCIC) system. Important assumptions and related boundary conditions in the current analysis that are additional to, or different from, those assumed or imposed in the work of SAND2012-6173 are identified. This work is for the U.S. Department of Energy’s Nuclear Energy University Programs fiscal year 2014 Reactor Safety Technologies Research and Development Program RC-7: RCIC Performance under Severe Accident Conditions.
Heat up and potential failure of BWR upper internals during a severe accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R
2015-01-01
In boiling water reactors, the steam dome, steam separators, and dryers above the core comprise approximately 100 tons of stainless steel. During a severe accident in which the coolant boils away and exothermic oxidation of zirconium occurs, gases (steam and hydrogen) are superheated in the core region and pass through the upper internals. Historically, the upper internals have been modeled using severe accident codes with relatively simple approximations. The upper internals are typically modeled in MELCOR as two lumped volumes with simplified heat transfer characteristics, with no structural integrity considerations, and with limited ability to oxidize, melt, and relocate. The potential for the upper internals to heat up, oxidize, fail, and relocate during a severe accident, and the subsequent impact of such relocation, were investigated. A higher fidelity representation of the shroud dome, steam separators, and steam dryers was developed in MELCOR v1.8.6 by extending the core region upwards. This modeling effort entailed adding 45 additional core cells and control volumes, 98 flow paths, and numerous control functions. The model accounts for the mechanical loading and structural integrity, oxidation, melting, flow area blockage, and relocation of the various components. The results indicate that the upper internals can reach high temperatures during a severe accident; they are predicted to reach a high enough temperature such that they lose their structural integrity and relocate. The additional 100 tons of stainless steel debris influences the subsequent in-vessel and ex-vessel accident progression.
Analysis of unmitigated large break loss of coolant accidents using MELCOR code
NASA Astrophysics Data System (ADS)
Pescarini, M.; Mascari, F.; Mostacci, D.; De Rosa, F.; Lombardo, C.; Giannetti, F.
2017-11-01
In the framework of the severe accident research activity carried out by ENEA, a MELCOR nodalization of a generic 900 MWe Pressurized Water Reactor has been developed. The aim of this paper is to present the analysis of MELCOR code calculations concerning two independent unmitigated large break loss of coolant accident transients occurring in this type of reactor. In particular, the analysis and comparison of the transients initiated by an unmitigated double-ended cold leg rupture and an unmitigated double-ended hot leg rupture in loop 1 of the primary cooling system are presented herein. This activity has been performed focusing specifically on the in-vessel phenomenology that characterizes this kind of accident. The analysis of the thermal-hydraulic transient phenomena and the core degradation phenomena is therefore presented here. The analysis of the calculated data shows the capability of the code to reproduce the phenomena typical of these transients and permits their phenomenological study. A first sequence of main events is presented and shows that the cold leg break transient progresses faster than the hot leg break transient because of the position of the break. Further analyses are in progress to quantitatively assess the results of the code nodalization for accident management strategy definition and fission product source term evaluation.
Maljovec, D.; Liu, S.; Wang, B.; ...
2015-07-14
Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.
MELCOR Applications to SOARCA and Fukushima
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.
2014-03-01
This PowerPoint presentation was organized as follows: Background; Overview of the Fukushima Accidents; Comparisons of the SOARCA Study with the Fukushima Accidents; Equipment Functioning in Real-World Accidents; and Conclusions.
Incorporating uncertainty in RADTRAN 6.0 input files.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John
Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the distribution shape and the minimum and maximum of the distribution for each parameter, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provides installation instructions as well as a description and user guide for the uncertainty engine.
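A hedged sketch of the sampling workflow described above follows: uncertain parameters are assigned simple distributions, sampled, and written out one realization per line. The parameter names, distributions, and output format are notional placeholders; actual RADTRAN keyword syntax and the MELCOR Uncertainty Engine's file formats are not reproduced here.

```python
# Notional sketch of the sampling workflow: draw each uncertain parameter
# from a simple distribution and write one realization per line.  Parameter
# names and the output format are placeholders, not RADTRAN keyword syntax.
import random

rng = random.Random(1)
PARAMS = {                              # name: (distribution, low, high)
    "RELEASE_FRACTION": ("uniform",    1.0e-4, 1.0e-2),
    "DEPOSITION_VEL":   ("triangular", 1.0e-3, 1.0e-2),
}

def sample(dist, low, high):
    """Draw one value from the named distribution."""
    if dist == "uniform":
        return rng.uniform(low, high)
    return rng.triangular(low, high)    # mode defaults to the midpoint

with open("radtran_batch_demo.txt", "w") as f:
    for i in range(10):                 # ten sampled realizations
        values = {name: sample(*spec) for name, spec in PARAMS.items()}
        f.write(f"CASE {i:03d} " +
                " ".join(f"{k}={v:.3e}" for k, v in values.items()) + "\n")
print("wrote 10 sampled cases to radtran_batch_demo.txt")
```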
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
NASA Astrophysics Data System (ADS)
Ott, L. J.; Robb, K. R.; Wang, D.
2015-06-01
In Section 5.2, certain material properties for "FeCrAl oxide" were not modeled based on "stainless steel oxide" as indicated in the text. Instead, the "FeCrAl oxide" material properties were modeled using the default properties in MELCOR for "zirconium oxide". The properties affected are the FeCrAl oxide density, specific heat, enthalpy, thermal conductivity, melting point, and latent heat of fusion. Table 5.1 and Figs. 5.1a-d from Section 5.2 have been corrected below. As discussed below, the overall conclusions of the paper remain unchanged.
Modeling condensation with a noncondensable gas for mixed convection flow
NASA Astrophysics Data System (ADS)
Liao, Yehong
2007-05-01
This research theoretically developed a novel mixed convection model for condensation with a noncondensable gas. The model developed herein comprises three components: a convection regime map, a mixed convection correlation, and a generalized diffusion layer model. These components were developed in a way to be consistent with the three-level methodology in MELCOR. The overall mixed convection model was implemented into MELCOR and satisfactorily validated with data covering a wide variety of test conditions. In the development of the convection regime map, two analyses with approximations of the local similarity method were performed to solve the multi-component two-phase boundary layer equations. The first analysis studied effects of the bulk velocity on a basic natural convection condensation process and set up conditions to distinguish natural convection from mixed convection. It was found that the superimposed velocity increases condensation heat transfer by sweeping away the noncondensable gas accumulated at the condensation boundary. The second analysis studied effects of the buoyancy force on a basic forced convection condensation process and set up conditions to distinguish forced convection from mixed convection. It was found that the superimposed buoyancy force increases condensation heat transfer by thinning the liquid film thickness and creating a steeper noncondensable gas concentration profile near the condensation interface. In the development of the mixed convection correlation accounting for suction effects, numerical data were obtained from boundary layer analysis for the three convection regimes and used to fit a curve for the Nusselt number of the mixed convection regime as a function of the Nusselt numbers of the natural and forced convection regimes. In the development of the generalized diffusion layer model, the driving potential for mass transfer was expressed as the temperature difference between the bulk and the liquid-gas interface using the Clausius-Clapeyron equation. The model was developed on a mass basis instead of a molar basis to be consistent with general conservation equations. It was found that vapor diffusion is driven not only by a gradient of the molar fraction but also by a gradient of the mixture molecular weight across the diffusion layer.
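The abstract does not give the fitted form of the mixed convection correlation, but a common way to combine the two limiting regimes is a power-law blend of the forced- and natural-convection Nusselt numbers. The sketch below uses that generic blend with an assumed exponent purely to illustrate the idea; it is not the correlation developed in the dissertation.

```python
# Illustrative power-law blend of forced- and natural-convection Nusselt
# numbers.  The blending form and the exponent n = 3 are assumptions used
# only to show the idea; the dissertation fits its own curve from
# boundary-layer data.

def nu_mixed(nu_forced: float, nu_natural: float, n: float = 3.0) -> float:
    """Mixed-convection Nusselt number; recovers either limit when the
    other contribution is negligible."""
    return (nu_forced**n + nu_natural**n) ** (1.0 / n)

# Example where both regimes contribute comparably.
print(f"Nu_mixed = {nu_mixed(nu_forced=40.0, nu_natural=30.0):.1f}")
```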
Comparison of different methods used in integral codes to model coagulation of aerosols
NASA Astrophysics Data System (ADS)
Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.
2013-09-01
The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
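The "coarseness" criterion mentioned above (a volume ratio of two or more between neighboring size fractions) maps directly onto how many sections a geometric size grid needs to cover a given diameter range. The helper below, with an assumed 0.01-10 micrometer range, makes that bin-count trade-off concrete; it is not code from any of the compared modules.

```python
# How the sectional-grid "coarseness" (volume ratio between neighboring
# bins) controls the number of size sections needed.  The 0.01-10 micron
# diameter range is an assumed example.
import math

def sectional_grid(d_min_um, d_max_um, volume_ratio):
    """Bin-boundary diameters (micrometres) for a geometric grid in which
    each section holds `volume_ratio` times the volume of the previous one
    (diameter ratio = volume_ratio ** (1/3))."""
    dia_ratio = volume_ratio ** (1.0 / 3.0)
    n = math.ceil(math.log(d_max_um / d_min_um) / math.log(dia_ratio))
    return [d_min_um * dia_ratio**i for i in range(n + 1)]

for ratio in (2.0, 1.5, 1.1):            # coarse spectrum -> fine spectrum
    bins = sectional_grid(0.01, 10.0, ratio)
    print(f"volume ratio {ratio}: {len(bins) - 1} sections")
```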
Severe Accident Scoping Simulations of Accident Tolerant Fuel Concepts for BWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R.
2015-08-01
Accident-tolerant fuels (ATFs) are fuels and/or cladding that, in comparison with the standard uranium dioxide-Zircaloy system, can tolerate loss of active cooling in the core for a considerably longer time period while maintaining or improving the fuel performance during normal operations [1]. It is important to note that the currently used uranium dioxide-Zircaloy fuel system tolerates design basis accidents (and anticipated operational occurrences and normal operation) as prescribed by the US Nuclear Regulatory Commission. Previously, preliminary simulations of the plant response have been performed under a range of accident scenarios using various ATF cladding concepts and fully ceramic microencapsulated fuel. Design basis loss of coolant accidents (LOCAs) and station blackout (SBO) severe accidents were analyzed at Oak Ridge National Laboratory (ORNL) for boiling water reactors (BWRs) [2]. Researchers have investigated the effects of thermal conductivity on design basis accidents [3], investigated silicon carbide (SiC) cladding [4], and examined the effects of ATF concepts on the late stage accident progression [5]. These preliminary analyses were performed to provide initial insight into the possible improvements that ATF concepts could provide and to identify issues with respect to modeling ATF concepts. More recently, preliminary analyses for a range of ATF concepts have been evaluated internationally for LOCA and severe accident scenarios for the Chinese CPR1000 [6] and the South Korean OPR-1000 [7] pressurized water reactors (PWRs). In addition to these scoping studies, a common methodology and set of performance metrics were developed to compare and support prioritizing ATF concepts [8]. A proposed ATF concept is based on iron-chromium-aluminum alloys (FeCrAl) [9]. With respect to enhancing accident tolerance, FeCrAl alloys have substantially slower oxidation kinetics compared to the zirconium alloys typically employed. During a severe accident, FeCrAl would tend to generate heat and hydrogen from oxidation at a slower rate compared to the zirconium-based alloys in use today. The previous study [2] of the FeCrAl ATF concept during SBO severe accident scenarios in BWRs was based on simulating short term SBO (STSBO), long term SBO (LTSBO), and modified SBO scenarios occurring in a BWR-4 reactor with MARK-I containment. The analysis indicated that FeCrAl had the potential to delay the onset of fuel failure by a few hours depending on the scenario, and it could delay lower head failure by several hours. The analysis demonstrated reduced in-vessel hydrogen production. However, the work was preliminary and was based on limited knowledge of material properties for FeCrAl. Limitations of the MELCOR code were identified for direct use in modeling ATF concepts. This effort used an older version of MELCOR (1.8.5). Since these analyses, the BWR model has been updated for use in MELCOR 1.8.6 [10], and more representative material properties for FeCrAl have been modeled. Sections 2 through 4 present updated analyses for the FeCrAl ATF concept response during severe accidents in a BWR. The purpose of the study is to estimate the potential gains afforded by the FeCrAl ATF concept during BWR SBO scenarios.
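The qualitative effect described above, slower FeCrAl oxidation kinetics translating into slower heat and hydrogen generation, can be illustrated with a parabolic (Arrhenius-limited) oxidation law. The rate constants in the sketch are placeholders chosen only to make FeCrAl markedly slower than Zr at a representative temperature; they are not the validated correlations used in MELCOR or in the referenced study.

```python
# Qualitative comparison of parabolic (Arrhenius-limited) steam-oxidation
# kinetics, w^2 = K(T) * t.  The pre-factors and activation temperatures are
# illustrative placeholders only, chosen so that FeCrAl oxidizes much more
# slowly than Zr; they are not validated correlations.
import math

def weight_gain(prefactor, activation_K, temp_K, time_s):
    """Parabolic oxide weight gain (arbitrary units) after time_s at temp_K."""
    K = prefactor * math.exp(-activation_K / temp_K)
    return math.sqrt(K * time_s)

T = 1500.0    # K, representative cladding temperature during core uncovery
t = 3600.0    # s, one hour of steam exposure
zr     = weight_gain(prefactor=1.0e3, activation_K=2.0e4, temp_K=T, time_s=t)
fecral = weight_gain(prefactor=1.0e3, activation_K=2.6e4, temp_K=T, time_s=t)
print(f"relative oxidation (Zr / FeCrAl) at {T:.0f} K: {zr / fecral:.1f}x")
# Slower oxidation means proportionally slower oxidation heating and
# hydrogen generation at the same temperature.
```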
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoni, Jeffrey N.; Kalinich, Donald A.
2014-02-01
Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi Unit 1 (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). However, that study only examined a handful of various model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
Stand-alone containment analysis of Phébus FPT tests with ASTEC and MELCOR codes: the FPT-2 test.
Gonfiotti, Bruno; Paci, Sandro
2018-03-01
During the last 40 years, many studies have been carried out to investigate the different phenomena occurring during a Severe Accident (SA) in a Nuclear Power Plant (NPP). Such efforts have been supported by the execution of different experimental campaigns, and the integral Phébus FP tests were probably some of the most important experiments in this field. In these tests, the degradation of a Pressurized Water Reactor (PWR) fuel bundle was investigated employing different control rod materials and burn-up levels in strongly or weakly oxidizing conditions. From the findings of these and previous tests, numerical codes such as ASTEC and MELCOR have been developed to analyze the evolution of a SA in real NPPs. After the termination of the Phébus FP campaign, these two codes have been further improved to implement the more recent findings coming from different experimental campaigns. Therefore, continuous verification and validation is still necessary to check that the new improvements introduced in such codes also allow a better prediction of these Phébus tests. The aim of the present work is to re-analyze the Phébus FPT-2 test employing the updated ASTEC and MELCOR code versions. The analysis focuses on the stand-alone containment aspects of this test, and three different spatial nodalizations of the containment vessel (CV) have been developed. The paper summarizes the main thermal-hydraulic results and presents different sensitivity analyses carried out on the aerosols and fission products (FP) behavior. When possible, a comparison among the results obtained during this work and by different authors in previous work is also performed. This paper is part of a series of publications covering the four Phébus FP tests using a PWR fuel bundle: FPT-0, FPT-1, FPT-2, and FPT-3, excluding FPT-4, which is related to the study of the release of low-volatility FP and transuranic elements from a debris bed and a pool of melted fuel.
The assessment of low probability containment failure modes using dynamic PRA
NASA Astrophysics Data System (ADS)
Brunett, Acacia Joann
Although low probability containment failure modes in nuclear power plants may lead to large releases of radioactive material, these modes are typically crudely modeled in system level codes and have large associated uncertainties. Conventional risk assessment techniques (i.e. the fault-tree/event-tree methodology) are capable of accounting for these failure modes to some degree, however, they require the analyst to pre-specify the ordering of events, which can vary within the range of uncertainty of the phenomena. More recently, dynamic probabilistic risk assessment (DPRA) techniques have been developed which remove the dependency on the analyst. Through DPRA, it is now possible to perform a mechanistic and consistent analysis of low probability phenomena, with the timing of the possible events determined by the computational model simulating the reactor behavior. The purpose of this work is to utilize DPRA tools to assess low probability containment failure modes and the driving mechanisms. Particular focus is given to the risk-dominant containment failure modes considered in NUREG-1150, which has long been the standard for PRA techniques. More specifically, this work focuses on the low probability phenomena occurring during a station blackout (SBO) with late power recovery in the Zion Nuclear Power Plant, a Westinghouse pressurized water reactor (PWR). Subsequent to the major risk study performed in NUREG-1150, significant experimentation and modeling regarding the mechanisms driving containment failure modes have been performed. In light of this improved understanding, NUREG-1150 containment failure modes are reviewed in this work using the current state of knowledge. For some unresolved mechanisms, such as containment loading from high pressure melt ejection and combustion events, additional analyses are performed using the accident simulation tool MELCOR to explore the bounding containment loads for realistic scenarios. A dynamic treatment in the characterization of combustible gas ignition is also presented in this work. In most risk studies, combustion is treated simplistically in that it is assumed an ignition occurs if the gas mixture achieves a concentration favorable for ignition under the premise that an adequate ignition source is available. However, the criteria affecting ignition (such as the magnitude, location and frequency of the ignition sources) are complicated. This work demonstrates a technique for characterizing the properties of an ignition source to determine a probability of ignition. The ignition model developed in this work and implemented within a dynamic framework is utilized to analyze the implications and risk significance of late combustion events. This work also explores the feasibility of using dynamic event trees (DETs) with a deterministic sampling approach to analyze low probability phenomena. The flexibility of this approach is demonstrated through the rediscretization of containment fragility curves used in construction of the DET to show convergence to a true solution. Such a rediscretization also reduces the computational burden introduced through extremely fine fragility curve discretization by subsequent refinement of fragility curve regions of interest. Another advantage of the approach is the ability to perform sensitivity studies on the cumulative distribution functions (CDFs) used to determine branching probabilities without the need for rerunning the simulation code. 
Through review of the NUREG-1150 containment failure modes using the current state of knowledge, it is found that some failure modes, such as Alpha and rocket, can be excluded from further studies; other failure modes, such as failure to isolate, bypass, high pressure melt ejection (HPME), combustion-induced failure and overpressurization, are still concerns to varying degrees. As part of this analysis, scoping studies performed in MELCOR show that HPME and the resulting direct containment heating (DCH) do not pose a significant threat to containment integrity. Additional scoping studies regarding the effect of recovery actions on in-vessel hydrogen generation show that reflooding a partially degraded core does not significantly affect hydrogen generation in-vessel, and the NUREG-1150 assumption that insufficient hydrogen is generated in-vessel to produce an energetic deflagration is confirmed. The DET analyses performed in this work show that very late power recovery produces the potential for very energetic combustion events which are capable of failing containment with a non-negligible probability, and that containment cooling systems have a significant impact on core concrete attack, and therefore on combustible gas generation ex-vessel. Ultimately, the overall risk of combustion-induced containment failure is low, but its conditional likelihood can have a significant effect on accident mitigation strategies. It is also shown in this work that DETs are particularly well suited to examine low probability events because of their ability to rediscretize CDFs and observe solution convergence.
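One simple way to realize the dynamic ignition treatment described in this record is to model ignition sources as random events with a characteristic frequency, so the probability of ignition accumulates only while the mixture remains flammable. The Poisson-type form, the source frequency, and the timing in the sketch below are assumptions for illustration, not the calibrated ignition model of the thesis.

```python
# Assumed Poisson-type ignition-source model: while the mixture is
# flammable, an ignition source with a given frequency may fire in each
# time step.  Frequency and timing below are illustrative only.
import math
import random

def ignition_occurs(flammable, source_freq_per_hr, dt_hr, rng):
    """Sample whether an ignition source fires during one time step."""
    if not flammable:
        return False
    p_ignite = 1.0 - math.exp(-source_freq_per_hr * dt_hr)
    return rng.random() < p_ignite

rng = random.Random(42)
t_ignition = None
for step in range(200):                  # 0.1-h steps, 20 h of transient
    flammable = step >= 100              # mixture becomes flammable at 10 h
    if ignition_occurs(flammable, source_freq_per_hr=0.5, dt_hr=0.1, rng=rng):
        t_ignition = step * 0.1
        break
print(f"sampled ignition time: {t_ignition} h")
```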
Air ingression calculations for selected plant transients using MELCOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kmetyk, L.N.
1994-01-01
Two sets of MELCOR calculations have been completed studying the effects of air ingression on the consequences of various severe accident scenarios. One set of calculations analyzed a station blackout with surge line failure prior to vessel breach, starting from nominal operating conditions; the other set of calculations analyzed a station blackout occurring during shutdown (refueling) conditions. Both sets of analyses were for the Surry plant, a three-loop Westinghouse PWR. For both accident scenarios, a basecase calculation was done, and then repeated with air ingression from containment into the core region following core degradation and vessel failure. In addition to the two sets of analyses done for this program, a similar air-ingression sensitivity study was done as part of a low-power/shutdown PRA, with results summarized here; that PRA study also analyzed a station blackout occurring during shutdown (refueling) conditions, but for the Grand Gulf plant, a BWR/6 with Mark III containment. These studies help quantify the amount of air that would have to enter the core region to have a significant impact on the severe accident scenario, and demonstrate that one effect of air ingression is substantial enhancement of ruthenium release. These calculations also show that, while the core clad temperatures rise more quickly due to oxidation with air rather than steam, the core also degrades and relocates more quickly, so that no sustained, enhanced core heatup is predicted to occur with air ingression.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Goldmann, Andrew; Kalinich, Donald A.
2016-12-01
In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup in contrast with low burnup on fission product releases to the containment. Supporting this emphasis, recent data available on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) is greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment. With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both HBU and LBU predictions than exist between HBU and LBU analyses. Additionally, current analyses suggest that the NUREG-1465 release fractions are conservative by about a factor of 2 in terms of release fractions and that release durations for in-vessel and late in-vessel release periods are in fact longer than the NUREG-1465 durations. It is currently planned that a subsequent report will further characterize these results using more refined statistical methods, permitting a more precise reformulation of the NUREG-1465 alternative source term for both LBU and HBU fuels, with the most important finding being that the NUREG-1465 formula appears to embody significant conservatism compared to current best-estimate analyses. ACKNOWLEDGEMENTS This work was supported by the United States Nuclear Regulatory Commission, Office of Nuclear Regulatory Research. The authors would like to thank Dr. Ian Gauld and Dr. Germina Ilas, of Oak Ridge National Laboratory, for their contributions to this work. In addition to development of core fission product inventory and decay heat information for use in MELCOR models, their insights related to fuel management practices and resulting effects on spatial distribution of fission products in the core were instrumental in completion of our work.
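The "approximate order statistics" treatment mentioned above amounts to ranking the release fractions from the individual sequence calculations and reading off empirical percentiles for comparison with a fixed regulatory value. The sample release fractions and the reference figure in the sketch below are placeholders, not results from the report.

```python
# Rank a small sample of per-sequence containment release fractions and read
# off empirical percentiles for comparison with a fixed reference value.
# Both the sample values and the reference fraction are placeholders.

def percentile(sorted_vals, q):
    """Nearest-rank percentile (q in [0, 1]) of an ascending-sorted sample."""
    idx = min(len(sorted_vals) - 1, max(0, round(q * (len(sorted_vals) - 1))))
    return sorted_vals[idx]

cs_release = sorted([0.12, 0.18, 0.09, 0.22, 0.15, 0.11, 0.19])
for q in (0.50, 0.95):
    print(f"{int(q * 100)}th percentile Cs release fraction: "
          f"{percentile(cs_release, q):.2f}")
print("reference (placeholder) regulatory fraction: 0.30")
```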
Analysis of typical WWER-1000 severe accident scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorokin, Yu.S.; Shchekoldin, V.V.; Borisov, L.N.
2004-07-01
At present, EDO 'Gidropress' has accumulated experience in performing severe accident analyses for WWER reactor plants using both domestic and foreign codes. Important data were also obtained from calculational modeling of integral experiments involving the melting of fuel assemblies containing real fuel. Systematizing and accounting for these data in the development and adoption of codes are extremely important given the large uncertainties that still exist in the understanding and adequate description of severe accident phenomenology. The present report compares severe accident analysis results for a WWER-1000 reactor plant for two typical scenarios, obtained with the American MELCOR code and the Russian RATEG/SVECHA/HEFEST code. Results of calculational modeling with these codes are also compared against data from the FPT1 experiment, involving the melting of a fuel assembly containing real fuel, carried out at the Phebus facility (France). The obtained results are considered in the report from the viewpoint of: the adequacy of the calculational modeling of individual phenomena during severe accidents of reactor plants with WWER using the above codes; the influence of uncertainties (level of detail of the calculation models, choice of model parameters, etc.); the choice of particular setup variables (options) in the codes used; and the necessity of detailed modeling of processes and phenomena as applied to the design justification of safety of reactor plants with WWER. (authors)
Import Manipulate Plot RELAP5/MOD3 Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, K. R.
1999-10-05
XMGR5 was derived from an XY plotting tool called ACE/gr, which is copyrighted by Paul J. Turner and in the public domain. The interactive version of ACE/gr is xmgr, and it includes a graphical interface to the X-windows system. Enhancements to xmgr have been developed which import, manipulate, and plot data from the RELAP5/MOD3, MELCOR, FRAPCON, and SINDA codes, and from NRC databank files. Capabilities include two-phase property table lookup functions, an equation interpreter, arithmetic library functions, and units conversion. Plot titles, labels, legends, and narrative can be displayed using Latin or Cyrillic alphabets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Nathan; Faucett, Christopher; Haskin, Troy Christopher
Following the conclusion of the first phase of the crosswalk analysis, one of the key unanswered questions was whether or not the deviations found would persist during a partially recovered accident scenario, similar to the one that occurred in TMI-2. In particular, this analysis aims to compare the impact of core degradation morphology on the quenching models inherent within the two codes and the coolability of debris during partially recovered accidents. A primary motivation for this study is the development of insights into how uncertainties in core damage progression models impact the ability to assess the potential for recovery of a degraded core. These quench and core recovery models are of the most interest when there is a significant amount of core damage, but intact and degraded fuel still remain in the core region or the lower plenum. Accordingly, this analysis presents a spectrum of partially recovered accident scenarios by varying both water injection timing and rate to highlight the impact of core degradation phenomena on recovered accident scenarios. This analysis uses the newly released MELCOR 2.2 rev. 9665 and MAAP5, Version 5.04. These code versions incorporate a significant number of modifications that have been driven by analyses and forensic evidence obtained from the Fukushima Daiichi reactor site.
NASA Astrophysics Data System (ADS)
Artnak, Edward Joseph, III
This work seeks to illustrate the potential benefits afforded by implementing aspects of fluid dynamics, especially the latest computational fluid dynamics (CFD) modeling approach, through numerical experimentation and the traditional discipline of physical experimentation to improve the calibration of the severe reactor accident analysis code, MELCOR, in one of several spent fuel pool (SFP) complete loss-of-coolant accident (LOCA) scenarios. While the scope of experimental work performed by Sandia National Laboratories (SNL) extends well beyond what could reasonably be addressed with the resources and computational time allotted under the initial project allocations to complete the report, the simulated case trials produced a significant array of supplementary high-fidelity solutions and hydraulic flow-field data in support of SNL research objectives. Results contained herein show FLUENT CFD model representations of a 9x9 BWR fuel assembly in conditions corresponding to a complete loss-of-coolant accident scenario. In addition to the CFD model developments, a MATLAB-based control-volume model was constructed to independently assess the 9x9 BWR fuel assembly under similar accident scenarios. The data produced from this work show that FLUENT CFD models are capable of resolving complex flow fields within a BWR fuel assembly in the realm of buoyancy-induced mass flow rates and that characteristic hydraulic parameters from such CFD simulations (or physical experiments) are reasonably employed in corresponding constitutive correlations for developing simplified numerical models of comparable solution accuracy.
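A simplified control-volume treatment of the buoyancy-induced flows mentioned above typically balances the buoyancy head over the heated assembly against a lumped hydraulic loss. The sketch below shows that balance; every numerical value (density, expansivity, temperature rise, geometry, loss coefficient) is an assumed placeholder, not data from the SNL experiments or the FLUENT models.

```python
# Buoyancy head over the heated assembly balanced against a lumped form
# loss: rho*g*beta*dT*H = K * mdot^2 / (2 * rho * A^2).  All numbers are
# assumed placeholders.
import math

def buoyant_mass_flow(rho, beta, dT, height, area, loss_coeff, g=9.81):
    """Natural-circulation mass flow (kg/s) through one assembly."""
    return area * math.sqrt(2.0 * rho**2 * g * beta * dT * height / loss_coeff)

mdot = buoyant_mass_flow(rho=0.7,        # kg/m^3, hot steam/air mixture
                         beta=1 / 500.0, # 1/K, ideal-gas expansivity near 500 K
                         dT=200.0,       # K, assembly-average heat-up
                         height=4.0,     # m, heated/chimney height
                         area=1.0e-2,    # m^2, assembly flow area
                         loss_coeff=20.0)
print(f"estimated buoyancy-driven mass flow: {mdot * 1e3:.1f} g/s")
```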
Integral Reactor Containment Condensation Model and Experimental Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qiao; Corradini, Michael
This NEUP-funded project, NEUP 12-3630, is for experimental, numerical and analytical studies on high-pressure steam condensation phenomena in a steel containment vessel connected to a water cooling tank, carried out at Oregon State University (OrSU) and the University of Wisconsin at Madison (UW-Madison). Over the three years of the investigation, following the original proposal, the planned tasks have been completed: (1) Performed a scaling study for the full pressure test facility applicable to the reference design for the condensation heat transfer process during design basis accidents (DBAs), modified the existing test facility to route the steady-state secondary steam flow into the high pressure containment for controllable condensation tests, and extended the operations at negative gage pressure conditions (OrSU). (2) Conducted a series of DBA and quasi-steady experiments using the full pressure test facility to provide a reliable high pressure condensation database (OrSU). (3) Analyzed experimental data and evaluated the condensation model for the experimental conditions, and predicted the prototypic containment performance under accident conditions (UW-Madison). A film flow model was developed for the scaling analysis, and the results suggest that the 1/3 scaled test facility covers a large portion of the laminar film flow regime, leading to a lower average heat transfer coefficient compared to the prototypic value. Although this is conservative in reactor safety analysis, the significant reduction of the heat transfer coefficient (50%) could underestimate the prototypic condensation heat transfer rate, resulting in inaccurate prediction of the decay heat removal capability. Further investigation is thus needed to quantify the scaling distortion for safety analysis code validation. Experimental investigations were performed in the existing MASLWR test facility at OrSU with minor modifications. A total of 13 containment condensation tests were conducted for pressures ranging from 4 to 21 bar with three different static inventories of non-condensable gas. Condensation and heat transfer rates were evaluated employing several methods, notably from measured temperature gradients in the heat transfer plate (HTP) as well as measured condensate formation rates. A detailed mass and energy accounting was used to assess the various measurement methods and to support simplifying assumptions required for the analysis. Condensation heat fluxes and heat transfer coefficients are calculated and presented as a function of pressure to satisfy the objectives of this investigation. The major conclusions for those tests are summarized below: (1) In the steam blow-down tests, the initial condensation heat transfer process involves the heating-up of the containment heat transfer plate. An inverse heat conduction model was developed to capture the rapid transient heat transfer characteristics, and the analysis method is applicable to SMR safety analysis. (2) The average condensation heat transfer coefficients for different pressure conditions and non-condensable gas mass fractions were obtained from the integral test facility, through measurements of the heat conduction rate across the containment heat transfer plate, and from measurements of the water condensation rate based on the total energy balance equation. (3) The test results using the measured HTP wall temperatures are considerably lower than popular condensation models would predict, mainly due to the side wall conduction effects in the existing MASLWR integral test facility.
The data revealed the detailed heat transfer characteristics of the model containment, important to the SMR safety analysis and the validation of the associated evaluation model. However, this approach, unlike separate effect tests, cannot isolate the condensation heat transfer coefficient over the containment wall, and therefore is not suitable for the assessment of the condensation heat transfer coefficient against system pressure and noncondensable gas mass fraction. (4) The average condensation heat transfer coefficients measured from the water condensation rates through energy balance analysis are appropriate, though with considerable uncertainties due to the heat loss and temperature distribution on the containment wall. With consideration of the side wall conduction effects, the results indicate that the measured heat transfer coefficients in the tests are about 20% lower than the prediction of Dehbi’s correlation, mainly due to the side wall conduction effects. The investigation also indicates an increase in the condensation heat transfer coefficient at high containment pressure conditions, but the uncertainties invoked with this method appear to be substantial. (5) Non-condensable gas in the tests has little effect on the condensation heat transfer at the high elevation measurement ports. It does affect the bottom measurements near the water level position. The results suggest that the heavier non-condensable gas accumulates in the lower portion of the containment due to stratification in the narrow containment space. The overall effects of the non-condensable gas on the heat transfer process should thus be negligible for tall containments with narrow condensation spaces in most SMR designs. Therefore, the previous correlations with noncondensable gas effects are not appropriate for those small SMR containments due to the very poor mixing of steam and non-condensable gas. The MELCOR simulation results agree with the experimental data reasonably well. However, it is observed that MELCOR overpredicts the heat flux for all analyzed tests. MELCOR predicts that the heat fluxes for the containment condensation tests range from approximately 30 to 45 kW/m2, whereas the experimental data (averaged) range from about 25 to 40 kW/m2. This may be due to the limited availability of liquid film models included in MELCOR. Also, it is believed that, due to the complex test geometry, measured temperature gradients across the heat transfer plate may have been underestimated and thus the heat flux may have been underestimated. The MELCOR model predicts a film thickness on the order of 100 microns, which agrees very well with the film flow model developed in this study for scaling analysis. However, the expected differences in film thicknesses for near vacuum and near atmospheric test conditions are not significant. Further study on the behavior of the condensate film is expected to refine the simulation results. Possible refinements include, but are not limited to, the following: CFD simulation focusing on the liquid film behavior and benchmarking with experimental analyses for simpler geometries.
The experimental results are employed to validate the containment condensation model in the reactor containment system safety analysis code for integral SMRs; such a condensation model is important for demonstrating adequate cooling. The results are applicable to integral Small Modular Reactor (SMR) designs, including NuScale, mPower, Westinghouse SMR, Holtec-160, and other integral reactors with small containments at relatively high pressures under accident conditions. Testing was conducted at the OrSU laboratory in the existing MASLWR (Multi-Application Small Light Water Reactor) integral test facility sponsored by the US Department of Energy. Its high-pressure stainless steel containment model (~2 MPa) is scaled to the NuScale SMR currently under development at NuScale Power, Inc. Minor modifications to the model containment were made to control the non-condensable gas fraction and to utilize the stable secondary-loop steam flow for condensation testing. UW-Madison developed a containment condensation model that leveraged previously validated containment heat transfer work at UW-Madison and extended the range of applicability of the model to integral SMR designs that utilize containment vessels of high heat transfer efficiency. In the final report, the research background and literature survey are presented in Chapters 2 and 3, respectively; the test facility description and modifications are summarized in Chapter 4; the scaling analysis is introduced in Chapter 5; the test descriptions, procedures, and data analysis are presented in Chapter 6; and the numerical modeling is presented in Chapter 7, followed by conclusions in Chapter 8.
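The two data-reduction routes described in the preceding record, heat flux from the temperature difference measured across the heat transfer plate and heat flux from the measured condensate formation rate, can be sketched as follows. The plate conductivity, latent heat, thickness, and operating point used here are assumed round numbers chosen only so the outputs fall near the 25-45 kW/m2 range quoted above; they are not the test data.

```python
# Two ways of reducing the containment condensation test data: (1) heat flux
# from the temperature difference across the heat transfer plate (Fourier
# conduction), and (2) heat flux from the condensate collection rate (energy
# balance).  All numerical values are illustrative, not test data.

H_FG = 2.0e6      # J/kg, latent heat at elevated containment pressure (assumed)
K_PLATE = 16.0    # W/(m*K), stainless heat-transfer-plate conductivity (assumed)

def q_from_plate(dT_plate, thickness):
    """Conduction heat flux through the plate, W/m^2."""
    return K_PLATE * dT_plate / thickness

def q_from_condensate(mdot_kg_s, area_m2):
    """Heat flux inferred from the condensate formation rate, W/m^2."""
    return mdot_kg_s * H_FG / area_m2

def htc(q, T_steam, T_wall):
    """Condensation heat transfer coefficient, W/(m^2*K)."""
    return q / (T_steam - T_wall)

q1 = q_from_plate(dT_plate=50.0, thickness=0.0254)
q2 = q_from_condensate(mdot_kg_s=0.015, area_m2=1.0)
print(f"plate-conduction heat flux : {q1 / 1e3:5.1f} kW/m^2")
print(f"energy-balance heat flux   : {q2 / 1e3:5.1f} kW/m^2")
print(f"HTC from energy balance    : {htc(q2, 460.0, 450.0):6.0f} W/(m^2*K)")
```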
Motorcycle waste heat energy harvesting
NASA Astrophysics Data System (ADS)
Schlichting, Alexander D.; Anton, Steven R.; Inman, Daniel J.
2008-03-01
Environmental concerns coupled with the depletion of fuel sources have led to research on ethanol, fuel cells, and even generating electricity from vibrations. Much of the research in these areas is stalling due to expensive or environmentally contaminating processes; however, recent breakthroughs in materials and production have created a surge in research on waste heat energy harvesting devices. The thermoelectric generators (TEGs) used in waste heat energy harvesting are governed by the thermoelectric, or Seebeck, effect, generating electricity from a temperature gradient. Some research to date has featured platforms such as heavy duty diesel trucks, model airplanes, and automobiles, attempting to eliminate either heavy batteries or the alternator. A motorcycle is another platform that possesses some very promising characteristics for waste heat energy harvesting, mainly because the exhaust pipes are exposed to significant amounts of air flow. A 1995 Kawasaki Ninja 250R was used for these trials. The module used in these experiments, the Melcor HT3-12-30, produced an average of 0.4694 W from an average temperature gradient of 48.73 °C. The mathematical model created from the thermoelectric effect equation and the mean Seebeck coefficient displayed by the module produced an average error from the experimental data of 1.75%. Although the module proved insufficient to practically eliminate the alternator on a standard motorcycle, the temperature data gathered as well as the examination of a simple, yet accurate, model represent significant steps in the process of creating a TEG capable of doing so.
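The reported operating point (0.4694 W at a 48.73 °C gradient) together with the quadratic dependence of matched-load thermoelectric power on temperature difference (V = S*dT, P = V^2/(4R)) allows a quick extrapolation of module output to other gradients. The sketch below backs the single quadratic coefficient out of the reported point rather than assuming values of the Seebeck coefficient or internal resistance for the Melcor HT3-12-30, which the abstract does not give; it illustrates the scaling, not the authors' model.

```python
# Back out the quadratic coefficient (effective S^2 / 4R) from the single
# reported operating point and extrapolate module output to other gradients.
# This is an illustration of the P ~ dT^2 scaling, not the authors' model.

P_REF = 0.4694    # W, reported average output of the Melcor HT3-12-30
DT_REF = 48.73    # K, reported average temperature gradient

coeff = P_REF / DT_REF**2               # W/K^2

def teg_power(dT):
    """Estimated matched-load module output (W) at temperature gradient dT (K)."""
    return coeff * dT**2

for dT in (25.0, 48.73, 75.0):
    print(f"dT = {dT:5.2f} K -> P ~ {teg_power(dT):.3f} W")
```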
VICTORIA: A mechanistic model for radionuclide behavior in the reactor coolant system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaperow, J.H.; Bixler, N.E.
1996-12-31
VICTORIA is the U.S. Nuclear Regulatory Commission's (NRC's) mechanistic, best-estimate code for analysis of fission product release from the core and subsequent transport in the reactor vessel and reactor coolant system. VICTORIA requires thermal-hydraulic data (i.e., temperatures, pressures, and velocities) as input. In the past, these data have been taken from the results of calculations from thermal-hydraulic codes such as SCDAP/RELAP5, MELCOR, and MAAP. Validation and assessment of VICTORIA 1.0 have been completed. An independent peer review of VICTORIA, directed by Brookhaven National Laboratory and supported by experts in the areas of fuel release, fission product chemistry, and aerosol physics, has been undertaken. This peer review, which will independently assess the code's capabilities, is nearing completion with the peer review committee's final report expected in Dec 1996. A limited amount of additional development is expected as a result of the peer review. Following this additional development, the NRC plans to release VICTORIA 1.1 and an updated and improved code manual. Future plans mainly involve use of the code for plant calculations to investigate specific safety issues as they arise. Also, the code will continue to be used in support of the Phebus experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew
'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Scott; Bixler, Nathan E.; McFadden, Katherine Letizia
In 1973 the U.S. Environmental Protection Agency (EPA) developed SecPop to calculate population estimates to support a study on air quality. The Nuclear Regulatory Commission (NRC) adopted this program to support siting reviews for nuclear power plant construction and license applications. Currently SecPop is used to prepare site data input files for offsite consequence calculations with the MELCOR Accident Consequence Code System (MACCS). SecPop enables the use of site-specific population, land use, and economic data for a polar grid defined by the user. Updated versions of SecPop have been released to use U.S. decennial census population data. SECPOP90 was released in 1997 to use 1990 population and economic data. SECPOP2000 was released in 2003 to use 2000 population data and 1997 economic data. This report describes the current code version, SecPop version 4.3, which uses 2010 population data and both 2007 and 2012 economic data. It is also compatible with 2000 census and 2002 economic data. At the time of this writing, the current version of SecPop is 4.3.0, and that version is described herein. This report contains guidance for the installation and use of the code as well as a description of the theory, models, and algorithms involved. This report contains appendices which describe the development of the 2010 census file, 2007 county file, and 2012 county file. Finally, an appendix is included that describes the validation assessments performed.
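For orientation, the sketch below builds the kind of polar (radius by compass-sector) grid on which SecPop assembles site data. The ring boundaries, the choice of 16 sectors, and the uniform population density are illustrative assumptions, not SecPop defaults or census data.

```python
# Illustrative polar site grid: radial rings crossed with compass sectors.
# Ring boundaries, the 16-sector choice, and the uniform population density
# are assumptions, not SecPop defaults or census data.
import math

RING_BOUNDARIES_MI = [0, 1, 2, 5, 10, 20, 50]   # ring radii in miles (assumed)
N_SECTORS = 16                                  # compass sectors (assumed)

def grid_cells():
    """Yield (ring_index, sector_index, area_mi2) for every grid element."""
    for i in range(len(RING_BOUNDARIES_MI) - 1):
        r_in, r_out = RING_BOUNDARIES_MI[i], RING_BOUNDARIES_MI[i + 1]
        ring_area = math.pi * (r_out**2 - r_in**2)
        for s in range(N_SECTORS):
            yield i, s, ring_area / N_SECTORS

DENSITY = 120.0   # people per square mile, assumed uniform for illustration
total = sum(round(area * DENSITY) for _, _, area in grid_cells())
n_cells = (len(RING_BOUNDARIES_MI) - 1) * N_SECTORS
print(f"grid elements: {n_cells}, total population: {total}")
```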
NASA Astrophysics Data System (ADS)
Craig, Norman C.
2015-06-01
The temperature dependence of self-assembled, cell-like dispersions of phospholipids is investigated with Raman spectroscopy in the biochemistry laboratory. Vibrational modes in the hydrocarbon interiors of phospholipid bilayers are strongly Raman active, whereas the vibrations of the polar head groups and the water matrix have little Raman activity. From the Raman spectra, increases in fluidity of the hydrocarbon chains can be monitored via intensity changes as a function of temperature in the CH-stretching region. The experiment uses detection of scattered 1064-nm laser light (Nicolet NXR module) by a Fourier transform infrared spectrometer (Nicolet 6700). A thermoelectric heater-cooler device (Melcor) gives convenient temperature control from 5 to 95°C for samples in melting point capillaries. Use of deuterium oxide instead of water as the matrix avoids some absorption of the exciting laser light and interference with intensity observations in the CH-stretching region. Phospholipids studied range from dimyristoylphosphatidyl choline (C14, transition T = 24°C) to dibehenoylphosphatidyl choline (C22, transition T = 74°C).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Ross, Kyle W.; Smith, James Dean
2010-04-01
The Oak Ridge National Laboratory computer code, ORIGEN2.2 (CCC-371, 2002), was used to obtain the elemental composition of irradiated low-enriched uranium (LEU)/mixed-oxide (MOX) pressurized-water reactor fuel assemblies. Described in this report are the input parameters for the ORIGEN2.2 calculations. The rationale for performing the ORIGEN2.2 calculation was to generate inventories to be used to populate MELCOR radionuclide classes. Therefore, the ORIGEN2.2 output was subsequently manipulated. The procedures performed in this data reduction process are also described herein. A listing of the ORIGEN2.2 input deck for two-cycle MOX is provided in the appendix. The final output from this data reduction process was three tables containing the radionuclide inventories for LEU/MOX in elemental form. Masses, thermal powers, and activities were reported for each category.
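A minimal sketch of the kind of data-reduction step described above, collapsing element-wise ORIGEN2.2 inventories into MELCOR radionuclide class totals. The class assignments and the inventory values shown are illustrative placeholders, not the report's actual data or class definitions.

# Illustrative grouping of element-wise inventories (kg) into MELCOR
# radionuclide classes. The class membership below is a common grouping,
# but the actual assignments used in the report may differ.
ELEMENT_TO_CLASS = {
    "Xe": "Noble gases", "Kr": "Noble gases",
    "Cs": "Alkali metals", "Rb": "Alkali metals",
    "Ba": "Alkaline earths", "Sr": "Alkaline earths",
    "I": "Halogens", "Br": "Halogens",
    "Te": "Chalcogens", "Ru": "Platinoids", "Mo": "Early transition metals",
    "Ce": "Tetravalents", "La": "Trivalents", "U": "Uranium",
}

def collapse_to_classes(elemental_masses):
    """Sum element masses into MELCOR class totals."""
    totals = {}
    for element, mass in elemental_masses.items():
        cls = ELEMENT_TO_CLASS.get(element, "Unassigned")
        totals[cls] = totals.get(cls, 0.0) + mass
    return totals

# Hypothetical elemental inventory from an ORIGEN2.2 run (kg per assembly).
example = {"Xe": 2.9, "Kr": 0.2, "Cs": 1.6, "I": 0.13, "Sr": 0.5}
print(collapse_to_classes(example))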
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobromir Panayotov; Andrew Grief; Brad J. Merrill
'Fusion for Energy' (F4E) designs, develops, and implements the European Test Blanket Systems (TBSs) in ITER: Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB). Safety demonstration is an essential element for the integration of the TBSs into ITER, and accident analysis is one of its critical segments. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with AMEC and INL efforts, resulted in the development of a comprehensive methodology for fusion breeding blanket accident analysis. It addresses the specificity of the breeding blanket designs, materials, and phenomena and at the same time is consistent with the approach already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, phenomena identification and ranking tables are used to identify the requirements to be met by the code(s) and TBS models. The limitations of the codes are thus identified and possible solutions to be built into the models are proposed; these include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. Code selection and the issue of the accident analysis specifications conclude this second step. Next, the breeding blanket and ancillary system models are built. In this work, challenges met and solutions used in the development of both MELCOR and RELAP5 models of the HCLL and HCPB TBSs are shared. The developed models are then qualified by comparison with finite element analyses, by code-to-code comparison, and by sensitivity studies. Finally, the qualified models are used for the execution of the accident analyses of specific scenarios. Where possible, the methodology phases are illustrated in the paper by a limited number of tables and figures. Detailed descriptions of each phase and its results, as well as the methodology applications to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to accident analyses of the other TBSs to be tested in ITER as well as to DEMO breeding blankets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manera, Annalisa; Corradini, Michael; Petrov, Victor
This project has focused on experimental and numerical investigations of the water-cooled and air-cooled Reactor Cavity Cooling System (RCCS) designs. To this end, we have leveraged an existing experimental facility at the University of Wisconsin-Madison (UW), and we have designed and built a separate-effect test facility at the University of Michigan. The experimental facility at UW has undergone several upgrades, including the installation of advanced instrumentation (i.e., wire-mesh sensors) built at the University of Michigan. These provide high-resolution, time-resolved measurements of the void-fraction distribution in the risers of the water-cooled RCCS facility. A phenomenological model has been developed to assess the water-cooled RCCS system stability and determine the root cause behind the oscillatory behavior that occurs under normal two-phase operation. Testing under various perturbations to the water-cooled RCCS facility has resulted in changes in the stability of the integral system. In particular, the effects of inlet orifices, water tank volume, and system pressure on stability have been investigated. MELCOR was used as a predictive tool when performing inlet orificing tests and was able to capture the density wave oscillations (DWOs) that occurred upon reaching saturation in the risers. The experimental and numerical results have then been used to provide RCCS design recommendations. The experimental facility built at the University of Michigan was aimed at the investigation of mixing in the upper plenum of the air-cooled RCCS design. The facility has been equipped with state-of-the-art, high-resolution instrumentation to achieve so-called CFD-grade experiments that can be used for the validation of Computational Fluid Dynamics (CFD) models, both RANS (Reynolds-Averaged) and LES (Large Eddy Simulation). The effect of riser penetrations in the upper plenum has been investigated as well.
Analysis of PANDA Passive Containment Cooling Steady-State Tests with the Spectra Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stempniewicz, Marek M
2000-07-15
Results of post test simulation of the PANDA passive containment cooling (PCC) steady-state tests (S-series tests), performed at the PANDA facility at the Paul Scherrer Institute, Switzerland, are presented. The simulation has been performed using the computer code SPECTRA, a thermal-hydraulic code, designed specifically for analyzing containment behavior of nuclear power plants. Results of the present calculations are compared to the measurement data as well as the results obtained earlier with the codes MELCOR, TRAC-BF1, and TRACG. The calculated PCC efficiencies are somewhat lower than the measured values. Similar underestimation of PCC efficiencies had been obtained in the past, with the other computer codes. To explain this difference, it is postulated that condensate coming into the tubes forms a stream of liquid in one or two tubes, leaving most of the tubes unaffected. The condensate entering the water box is assumed to fall down in the form of droplets. With these assumptions, the results calculated with SPECTRA are close to the experimental data. It is concluded that the SPECTRA code is a suitable tool for analyzing containments of advanced reactors, equipped with passive containment cooling systems.
Analysis of the FeCrAl Accident Tolerant Fuel Concept Benefits during BWR Station Blackout Accidents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R
2015-01-01
Iron-chromium-aluminum (FeCrAl) alloys are being considered for fuel concepts with enhanced accident tolerance. FeCrAl alloys have very slow oxidation kinetics and good strength at high temperatures. FeCrAl could be used for fuel cladding in light water reactors and/or as channel box material in boiling water reactors (BWRs). To estimate the potential safety gains afforded by the FeCrAl concept, the MELCOR code was used to analyze a range of postulated station blackout severe accident scenarios in a BWR/4 reactor employing FeCrAl. The simulations utilize the most recently known thermophysical properties and oxidation kinetics for FeCrAl. Overall, when compared to the traditional Zircaloy-based cladding and channel box, the FeCrAl concept provides a few extra hours of time for operators to take mitigating actions and/or for evacuations to take place. A coolable core geometry is retained longer, enhancing the ability to stabilize an accident. Finally, due to the slower oxidation kinetics, substantially less hydrogen is generated, and the generation is delayed in time. This decreases the amount of non-condensable gases in containment and the potential for deflagrations to inhibit the accident response.
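A minimal sketch of the kind of comparison described above, assuming parabolic oxidation kinetics with Arrhenius rate constants. The numerical constants below are illustrative placeholders chosen only to show the qualitative effect, not the FeCrAl or Zircaloy correlations actually used in the MELCOR analyses.

import math

# Parabolic oxide growth: w^2 = K(T) * t, with K(T) = A * exp(-Q / (R*T)).
# Placeholder constants: a larger activation energy yields far less oxidation
# (and hence less hydrogen) for the FeCrAl-like alloy at the same temperature.
R = 8.314  # J/mol-K

def mass_gain(A, Q, T, t):
    """Oxygen mass gain (kg/m^2) after time t (s) at temperature T (K)."""
    return math.sqrt(A * math.exp(-Q / (R * T)) * t)

zirc = dict(A=36.0, Q=1.67e5)    # placeholder Zircaloy-like constants
fecral = dict(A=36.0, Q=3.0e5)   # placeholder FeCrAl-like constants

for T in (1273.0, 1473.0):
    wz = mass_gain(zirc["A"], zirc["Q"], T, t=3600.0)
    wf = mass_gain(fecral["A"], fecral["Q"], T, t=3600.0)
    print(f"T={T:.0f} K: Zircaloy-like {wz:.3e} kg/m2, FeCrAl-like {wf:.3e} kg/m2")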
Data Analysis Approaches for the Risk-Informed Safety Margins Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Alfonsi, Andrea; Maljovec, Daniel P.
2016-09-01
In the past decades, several numerical simulation codes have been employed to simulate accident dynamics (e.g., RELAP5-3D, RELAP-7, MELCOR, MAAP). In order to evaluate the impact of uncertainties on accident dynamics, several stochastic methodologies have been coupled with these codes. These stochastic methods range from classical Monte-Carlo and Latin Hypercube sampling to stochastic polynomial methods. Similar approaches have been introduced into the risk and safety community, where stochastic methods (such as RAVEN, ADAPT, MCDET, ADS) have been coupled with safety analysis codes in order to evaluate the safety impact of timing and sequencing of events. These approaches are usually called Dynamic PRA or simulation-based PRA methods. These uncertainty and safety methods usually generate a large number of simulation runs (database storage may be on the order of gigabytes or higher). The scope of this paper is to present a broad overview of methods and algorithms that can be used to analyze and extract information from large data sets containing time-dependent data. In this context, “extracting information” means constructing input-output correlations, finding commonalities, and identifying outliers. Some of the algorithms presented here have been developed or are under development within the RAVEN statistical framework.
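A minimal sketch of the kind of analysis described above: constructing input-output correlations and flagging outlier runs in a batch of simulated transients. The sampled inputs, the response function, and the outlier criterion are all illustrative assumptions, not part of the RAVEN framework.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a batch of code runs: each run has a sampled
# input vector and a scalar figure of merit extracted from its time history
# (e.g., peak cladding temperature).
n_runs, n_inputs = 200, 3
inputs = rng.uniform(0.0, 1.0, size=(n_runs, n_inputs))
peak_temp = (1200.0 + 400.0 * inputs[:, 0] - 150.0 * inputs[:, 2]
             + rng.normal(0.0, 20.0, n_runs))

# Input-output correlations (one Pearson coefficient per input).
for j in range(n_inputs):
    r = np.corrcoef(inputs[:, j], peak_temp)[0, 1]
    print(f"input {j}: correlation with peak temperature = {r:+.2f}")

# Simple outlier identification: runs more than 3 sigma from the mean.
z = (peak_temp - peak_temp.mean()) / peak_temp.std()
print("outlier runs:", np.where(np.abs(z) > 3.0)[0].tolist())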
Investigation on the Core Bypass Flow in a Very High Temperature Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, Yassin
2013-10-22
Uncertainties associated with the core bypass flow are some of the key issues that directly influence the coolant mass flow distribution and magnitude, and thus the operational core temperature profiles, in the very high-temperature reactor (VHTR). Designers will attempt to configure the core geometry so the core cooling flow rate magnitude and distribution conform to the design values. The objective of this project is to study the bypass flow both experimentally and computationally. Researchers will develop experimental data using state-of-the-art particle image velocimetry in a small test facility. The team will attempt to obtain full field temperature distribution using racks of thermocouples. The experimental data are intended to benchmark computational fluid dynamics (CFD) codes by providing detailed information. These experimental data are urgently needed for validation of the CFD codes. The following are the project tasks: • Construct a small-scale bench-top experiment to resemble the bypass flow between the graphite blocks, varying parameters to address their impact on bypass flow. Wall roughness of the graphite block walls, spacing between the blocks, and temperature of the blocks are some of the parameters to be tested. • Perform CFD to evaluate pre- and post-test calculations and turbulence models, including sensitivity studies to achieve high accuracy. • Develop the state-of-the-art large eddy simulation (LES) using appropriate subgrid modeling. • Develop models to be used in systems thermal hydraulics codes to account for and estimate the bypass flows. These computer programs include, among others, RELAP3D, MELCOR, GAMMA, and GAS-NET. Actual core bypass flow rate may vary considerably from the design value. Although the uncertainty of the bypass flow rate is not known, some sources have stated that the bypass flow rates in the Fort St. Vrain reactor were between 8 and 25 percent of the total reactor mass flow rate. If bypass flow rates are on the high side, the quantity of cooling flow through the core may be considerably less than the nominal design value, causing some regions of the core to operate at temperatures in excess of the design values. These effects are postulated to lead to localized hot regions in the core that must be considered when evaluating the VHTR operational and accident scenarios.
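A minimal sketch of the sensitivity noted above, assuming a fixed total loop flow and a simple steady-state energy balance on the active core. The power, flow, and heat-capacity values are illustrative placeholders, not VHTR design data; only the 8-25 percent bypass range is taken from the abstract.

# Effect of core bypass fraction on active-core flow and coolant heat-up,
# using a simple energy balance Q = m_dot * cp * dT.
Q_core = 600.0e6        # core power, W (placeholder)
m_dot_total = 320.0     # total primary flow, kg/s (placeholder)
cp_helium = 5193.0      # J/kg-K

for bypass_fraction in (0.08, 0.15, 0.25):   # range quoted for Fort St. Vrain
    m_dot_core = m_dot_total * (1.0 - bypass_fraction)
    dT = Q_core / (m_dot_core * cp_helium)
    print(f"bypass {bypass_fraction:.0%}: core flow {m_dot_core:.0f} kg/s, "
          f"coolant temperature rise {dT:.0f} K")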
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunsman, David Marvin; Aldemir, Tunc; Rutt, Benjamin
2008-05-01
This LDRD project has produced a tool that makes probabilistic risk assessments (PRAs) of nuclear reactors - analyses which are very resource intensive - more efficient. PRAs of nuclear reactors are being increasingly relied on by the United States Nuclear Regulatory Commission (U.S.N.R.C.) for licensing decisions for current and advanced reactors. Yet, PRAs are produced much as they were 20 years ago. The work here applied a modern systems analysis technique to the accident progression analysis portion of the PRA; the technique was a system-independent multi-task computer driver routine. Initially, the objective of the work was to fuse the accident progression event tree (APET) portion of a PRA to the dynamic system doctor (DSD) created by Ohio State University. Instead, during the initial efforts, it was found that the DSD could be linked directly to a detailed accident progression phenomenological simulation code - the type on which APET construction and analysis relies, albeit indirectly - and thereby directly create and analyze the APET. The expanded DSD computational architecture and infrastructure that was created during this effort is called ADAPT (Analysis of Dynamic Accident Progression Trees). ADAPT is a system software infrastructure that supports execution and analysis of multiple dynamic event-tree simulations on distributed environments. A simulator abstraction layer was developed, and a generic driver was implemented for executing simulators on a distributed environment. As a demonstration of the use of the methodological tool, ADAPT was applied to quantify the likelihood of competing accident progression pathways occurring for a particular accident scenario in a particular reactor type using MELCOR, an integrated severe accident analysis code developed at Sandia. (ADAPT was intentionally created with flexibility, however, and is not limited to interacting with only one code. With minor coding changes to input files, ADAPT can be linked to other such codes.) The results of this demonstration indicate that the approach can significantly reduce the resources required for Level 2 PRAs. From the phenomenological viewpoint, ADAPT can also treat the associated epistemic and aleatory uncertainties. This methodology can also be used for analyses of other complex systems. Any complex system can be analyzed using ADAPT if the workings of that system can be displayed as an event tree, there is a computer code that simulates how those events could progress, and that simulator code has switches to turn on and off system events, phenomena, etc. Using and applying ADAPT to particular problems is not human independent. While the human resources for the creation and analysis of the accident progression are significantly decreased, knowledgeable analysts are still necessary for a given project to apply ADAPT successfully. This research and development effort has met its original goals and then exceeded them.
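A minimal sketch of the dynamic-event-tree idea underlying a driver like ADAPT: the driver advances a simulator, and whenever a branching condition is reached it spawns child branches with associated probabilities. The simulator stub, the branching rule, and the probabilities below are illustrative assumptions, not the ADAPT implementation.

# Toy dynamic event tree: branch the "simulation" whenever a branching
# event fires, carrying the branch probability down the tree.
def advance(state, dt=1.0):
    """Stand-in for one simulator step (e.g., a MELCOR advance)."""
    state = dict(state)
    state["time"] += dt
    return state

def branch_points(state):
    """Stand-in branching rule: at t=2 a safety valve may open or stick."""
    if state["time"] == 2.0 and "valve" not in state:
        return [("valve_open", 0.9), ("valve_stuck", 0.1)]
    return None

def build_tree(state, prob=1.0, t_end=4.0, results=None):
    if results is None:
        results = []
    while state["time"] < t_end:
        state = advance(state)
        branches = branch_points(state)
        if branches:
            for outcome, p in branches:
                child = dict(state)
                child["valve"] = outcome
                build_tree(child, prob * p, t_end, results)
            return results
    results.append((prob, state))
    return results

for p, end_state in build_tree({"time": 0.0}):
    print(f"branch probability {p:.2f}, end state {end_state}")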
Recent plant studies using Victoria 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
BIXLER, NATHAN E.; GASSER, RONALD D.
2000-03-08
VICTORIA 2.0 is a mechanistic computer code designed to analyze fission product behavior within the reactor coolant system (RCS) during a severe nuclear reactor accident. It provides detailed predictions of the release of radioactive and nonradioactive materials from the reactor core and transport and deposition of these materials within the RCS and secondary circuits. These predictions account for the chemical and aerosol processes that affect radionuclide behavior. VICTORIA 2.0 was released in early 1999; a new version, VICTORIA 2.1, is now under development. The largest improvements in VICTORIA 2.1 are connected with the thermochemical database, which is being revised and expanded following the recommendations of a peer review. Three risk-significant severe accident sequences have recently been investigated using the VICTORIA 2.0 code. The focus here is on how various chemistry options affect the predictions. Additionally, the VICTORIA predictions are compared with ones made using the MELCOR code. The three sequences are a station blackout in a GE BWR and steam generator tube rupture (SGTR) and pump-seal LOCA sequences in a 3-loop Westinghouse PWR. These sequences cover a range of system pressures, from fully depressurized to full system pressure. The chief results of this study are the fission product fractions that are retained in the core, RCS, secondary, and containment and the fractions that are released into the environment.
ITER Port Interspace Pressure Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbajo, Juan J; Van Hove, Walter A
The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug separates the port cell into a Port Interspace (between VV closure lid and Port Plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a pipe of the Tokamak Cooling Water System (TCWS) with high temperature water. It is assumed that this DEGB occurs during the worst possible conditions, which are during water baking operation, with water at a temperature of 523 K (250°C) and at a pressure of 4.4 MPa. These conditions are more severe than during normal Tokamak operation, with the water at 398 K (125°C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR, Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.
Analysis of the influence of the heat transfer phenomena on the late phase of the ThAI Iod-12 test
NASA Astrophysics Data System (ADS)
Gonfiotti, B.; Paci, S.
2014-11-01
Iodine is one of the major contributors to the source term during a severe accident in a nuclear power plant because of its volatility and high radiological consequences. Therefore, large efforts have been made to describe iodine behaviour during an accident, especially in the containment system. Because of the lack of experimental data, in recent years many attempts have been made to fill the gaps in the knowledge of iodine behaviour. In this framework, two tests (ThAI Iod-11 and Iod-12) were carried out inside a multi-compartment steel vessel. A quite complex transient characterizes these two tests; therefore, they are also suitable for thermal-hydraulic benchmarks. The two tests were originally released for a benchmark exercise during the SARNET2 EU Project. At the end of this benchmark a report covering the main findings was issued, stating that the common codes employed in severe accident studies were able to simulate the tests, but with large discrepancies. The present work is therefore related to the application of the new versions of the ASTEC and MELCOR codes with the aim of carrying out a new code-to-code comparison against the ThAI Iod-12 experimental data, focusing on the influence of the heat exchanges with the outer environment, which seems to be one of the most challenging issues to cope with.
A Comparative of business process modelling techniques
NASA Astrophysics Data System (ADS)
Tangkawarow, I. R. H. T.; Waworuntu, J.
2016-04-01
In this era, there are many business process modelling techniques. This article presents research on the differences among business process modelling techniques; for each technique, the definition and structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how each technique works when implemented for Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and that serve as the basis for evaluating further modelling techniques.
An Introduction to Markov Modeling: Concepts and Uses
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Lau, Sonie (Technical Monitor)
1998-01-01
Markov modeling is a modeling technique that is widely useful for dependability analysis of complex fault tolerant systems. It is very flexible in the type of systems and system behavior it can model. It is not, however, the most appropriate modeling technique for every modeling situation. The first task in obtaining a reliability or availability estimate for a system is selecting which modeling technique is most appropriate to the situation at hand. A person performing a dependability analysis must confront the question: is Markov modeling most appropriate to the system under consideration, or should another technique be used instead? The need to answer this gives rise to other more basic questions regarding Markov modeling: what are the capabilities and limitations of Markov modeling as a modeling technique? How does it relate to other modeling techniques? What kind of system behavior can it model? What kinds of software tools are available for performing dependability analyses with Markov modeling techniques? These questions and others will be addressed in this tutorial.
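As a minimal illustration of the technique introduced above, the sketch below evaluates a two-state Markov availability model (working/failed) with an assumed failure rate and repair rate; the rates are arbitrary example values, not taken from the tutorial.

import numpy as np
from scipy.linalg import expm

# Two-state Markov model: state 0 = working, state 1 = failed.
lam = 1.0e-3   # failure rate, 1/h (example value)
mu = 1.0e-1    # repair rate, 1/h (example value)

# Generator matrix Q (rows sum to zero); p(t) = p(0) @ expm(Q t).
Q = np.array([[-lam, lam],
              [mu, -mu]])

p0 = np.array([1.0, 0.0])            # start in the working state
for t in (10.0, 100.0, 1000.0):
    p = p0 @ expm(Q * t)
    print(f"t={t:6.0f} h: availability = {p[0]:.4f}")

# Closed-form steady-state availability for comparison.
print("steady-state availability =", mu / (lam + mu))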
Time series forecasting using ERNN and QR based on Bayesian model averaging
NASA Astrophysics Data System (ADS)
Pwasong, Augustine; Sathasivam, Saratha
2017-08-01
The Bayesian model averaging technique is a multi-model combination technique. The technique was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique. The amalgamation produced a hybrid technique known as the hybrid ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
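A brief statement of the model-averaging step implied above, written in the usual Bayesian model averaging form (the notation is generic and not taken from the paper): the hybrid forecast is a posterior-probability-weighted combination of the ERNN and QR forecasts.

\[
\hat{y}_{\mathrm{BMA}} \;=\; \sum_{k \in \{\mathrm{ERNN},\,\mathrm{QR}\}} p(M_k \mid D)\,\hat{y}_k,
\qquad
p(M_k \mid D) \;=\; \frac{p(D \mid M_k)\,p(M_k)}{\sum_{j} p(D \mid M_j)\,p(M_j)}
\]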
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
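A minimal sketch of criterion-based model averaging as described above, converting information-criterion values into model weights. The criterion values are made-up examples; the exp(-Δ/2) weighting is the standard form used for AIC/BIC/KIC-style averaging, shown here as an illustration rather than the exact procedure of any cited method.

import math

def ic_weights(ic_values):
    """Convert information-criterion values (AIC, BIC, or KIC) to model weights."""
    ic_min = min(ic_values.values())
    raw = {m: math.exp(-0.5 * (ic - ic_min)) for m, ic in ic_values.items()}
    total = sum(raw.values())
    return {m: w / total for m, w in raw.items()}

# Hypothetical criterion values for three alternative conceptual models.
example = {"model_A": 212.4, "model_B": 215.1, "model_C": 220.8}
for model, w in ic_weights(example).items():
    print(f"{model}: weight = {w:.3f}")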
Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems
NASA Astrophysics Data System (ADS)
Yang, Le; Wang, Shuo; Feng, Jianghua
2017-11-01
Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the whole system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different function units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques that are used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filter, Wheatstone bridge balance, active filter, and optimized modulation, are reviewed and compared based on the VFD system models.
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
Selected Logistics Models and Techniques.
1984-09-01
[Extraction fragment from the report's table of contents and subject index; recoverable entries include the TI-59 Programmable Calculator LCC Model (model type: cost estimating) and the Unmanned Spacecraft Cost Model.]
Load Modeling and Calibration Techniques for Power System Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, Forrest S.; Mayhorn, Ebony T.; Elizondo, Marcelo A.
2011-09-23
Load modeling is the most uncertain area in power system simulations. Having an accurate load model is important for power system planning and operation. Here, a review of load modeling and calibration techniques is given. This paper is not comprehensive, but covers some of the techniques most commonly found in the literature. The advantages and disadvantages of each technique are outlined.
An improved switching converter model using discrete and average techniques
NASA Technical Reports Server (NTRS)
Shortt, D. J.; Lee, F. C.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
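For reference, the averaging step mentioned above can be written compactly. For a converter that switches between topology 1 (switch on) and topology 2 (switch off) with duty ratio d, the standard state-space averaged model is (generic notation, not taken from the paper):

\[
\dot{x} \;=\; \left[\, d\,A_1 + (1-d)\,A_2 \,\right] x \;+\; \left[\, d\,B_1 + (1-d)\,B_2 \,\right] u
\]

As the abstract notes, this averaged form loses accuracy as modulation frequencies approach half the switching frequency, which is what the combined discrete/average model is intended to remedy.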
Exploring the Components of Dynamic Modeling Techniques
ERIC Educational Resources Information Center
Turnitsa, Charles Daniel
2012-01-01
Upon defining the terms modeling and simulation, it becomes apparent that there is a wide variety of different models, using different techniques, appropriate for different levels of representation for any one system to be modeled. Selecting an appropriate conceptual modeling technique from those available is an open question for the practitioner.…
van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W
2014-12-22
Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5 year survival), 1731 patients with traumatic brain injury (22.3% 6 month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN), and random forests (RF) and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20 fold, 10 fold and 6 fold replication of subjects, where we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC-curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of AUC and small optimism (difference between the mean apparent AUC and the mean validated AUC <0.01). We found that a stable AUC was reached by LR at approximately 20 to 50 events per variable, followed by CART, SVM, NN and RF models. Optimism decreased with increasing sample sizes and the same ranking of techniques. The RF, SVM and NN models showed instability and a high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable to achieve a stable AUC and a small optimism than classical modelling techniques such as LR. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
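A minimal sketch of the kind of simulation described above, comparing how the validated AUC and optimism of a classical and a modern technique change with training-set size. The data generator and sample sizes are illustrative stand-ins for the clinical cohorts used in the study.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_val, n_vars = 5000, 10

def make_data(n):
    """Synthetic binary-outcome data from an assumed logistic model."""
    X = rng.normal(size=(n, n_vars))
    logit = X @ np.linspace(1.0, 0.1, n_vars)
    y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))
    return X, y.astype(int)

X_val, y_val = make_data(n_val)
for n_train in (200, 1000, 5000):
    X_tr, y_tr = make_data(n_train)
    for name, model in [("LR", LogisticRegression(max_iter=1000)),
                        ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
        model.fit(X_tr, y_tr)
        apparent = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
        validated = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"n={n_train:5d} {name}: validated AUC={validated:.3f}, "
              f"optimism={apparent - validated:.3f}")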
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie
2014-02-01
This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards, a ‘high’ source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of “zero” results).
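A minimal sketch of the LHS-versus-SRS replication check described above, using scipy's Latin hypercube sampler and simple random sampling to estimate the mean of an assumed response function. The response function, dimensionality, and sample sizes are illustrative only and are not the MACCS2 parameters or consequence metrics.

import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(42)

def consequence(u):
    """Stand-in response: maps uniform(0,1) inputs to a scalar 'risk' metric."""
    return np.exp(2.0 * u[:, 0]) * (0.5 + u[:, 1]) * (1.0 + 0.1 * u[:, 2])

n, d, replicates = 1000, 3, 3
for label in ("LHS", "SRS"):
    means = []
    for rep in range(replicates):
        if label == "LHS":
            u = qmc.LatinHypercube(d=d, seed=rep).random(n)
        else:
            u = rng.uniform(size=(n, d))
        means.append(consequence(u).mean())
    print(f"{label}: replicate means = {[f'{m:.3f}' for m in means]}")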
Automated Predictive Big Data Analytics Using Ontology Based Semantics.
Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A
2015-10-01
Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to choice on modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics is used as a testbed for evaluating the use of semantic technology.
Techniques for forced response involving discrete nonlinearities. I - Theory. II - Applications
NASA Astrophysics Data System (ADS)
Avitabile, Peter; Callahan, John O.
Several new techniques developed for the forced response analysis of systems containing discrete nonlinear connection elements are presented and compared to the traditional methods. In particular, the techniques examined are the Equivalent Reduced Model Technique (ERMT), Modal Modification Response Technique (MMRT), and Component Element Method (CEM). The general theory of the techniques is presented, and applications are discussed with particular reference to the beam nonlinear system model using ERMT, MMRT, and CEM; frame nonlinear response using the three techniques; and comparison of the results obtained by using the ERMT, MMRT, and CEM models.
Evaluation of Mesoscale Model Phenomenological Verification Techniques
NASA Technical Reports Server (NTRS)
Lambert, Winifred
2006-01-01
Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations and must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently-used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of certain phenomena, but are offset from the observations in small time and/or space increments. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready to use operationally, and outline the steps needed to implement any operationally-ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenologically based mesoscale model verification techniques and found 10 different techniques in various stages of development. Six of the techniques were developed to verify precipitation forecasts, one to verify sea breeze forecasts, and three were capable of verifying several phenomena. The AMU also determined the feasibility of transitioning each technique into operations and rated the operational capability of each technique on a subjective 1-10 scale: (1) 1 indicates that the technique is only in the initial stages of development, (2) 2-5 indicates that the technique is still undergoing modifications and is not ready for operations, (3) 6-8 indicates a higher probability of integrating the technique into AWIPS with code modifications, and (4) 9-10 indicates that the technique was created for AWIPS and is ready for implementation. Eight of the techniques were assigned a rating of 5 or below. The other two received ratings of 6 and 7, and none of the techniques received a rating of 9-10. At the current time, there are no phenomenological model verification techniques ready for operational use. However, several of the techniques described in this report may become viable techniques in the future and should be monitored for updates in the literature. The desire to use a phenomenological verification technique is widespread in the modeling community, and it is likely that other techniques besides those described herein are being developed, but the work has not yet been published. Therefore, the AMU recommends that the literature continue to be monitored for updates to the techniques described in this report and for new techniques being developed whose results have not yet been published.
Musculoskeletal modelling in dogs: challenges and future perspectives.
Dries, Billy; Jonkers, Ilse; Dingemanse, Walter; Vanwanseele, Benedicte; Vander Sloten, Jos; van Bree, Henri; Gielen, Ingrid
2016-05-18
Musculoskeletal models have proven to be a valuable tool in human orthopaedics research. Recently, veterinary research started taking an interest in the computer modelling approach to understand the forces acting upon the canine musculoskeletal system. While many of the methods employed in human musculoskeletal models can be applied to canine musculoskeletal models, not all techniques are applicable. This review summarizes the important parameters necessary for modelling, as well as the techniques employed in human musculoskeletal models and the limitations in transferring techniques to canine modelling research. The major challenges in future canine modelling research are likely to centre around devising alternative techniques for obtaining maximal voluntary contractions, as well as finding scaling factors to adapt a generalized canine musculoskeletal model to represent specific breeds and subjects.
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be analytically determined through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
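A minimal sketch of the fitting approach described above, determining all Prony-series constants (including the exponential time constants) in a single nonlinear least-squares optimization. The synthetic data and the three-term series are illustrative assumptions, not the PRONY program or its test cases.

import numpy as np
from scipy.optimize import least_squares

def prony(t, params):
    """Three-term Prony series: g_inf + sum_i g_i * exp(-t / tau_i)."""
    g_inf = params[0]
    g = params[1:4]
    tau = params[4:7]
    return g_inf + sum(gi * np.exp(-t / ti) for gi, ti in zip(g, tau))

# Synthetic "relaxation modulus" data standing in for test measurements.
t = np.logspace(-2, 3, 60)
true = np.array([1.0, 4.0, 2.0, 1.0, 0.05, 1.0, 50.0])
data = prony(t, true) * (1.0 + 0.01 * np.random.default_rng(0).normal(size=t.size))

# Fit every constant at once (positive bounds, rough initial guesses).
x0 = np.array([0.5, 1.0, 1.0, 1.0, 0.01, 1.0, 100.0])
fit = least_squares(lambda p: prony(t, p) - data, x0, bounds=(1e-6, np.inf))
print("fitted constants:", np.round(fit.x, 3))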
Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques
ERIC Educational Resources Information Center
Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.
2010-01-01
State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…
I-NERI Quarterly Technical Report (April 1 to June 30, 2005)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Oh; Prof. Hee Cheon NO; Prof. John Lee
2005-06-01
The objective of this Korean/United States/laboratory/university collaboration is to develop new advanced computational methods for safety analysis codes for very-high-temperature gas-cooled reactors (VHTGRs) and numerical and experimental validation of these computer codes. This study consists of five tasks for FY-03: (1) development of computational methods for the VHTGR, (2) theoretical modification of the aforementioned computer codes for molecular diffusion (RELAP5/ATHENA) and modeling CO and CO2 equilibrium (MELCOR), (3) development of a state-of-the-art methodology for VHTGR neutronic analysis and calculation of accurate power distributions and decay heat deposition rates, (4) reactor cavity cooling system experiment, and (5) graphite oxidation experiment. Second quarter of Year 3: (A) Prof. NO and Kim continued Task 1. As a further plant application of the GAMMA code, we conducted two analyses: an IAEA GT-MHR benchmark calculation for LPCC and an air ingress analysis for a 600 MWt PMR. The GAMMA code shows a peak fuel temperature trend comparable to those of the other countries' codes. The air ingress analysis results show a much different trend from that of the previous PBR analysis: later onset of natural circulation and a less significant rise in graphite temperature. (B) Prof. Park continued Task 2. We have designed a new separate-effect test device having the same heat transfer area but a different diameter and total number of U-bends in the air cooling pipe. The new design has a smaller pressure drop in the air cooling pipe than the previous one, as it has a larger diameter and fewer U-bends. With the device, additional experiments have been performed to obtain temperature distributions of the water tank and of the surface and center of the cooling pipe along the axis. The results will be used to optimize the design of SNU-RCCS. (C) Prof. NO continued Task 3. The experimental work on air ingress is proceeding without any problems: with nuclear graphite IG-110, various kinetic parameters and reaction rates for the C/CO2 reaction were measured. The rates of the C/CO2 reaction were then compared to those of the C/O2 reaction. The rate equation for C/CO2 has been developed. (D) INL added models to RELAP5/ATHENA to calculate the chemical reactions in a VHTR during an air ingress accident. Limited testing of the models indicates that they are calculating a correct spatial distribution of gas compositions. (E) INL benchmarked NACOK natural circulation data. (F) Professor Lee et al. at the University of Michigan (UM) worked on Task 5. The funding was received from the DOE Richland Office at the end of May and the subcontract paperwork was delivered to the UM on the sixth of June. The objective of this task is to develop a state-of-the-art neutronics model for determining power distributions and decay heat deposition rates in a VHTGR core. Our effort during the reporting period covered reactor physics analysis of coated particles and coupled nuclear-thermal-hydraulic (TH) calculations, together with initial calculations for decay heat deposition rates in the core.
Modeling of ETL-Processes and Processed Information in Clinical Data Warehousing.
Tute, Erik; Steiner, Jochen
2018-01-01
Literature describes a big potential for reuse of clinical patient data, and a clinical data warehouse (CDWH) is a means for that. The objective was to support management and maintenance of the processes extracting, transforming and loading (ETL) data into CDWHs, and to ease reuse of metadata between regular IT management, the CDWH, and secondary data users, by providing a modeling approach. The methods were an expert survey and a literature review to find requirements and existing modeling techniques; an ETL modeling technique was then developed by extending existing modeling techniques, and it was evaluated by exemplarily modeling an existing ETL process and by a second expert survey. Nine experts participated in the first survey. The literature review yielded 15 included publications, and six existing modeling techniques were identified. A modeling technique extending 3LGM2 and combining it with openEHR information models was developed and evaluated; seven experts participated in the evaluation. The developed approach can help in the management and maintenance of ETL processes and could serve as an interface between regular IT management, the CDWH, and secondary data users.
Htun, Tha Pyai; Lim, Peng Im; Ho-Lim, Sarah
2015-01-01
Objectives The aim of this study was to examine the relationships among maternal and infant characteristics, breastfeeding techniques, and exclusive breastfeeding initiation in different modes of birth using structural equation modeling approaches. Methods We examined a hypothetical model based on integrating concepts of a breastfeeding decision-making model, a breastfeeding initiation model, and a social cognitive theory among 952 mother-infant dyads. The LATCH breastfeeding assessment tool was used to evaluate breastfeeding techniques and two infant feeding categories were used (exclusive and non-exclusive breastfeeding). Results Structural equation models (SEM) showed that multiparity was significantly positively associated with breastfeeding techniques and the jaundice of an infant was significantly negatively related to exclusive breastfeeding initiation. A multigroup analysis in the SEM showed no difference between the caesarean section and vaginal delivery groups estimates of breastfeeding techniques on exclusive breastfeeding initiation. Breastfeeding techniques were significantly positively associated with exclusive breastfeeding initiation in the entire sample and in the vaginal deliveries group. However, breastfeeding techniques were not significantly associated with exclusive breastfeeding initiation in the cesarean section group. Maternal age, maternal race, gestations, birth weight of infant, and postnatal complications had no significant impacts on breastfeeding techniques or exclusive breastfeeding initiation in our study. Overall, the models fitted the data satisfactorily (GFI = 0.979–0.987; AGFI = 0.951–0.962; IFI = 0.958–0.962; CFI = 0.955–0.960, and RMSEA = 0.029–0.034). Conclusions Multiparity and jaundice of an infant were found to affect breastfeeding technique and exclusive breastfeeding initiation respectively. Breastfeeding technique was related to exclusive breastfeeding initiation according to the mode of birth. This relationship implies the importance of early effective interventions among first-time mothers with jaundice infants in improving breastfeeding techniques and promoting exclusive breastfeeding initiation. PMID:26566028
Virtual 3d City Modeling: Techniques and Applications
NASA Astrophysics Data System (ADS)
Singh, S. P.; Jain, K.; Mandla, V. R.
2013-08-01
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and manmade features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city containing the graphic representation of buildings and other objects in 2.5 or 3D. Generally, three main geomatics approaches are used for virtual 3-D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images through close-range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various geomatics techniques for 3D city modelling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). Finally, the conclusions of this study are presented, together with a short justification and analysis and the present trend in 3D city modelling. The paper gives an overview of the techniques related to the generation of virtual 3-D city models using geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3-D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3-D city modelling. Photo-realistic, scalable, geo-referenced virtual 3-D city models are very useful for various kinds of applications, such as planning in navigation, tourism, disaster management, transportation, municipal and urban environmental management, and the real-estate industry. The construction of virtual 3-D city models has therefore become a most interesting research topic in recent years.
ERIC Educational Resources Information Center
Storer, I. J.; Campbell, R. I.
2012-01-01
Industrial Designers need to understand and command a number of modelling techniques to communicate their ideas to themselves and others. Verbal explanations, sketches, engineering drawings, computer aided design (CAD) models and physical prototypes are the most commonly used communication techniques. Within design, unlike some disciplines,…
NASA Technical Reports Server (NTRS)
Dieudonne, J. E.
1978-01-01
A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
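A minimal sketch of the general idea described above, extracting a linear perturbation model from a nonlinear simulation by finite-difference evaluation of the Jacobian matrices about an operating point. The example dynamics and step sizes are illustrative assumptions, not the NASA program or the terminal configured vehicle model.

import numpy as np

def f(x, u):
    """Stand-in nonlinear state derivative, x_dot = f(x, u)."""
    return np.array([x[1],
                     -0.5 * x[1] * abs(x[1]) - 2.0 * np.sin(x[0]) + u[0]])

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference Jacobians A = df/dx, B = df/du at (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    f0 = f(x0, u0)
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B

# Linear perturbation model x_dot = A dx + B du about the origin.
A, B = linearize(f, x0=np.zeros(2), u0=np.zeros(1))
print("A =\n", A)
print("B =\n", B)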
NASA Astrophysics Data System (ADS)
Lei, Li
1999-07-01
In this study the researcher develops and presents a new model, founded on the laws of physics, for analyzing dance technique. Based on a pilot study of four advanced dance techniques, she creates a new model for diagnosing, analyzing and describing basic, intermediate and advanced dance techniques. The name for this model is "PED," which stands for Physics of Expressive Dance. The research design consists of five phases: (1) Conduct a pilot study to analyze several advanced dance techniques chosen from Chinese dance, modern dance, and ballet; (2) Based on learning obtained from the pilot study, create the PED Model for analyzing dance technique; (3) Apply this model to eight categories of dance technique; (4) Select two advanced dance techniques from each category and analyze these sample techniques to demonstrate how the model works; (5) Develop an evaluation framework and use it to evaluate the effectiveness of the model, taking into account both scientific and artistic aspects of dance training. In this study the researcher presents new solutions to three problems highly relevant to dance education: (1) Dancers attempting to learn difficult movements often fail because they are unaware of physics laws; (2) Even those who do master difficult movements can suffer injury due to incorrect training methods; (3) Even the best dancers can waste time learning by trial and error, without scientific instruction. In addition, the researcher discusses how the application of the PED model can benefit dancers, allowing them to avoid inefficient and ineffective movements and freeing them to focus on the artistic expression of dance performance. This study is unique, presenting the first comprehensive system for analyzing dance techniques in terms of physics laws. The results of this study are useful, allowing a new level of awareness about dance techniques that dance professionals can utilize for more effective and efficient teaching and learning. The approach utilized in this study is universal, and can be applied to any dance movement and to any dance style.
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and on three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
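To make the linear-versus-nonlinear comparison concrete, here is a minimal, self-contained Python sketch on synthetic data: an ordinary least-squares multifactor fit is compared with a crude binned additive fit. The data, factor structure, and the one-pass backfitting are invented for illustration and are not the procedures or stock universe used in the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
factors = rng.normal(size=(n, 3))                      # hypothetical factor exposures
returns = 0.5 * factors[:, 0] + np.tanh(2 * factors[:, 1]) + 0.1 * rng.normal(size=n)

# Linear multifactor attribution: returns ~ X @ beta
X = np.column_stack([np.ones(n), factors])
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
lin_pred = X @ beta

# A crude additive alternative: fit a piecewise-constant (binned) curve per factor.
def binned_fit(x, resid, bins=10):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    means = np.array([resid[idx == b].mean() for b in range(bins)])
    return means[idx]

add_pred = np.full(n, returns.mean())
for j in range(factors.shape[1]):                      # one backfitting-style pass
    add_pred += binned_fit(factors[:, j], returns - add_pred)

rmse = lambda p: np.sqrt(np.mean((returns - p) ** 2))
print("linear RMSE:", rmse(lin_pred), "additive RMSE:", rmse(add_pred))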
NASA Astrophysics Data System (ADS)
Lee, Joong Seok; Kang, Yeon June; Kim, Yoon Young
2012-12-01
This paper presents a new modeling technique that can represent acoustically coupled systems in a unified manner. The proposed unified multiphase (UMP) modeling technique uses Biot's equations that are originally derived for poroelastic media to represent not only poroelastic media but also non-poroelastic ones ranging from acoustic and elastic media to septa. To recover the original vibro-acoustic behaviors of non-poroelastic media, material parameters of a base poroelastic medium are adjusted depending on the target media. The real virtue of this UMP technique is that interface coupling conditions between any media can be automatically satisfied, so no medium-dependent interface condition needs to be imposed explicitly. Thereby, the proposed technique can effectively model any acoustically coupled system having locally varying medium phases and evolving interfaces. A typical situation can occur in an iterative design process. Because the proposed UMP modeling technique needs theoretical justifications for further development, this work is mainly focused on how the technique recovers the governing equations of non-poroelastic media and expresses their interface conditions. We also address how to describe various boundary conditions of the media in the technique. Some numerical studies are carried out to demonstrate the validity of the proposed modeling technique.
Real-Time Onboard Global Nonlinear Aerodynamic Modeling from Flight Data
NASA Technical Reports Server (NTRS)
Brandon, Jay M.; Morelli, Eugene A.
2014-01-01
Flight test and modeling techniques were developed to accurately identify global nonlinear aerodynamic models onboard an aircraft. The techniques were developed and demonstrated during piloted flight testing of an Aermacchi MB-326M Impala jet aircraft. Advanced piloting techniques and nonlinear modeling techniques based on fuzzy logic and multivariate orthogonal function methods were implemented with efficient onboard calculations and flight operations to achieve real-time maneuver monitoring and analysis, and near-real-time global nonlinear aerodynamic modeling and prediction validation testing in flight. Results demonstrated that global nonlinear aerodynamic models for a large portion of the flight envelope were identified rapidly and accurately using piloted flight test maneuvers during a single flight, with the final identified and validated models available before the aircraft landed.
NASA Technical Reports Server (NTRS)
Witte, David W.; Huebner, Lawrence D.; Trexler, Carl A.; Cabell, Karen F.; Andrews, Earl H., Jr.
2003-01-01
The scope and significance of propulsion airframe integration (PAI) for hypersonic airbreathing vehicles is presented through a discussion of the PAI test techniques utilized at NASA Langley Research Center. Four primary types of PAI model tests utilized at NASA Langley for hypersonic airbreathing vehicles are discussed. The four types of PAI test models examined are the forebody/inlet test model, the partial-width/truncated propulsion flowpath test model, the powered exhaust simulation test model, and the full-length/width propulsion flowpath test model. The test technique for each of these four types of PAI test models is described, and the relevant PAI issues addressed by each test technique are illustrated through the presentation of recent PAI test data.
Scholz, Stefan; Mittendorf, Thomas
2014-12-01
Rheumatoid arthritis (RA) is a chronic, inflammatory disease with severe effects on the functional ability of patients. Due to the prevalence of 0.5 to 1.0 percent in western countries, new treatment options are a major concern for decision makers with regard to their budget impact. In this context, cost-effectiveness analyses are a helpful tool to evaluate new treatment options for reimbursement schemes. The objective was to analyze and compare decision-analytic modeling techniques and to explore their use in RA with regard to their advantages and shortcomings. A systematic literature review was conducted in PubMed and 58 studies reporting health economics decision models were analyzed with regard to the modeling technique used. From the 58 reviewed publications, we found 13 reporting decision-tree analysis, 25 (cohort) Markov models, 13 publications on individual sampling methods (ISM) and seven discrete event simulations (DES). Of these, 26 studies were identified as presenting independently developed models and 32 models as adoptions. The modeling techniques used were found to differ in their complexity and in the number of treatment options compared. Methodological features are presented in the article and a comprehensive overview of the cost-effectiveness estimates is given in Additional files 1 and 2. When compared to the other modeling techniques, ISM and DES have advantages in the coverage of patient heterogeneity and, additionally, DES is capable of modeling more complex treatment sequences and competing risks in RA patients. Nevertheless, the availability of sufficient data is necessary to avoid assumptions in ISM and DES exercises that could otherwise bias the results. Due to the different settings, time frames and interventions in the reviewed publications, no direct comparison of modeling techniques was possible. The results from other indications suggest that incremental cost-effectiveness ratios (ICERs) do not differ significantly between Markov and DES models, but DES is able to report more outcome parameters. Given a sufficient data supply, DES is the modeling technique of choice when modeling cost-effectiveness in RA. Otherwise, transparency about the data inputs is crucial for valid results and to inform decision makers about possible biases. With regard to ICERs, Markov models might provide estimates similar to those of more advanced modeling techniques.
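For readers unfamiliar with the cohort Markov approach that dominates this literature, the sketch below runs a three-state, annual-cycle Markov cohort model in Python. All transition probabilities, costs, and utilities are made-up placeholders, not values from the reviewed RA studies.

import numpy as np

# Hypothetical 3-state Markov cohort model (remission, active disease, dead);
# numbers are illustrative only.
P = np.array([[0.85, 0.10, 0.05],       # annual transition probabilities
              [0.20, 0.70, 0.10],
              [0.00, 0.00, 1.00]])
cost    = np.array([2000.0, 8000.0, 0.0])   # cost per state per cycle
utility = np.array([0.80, 0.55, 0.0])       # QALY weight per state per cycle

cohort = np.array([0.0, 1.0, 0.0])          # everyone starts with active disease
disc = 0.035
total_cost = total_qaly = 0.0
for year in range(20):
    d = 1.0 / (1.0 + disc) ** year
    total_cost += d * cohort @ cost
    total_qaly += d * cohort @ utility
    cohort = cohort @ P                      # advance the cohort one cycle

print(f"discounted cost {total_cost:,.0f}, QALYs {total_qaly:.2f}")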
Simulations of motor unit number estimation techniques
NASA Astrophysics Data System (ADS)
Major, Lora A.; Jones, Kelvin E.
2005-06-01
Motor unit number estimation (MUNE) is an electrodiagnostic procedure used to evaluate the number of motor axons connected to a muscle. All MUNE techniques rely on assumptions that must be fulfilled to produce a valid estimate. As there is no gold standard to compare the MUNE techniques against, we have developed a model of the relevant neuromuscular physiology and have used this model to simulate various MUNE techniques. The model allows for a quantitative analysis of candidate MUNE techniques that will hopefully contribute to consensus regarding a standard procedure for performing MUNE.
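A toy version of one MUNE idea, dividing the maximal CMAP amplitude by a mean single-motor-unit amplitude estimated from a small sample of units, can be simulated in a few lines of Python. The lognormal amplitude distribution and the sample size of ten units are arbitrary assumptions, far simpler than the physiological model described in the abstract.

import numpy as np

rng = np.random.default_rng(1)
true_n = 120                                   # true number of motor units
unit_amp = rng.lognormal(mean=0.0, sigma=0.5, size=true_n)  # single-unit amplitudes (a.u.)
cmap_max = unit_amp.sum()                      # maximal compound muscle action potential

# Incremental-stimulation style estimate: sample a few units, take their mean
# amplitude as the "average SMUP", and divide it into the maximal CMAP.
sampled = rng.choice(unit_amp, size=10, replace=False)
mune = cmap_max / sampled.mean()
print(f"true N = {true_n}, estimated N = {mune:.0f}")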
This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...
Sabots, Obturator and Gas-In-Launch Tube Techniques for Heat Flux Models in Ballistic Ranges
NASA Technical Reports Server (NTRS)
Bogdanoff, David W.; Wilder, Michael C.
2013-01-01
For thermal protection system (heat shield) design for space vehicle entry into earth and other planetary atmospheres, it is essential to know the augmentation of the heat flux due to vehicle surface roughness. At the NASA Ames Hypervelocity Free Flight Aerodynamic Facility (HFFAF) ballistic range, a campaign of heat flux studies on rough models, using infrared camera techniques, has been initiated. Several phenomena can interfere with obtaining good heat flux data when using this measuring technique. These include leakage of the hot drive gas in the gun barrel through joints in the sabot (model carrier) to create spurious thermal imprints on the model forebody, deposition of sabot material on the model forebody, thereby changing the thermal properties of the model surface and unknown in-barrel heating of the model. This report presents developments in launch techniques to greatly reduce or eliminate these problems. The techniques include the use of obturator cups behind the launch package, enclosed versus open front sabot designs and the use of hydrogen gas in the launch tube. Attention also had to be paid to the problem of the obturator drafting behind the model and impacting the model. Of the techniques presented, the obturator cups and hydrogen in the launch tube were successful when properly implemented
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
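The two simplest of these combination schemes are easy to sketch. The Python example below forms a Simple Multi-model Average and a Weighted Average Method (least-squares weights fitted on a calibration window) for three synthetic model outputs; the synthetic "observed" series and the error levels are invented, and the MMSE/M3SE bias-correction variants are not reproduced.

import numpy as np

rng = np.random.default_rng(2)
obs = np.sin(np.linspace(0, 10, 200)) + 0.1 * rng.normal(size=200)   # synthetic "observed" flow
models = np.stack([obs + rng.normal(0.3, 0.4, 200),                  # three biased, noisy models
                   obs + rng.normal(-0.2, 0.3, 200),
                   obs + rng.normal(0.1, 0.6, 200)])

sma = models.mean(axis=0)                              # Simple Multi-model Average

# Weighted Average Method: least-squares weights fitted on a calibration window
ncal = 100
w, *_ = np.linalg.lstsq(models[:, :ncal].T, obs[:ncal], rcond=None)
wam = w @ models

rmse = lambda p: np.sqrt(np.mean((obs[ncal:] - p[ncal:]) ** 2))      # skill on the validation window
print("best single:", min(rmse(m) for m in models))
print("SMA:", rmse(sma), "WAM:", rmse(wam))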
GLO-STIX: Graph-Level Operations for Specifying Techniques and Interactive eXploration
Stolper, Charles D.; Kahng, Minsuk; Lin, Zhiyuan; Foerster, Florian; Goel, Aakash; Stasko, John; Chau, Duen Horng
2015-01-01
The field of graph visualization has produced a wealth of visualization techniques for accomplishing a variety of analysis tasks. Therefore analysts often rely on a suite of different techniques, and visual graph analysis application builders strive to provide this breadth of techniques. To provide a holistic model for specifying network visualization techniques (as opposed to considering each technique in isolation) we present the Graph-Level Operations (GLO) model. We describe a method for identifying GLOs and apply it to identify five classes of GLOs, which can be flexibly combined to re-create six canonical graph visualization techniques. We discuss advantages of the GLO model, including potentially discovering new, effective network visualization techniques and easing the engineering challenges of building multi-technique graph visualization applications. Finally, we implement the GLOs that we identified into the GLO-STIX prototype system that enables an analyst to interactively explore a graph by applying GLOs. PMID:26005315
Unified Model Deformation and Flow Transition Measurements
NASA Technical Reports Server (NTRS)
Burner, Alpheus W.; Liu, Tianshu; Garg, Sanjay; Bell, James H.; Morgan, Daniel G.
1999-01-01
The number of optical techniques that may potentially be used during a given wind tunnel test is continually growing. These include parameter sensitive paints that are sensitive to temperature or pressure, several different types of off-body and on-body flow visualization techniques, optical angle-of-attack (AoA), optical measurement of model deformation, optical techniques for determining density or velocity, and spectroscopic techniques for determining various flow field parameters. Often in the past the various optical techniques were developed independently of each other, with little or no consideration for other techniques that might also be used during a given test. Recently two optical techniques have been increasingly requested for production measurements in NASA wind tunnels. These are the video photogrammetric (or videogrammetric) technique for measuring model deformation known as the video model deformation (VMD) technique, and the parameter sensitive paints for making global pressure and temperature measurements. Considerations for, and initial attempts at, simultaneous measurements with the pressure sensitive paint (PSP) and the videogrammetric techniques have been implemented. Temperature sensitive paint (TSP) has been found to be useful for boundary-layer transition detection since turbulent boundary layers convect heat at higher rates than laminar boundary layers of comparable thickness. Transition is marked by a characteristic surface temperature change wherever there is a difference between model and flow temperatures. Recently, additional capabilities have been implemented in the target-tracking videogrammetric measurement system. These capabilities have permitted practical simultaneous measurements using parameter sensitive paint and video model deformation measurements that led to the first successful unified test with TSP for transition detection in a large production wind tunnel.
Increasing the reliability of ecological models using modern software engineering techniques
Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff
2009-01-01
Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...
Randomized Item Response Theory Models
ERIC Educational Resources Information Center
Fox, Jean-Paul
2005-01-01
The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by an item response theory (IRT) model. The RR…
Discrete-time modelling of musical instruments
NASA Astrophysics Data System (ADS)
Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti
2006-01-01
This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
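As a minimal, hedged example of the digital waveguide family mentioned above, the following Python function implements a Karplus-Strong-style plucked string: a delay line whose length sets the pitch, closed by a two-point-average loop filter. The sampling rate, decay factor, and noise excitation are generic choices, not parameters from the article.

import numpy as np

def plucked_string(freq=440.0, fs=44100, dur=1.0, decay=0.996):
    """Very small digital-waveguide-style string: a delay line with a
    lowpass (two-point average) loop filter, i.e. the Karplus-Strong model."""
    n = int(fs / freq)                      # delay-line length sets the pitch
    line = np.random.uniform(-1, 1, n)      # pluck: fill the line with noise
    out = np.empty(int(fs * dur))
    for i in range(len(out)):
        out[i] = line[0]
        avg = decay * 0.5 * (line[0] + line[1])   # loop filter: damping plus averaging
        line = np.roll(line, -1)
        line[-1] = avg
    return out

samples = plucked_string()   # write to a WAV file or plot the envelope to inspect the decay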
On Using Meta-Modeling and Multi-Modeling to Address Complex Problems
ERIC Educational Resources Information Center
Abu Jbara, Ahmed
2013-01-01
Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…
NASA Technical Reports Server (NTRS)
Burk, S. M., Jr.; Wilson, C. F., Jr.
1975-01-01
A relatively inexpensive radio-controlled model stall/spin test technique was developed. Operational experiences using the technique are presented. A discussion of model construction techniques, spin-recovery parachute system, data recording system, and movie camera tracking system is included. Also discussed are a method of measuring moments of inertia, scaling of engine thrust, cost and time required to conduct a program, and examples of the results obtained from the flight tests.
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo model for a nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. However, the present research paper introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for a nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] is explained in a very different way in this paper. In 2006, David Pollard et al. used empirical process techniques to study the asymptotics of the LSE (least-squares estimator) for the fitting of nonlinear regression functions. In Jae Myung [13] provided a conceptual guide to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
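Independently of the matrix-calculus derivation in the paper, a nonlinear least squares estimator can be illustrated with a plain Gauss-Newton iteration; the exponential model y = a*exp(b*x), the synthetic data, and the fixed iteration count below are assumptions made only for this sketch.

import numpy as np

# Minimal Gauss-Newton sketch for the nonlinear regression y = a * exp(b * x) + noise.
rng = np.random.default_rng(3)
x = np.linspace(0, 2, 50)
y = 2.0 * np.exp(0.7 * x) + 0.05 * rng.normal(size=x.size)

theta = np.array([1.0, 0.1])                 # initial guess for (a, b)
for _ in range(20):
    a, b = theta
    f = a * np.exp(b * x)
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])   # Jacobian of f w.r.t. (a, b)
    step, *_ = np.linalg.lstsq(J, y - f, rcond=None)              # Gauss-Newton step
    theta += step
print("estimated (a, b):", theta)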
Videogrammetric Model Deformation Measurement Technique
NASA Technical Reports Server (NTRS)
Burner, A. W.; Liu, Tian-Shu
2001-01-01
The theory, methods, and applications of the videogrammetric model deformation (VMD) measurement technique used at NASA for wind tunnel testing are presented. The VMD technique, based on non-topographic photogrammetry, can determine static and dynamic aeroelastic deformation and attitude of a wind-tunnel model. Hardware of the system includes a video-rate CCD camera, a computer with an image acquisition frame grabber board, illumination lights, and retroreflective or painted targets on a wind tunnel model. Custom software includes routines for image acquisition, target-tracking/identification, target centroid calculation, camera calibration, and deformation calculations. Applications of the VMD technique at five large NASA wind tunnels are discussed.
NASA Technical Reports Server (NTRS)
Towner, Robert L.; Band, Jonathan L.
2012-01-01
An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis) and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
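One of the numerical indicators mentioned, the Modal Assurance Criterion, is compact enough to show directly. The Python sketch below computes a MAC matrix between two sets of mode-shape vectors; the random orthonormal "modes" and the perturbation level are invented stand-ins for actual finite element eigenvectors.

import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion matrix between two sets of mode shapes
    (columns are modes). Values close to 1 indicate correlated shapes."""
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.einsum('ij,ij->j', phi_a, phi_a),
                   np.einsum('ij,ij->j', phi_b, phi_b))
    return num / den

# Example: compare mode shapes of a full model with a slightly perturbed model.
rng = np.random.default_rng(4)
phi_full = np.linalg.qr(rng.normal(size=(50, 5)))[0]          # 5 orthonormal "modes"
phi_pert = phi_full + 0.05 * rng.normal(size=phi_full.shape)  # perturbed shapes
print(np.round(mac(phi_full, phi_pert), 2))                   # near-identity => modes track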
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Cunningham, Kevin; Hill, Melissa A.
2013-01-01
Flight test and modeling techniques were developed for efficiently identifying global aerodynamic models that can be used to accurately simulate stall, upset, and recovery on large transport airplanes. The techniques were developed and validated in a high-fidelity fixed-base flight simulator using a wind-tunnel aerodynamic database, realistic sensor characteristics, and a realistic flight deck representative of a large transport aircraft. Results demonstrated that aerodynamic models for stall, upset, and recovery can be identified rapidly and accurately using relatively simple piloted flight test maneuvers. Stall maneuver predictions and comparisons of identified aerodynamic models with data from the underlying simulation aerodynamic database were used to validate the techniques.
A pilot modeling technique for handling-qualities research
NASA Technical Reports Server (NTRS)
Hess, R. A.
1980-01-01
A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.
A Method to Test Model Calibration Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ron; Polly, Ben; Neymark, Joel
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
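A toy version of this surrogate-data test can be written in a few lines: a simple degree-day "truth" model generates synthetic monthly bills, a candidate calibration technique (here just a least-squares fit) is tuned to them, and the three figures of merit are evaluated. The model form, parameter values, and retrofit assumption below are all hypothetical and much simpler than a whole-building simulation.

import numpy as np

hdd = np.array([800, 650, 500, 300, 120, 30, 10, 20, 90, 300, 550, 750])  # monthly heating degree-days

def energy(ua, base, hdd):
    return base + ua * hdd                      # kWh per month (toy degree-day model)

true_ua, true_base = 1.8, 400.0
bills = energy(true_ua, true_base, hdd) + np.random.default_rng(5).normal(0, 40, 12)  # surrogate bills

# "Calibration technique" under test: least-squares fit of (ua, base) to the bills.
X = np.column_stack([hdd, np.ones(12)])
ua_hat, base_hat = np.linalg.lstsq(X, bills, rcond=None)[0]

# Retrofit scenario: insulation cuts UA by 30%. Compare predicted vs "true" annual savings.
true_savings = (energy(true_ua, true_base, hdd) - energy(0.7 * true_ua, true_base, hdd)).sum()
pred_savings = (energy(ua_hat, base_hat, hdd) - energy(0.7 * ua_hat, base_hat, hdd)).sum()
cvrmse = np.sqrt(np.mean((bills - energy(ua_hat, base_hat, hdd)) ** 2)) / bills.mean()

print(f"savings error {100*(pred_savings-true_savings)/true_savings:+.1f}%, "
      f"UA error {100*(ua_hat-true_ua)/true_ua:+.1f}%, CV(RMSE) {100*cvrmse:.1f}%")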
Mohiuddin, Syed
2014-08-01
Bipolar disorder (BD) is a chronic and relapsing mental illness with a considerable health-related and economic burden. The primary goal of pharmacotherapeutics for BD is to improve patients' well-being. The use of decision-analytic models is key in assessing the added value of the pharmacotherapeutics aimed at treating the illness, but concerns have been expressed about the appropriateness of different modelling techniques and about the transparency in the reporting of economic evaluations. This paper aimed to identify and critically appraise published model-based economic evaluations of pharmacotherapeutics in BD patients. A systematic review combining common terms for BD and economic evaluation was conducted in MEDLINE, EMBASE, PSYCINFO and ECONLIT. Studies identified were summarised and critically appraised in terms of the use of modelling technique, model structure and data sources. Considering the prognosis and management of BD, the possible benefits and limitations of each modelling technique are discussed. Fourteen studies were identified using model-based economic evaluations of pharmacotherapeutics in BD patients. Of these 14 studies, nine used Markov, three used discrete-event simulation (DES) and two used decision-tree models. Most of the studies (n = 11) did not include the rationale for the choice of modelling technique undertaken. Half of the studies did not include the risk of mortality. Surprisingly, no study considered the risk of having a mixed bipolar episode. This review identified various modelling issues that could potentially reduce the comparability of one pharmacotherapeutic intervention with another. Better use and reporting of the modelling techniques in the future studies are essential. DES modelling appears to be a flexible and comprehensive technique for evaluating the comparability of BD treatment options because of its greater flexibility of depicting the disease progression over time. However, depending on the research question, modelling techniques other than DES might also be appropriate in some cases.
2014-03-27
fidelity. This pairing is accomplished through the use of a space mapping technique, which is a process where the design space of a lower fidelity model...is aligned with that of a higher fidelity model. The intent of applying space mapping techniques to the field of surrogate construction is to leverage the
Modeling and prototyping of biometric systems using dataflow programming
NASA Astrophysics Data System (ADS)
Minakova, N.; Petrov, I.
2018-01-01
The development of biometric systems is a labor-intensive process, so the creation and analysis of supporting approaches and techniques is an urgent task at present. This article presents a technique for modeling and prototyping biometric systems based on dataflow programming. The technique includes three main stages: the development of functional blocks, the creation of a dataflow graph and the generation of a prototype. A specially developed software modeling environment that implements this technique is described. As an example of the use of this technique, the implementation of an iris localization subsystem is demonstrated. A modification of dataflow programming is suggested to solve the problem related to the undefined order of block activation. The main advantage of the presented technique is the ability to visually display and design the model of the biometric system, the rapid creation of a working prototype and the reuse of previously developed functional blocks.
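A minimal sketch of the dataflow idea is shown below: functional blocks are nodes that fire once all of their inputs are available. The block names (grayscale, locate_iris) and the scheduling loop are placeholders invented for illustration; they are not the modeling environment or API described in the article.

from collections import deque

class Block:
    """A functional block: a name, a function, and the names of its inputs."""
    def __init__(self, name, func, inputs):
        self.name, self.func, self.inputs = name, func, inputs

def run(blocks, sources):
    values = dict(sources)                       # name -> produced value
    pending = deque(blocks)
    while pending:
        blk = pending.popleft()
        if all(i in values for i in blk.inputs): # fire only when all inputs are ready
            values[blk.name] = blk.func(*[values[i] for i in blk.inputs])
        else:
            pending.append(blk)                  # defer until inputs arrive
    return values

blocks = [
    Block("grayscale", lambda img: f"gray({img})", ["image"]),
    Block("locate_iris", lambda g: f"iris_region_of({g})", ["grayscale"]),
]
print(run(blocks, {"image": "eye.png"})["locate_iris"])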
Evaluation of impression accuracy for a four-implant mandibular model--a digital approach.
Stimmelmayr, Michael; Erdelt, Kurt; Güth, Jan-Frederik; Happe, Arndt; Beuer, Florian
2012-08-01
Implant-supported prosthodontics requires precise impressions to achieve a passive fit. Since the early 1990s, in vitro studies comparing different implant impression techniques were performed, capturing the data mostly mechanically. The purpose of this study was to evaluate the accuracy of three different impression techniques digitally. Dental implants were inserted bilaterally in ten polymer lower-arch models at the positions of the first molars and canines. From each original model, three different impressions (A, transfer; B, pick-up; and C, splinted pick-up) were taken. Scan-bodies were mounted on the implants of the polymer and on the lab analogues of the stone models and digitized. The scan-body in position 36 (FDI) of the digitized original and master casts were each superimposed, and the deviations of the remaining three scan-bodies were measured three-dimensionally. The systematic error of digitizing the models was 13 μm for the polymer and 5 μm for the stone model. The mean discrepancies of the original model to the stone casts were 124 μm (±34) μm for the transfer technique, 116 (±46) μm for the pick-up technique, and 80 (±25) μm for the splinted pick-up technique. There were statistically significant discrepancies between the evaluated impression techniques (p ≤ 0.025; ANOVA test). The splinted pick-up impression showed the least deviation between original and stone model; transfer and pick-up techniques showed similar results. For better accuracy of implant-supported prosthodontics, the splinted pick-up technique should be used for impressions of four implants evenly spread in edentulous jaws.
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining the model accuracy are critical requirements for real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed by using an optimal knot placement technique. The proposed model reduces the univariate nonlinear function of the electrode's open circuit potential dependence on the state of charge to continuous piecewise regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and purely identifying all parameters using optimization techniques. The model is then parameterized in each continuous, piecewise-linear, region. Applying the proposed technique cuts down the CPU run-time by around 20%, compared to the reduced-order, electrode-average model. Finally, the model validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately with less than 2% error.
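The piecewise-linear idea itself is easy to illustrate. The Python sketch below approximates a synthetic open-circuit-voltage curve OCV(SOC) by linear interpolation between a handful of knots; the curve shape and the hand-picked knot positions are assumptions, whereas the paper selects knots with an optimal placement technique that is not reproduced here.

import numpy as np

# Synthetic OCV(SOC) curve standing in for measured electrode open-circuit potential.
soc = np.linspace(0.0, 1.0, 500)
ocv = 3.0 + 0.7 * soc + 0.15 * np.tanh(8 * (soc - 0.15)) + 0.1 * np.exp(5 * (soc - 1.0))

knots = np.array([0.0, 0.05, 0.15, 0.3, 0.6, 0.9, 1.0])     # hypothetical knot placement
ocv_pw = np.interp(soc, knots, np.interp(knots, soc, ocv))  # continuous piecewise-linear fit

max_err_mV = 1000 * np.abs(ocv - ocv_pw).max()
print(f"max OCV approximation error: {max_err_mV:.1f} mV")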
Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F
2012-01-01
Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time restriction (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving non-linear models such as those usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on the use of model reduction techniques together with an efficient non-linear solver. Examples of the performance of the technique are given. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations caused by the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
Applying knowledge compilation techniques to model-based reasoning
NASA Technical Reports Server (NTRS)
Keller, Richard M.
1991-01-01
Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.
Application of the weighted total field-scattering field technique to 3D-PSTD light scattering model
NASA Astrophysics Data System (ADS)
Hu, Shuai; Gao, Taichang; Liu, Lei; Li, Hao; Chen, Ming; Yang, Bo
2018-04-01
PSTD (Pseudo Spectral Time Domain) is an excellent model for the light scattering simulation of nonspherical aerosol particles. However, due to the particularity of its discretization form of the Maxwell's equations, the traditional Total Field/Scattering Field (TF/SF) technique for FDTD (Finite Differential Time Domain) is not applicable to PSTD, and the time-consuming pure scattering field technique is mainly applied to introduce the incident wave. To this end, the weighted TF/SF technique proposed by X. Gao is generalized and applied to the 3D-PSTD scattering model. Using this technique, the incident light can be effectively introduced by modifying the electromagnetic components in an inserted connecting region between the total field and the scattering field region with incident terms, where the incident terms are obtained by weighting the incident field by a window function. To optimally determine the thickness of connection region and the window function type for PSTD calculations, their influence on the modeling accuracy is firstly analyzed. To further verify the effectiveness and advantages of the weighted TF/SF technique, the improved PSTD model is validated against the PSTD model equipped with pure scattering field technique in both calculation accuracy and efficiency. The results show that, the performance of PSTD seems to be not sensitive to variation of window functions. The number of the connection layer required decreases with the increasing of spatial resolution, where for spatial resolution of 24 grids per wavelength, a 6-layer region is thick enough. The scattering phase matrices and integral scattering parameters obtained by the improved PSTD show an excellent consistency with those well-tested models for spherical and nonspherical particles, illustrating that the weighted TF/SF technique can introduce the incident precisely. The weighted TF/SF technique shows higher computational efficiency than pure scattering technique.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameter, model and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods are often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for heat conduction and diffusion process involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of the Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. 
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
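As a much-reduced illustration of sampling-based Bayesian calibration, the sketch below runs a plain random-walk Metropolis chain for a two-parameter linear model with a Gaussian likelihood; the data, flat prior, proposal scale, and burn-in are arbitrary, and the adaptive DRAM/DREAM samplers and the HIV and heat models of the dissertation are not reproduced.

import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 30)
y = 2.0 * x + 0.5 + 0.1 * rng.normal(size=x.size)   # synthetic observations
sigma = 0.1

def log_post(theta):                       # flat prior + Gaussian likelihood
    resid = y - (theta[0] * x + theta[1])
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([1.0, 0.0])
chain, lp = [], log_post(theta)
for _ in range(5000):
    prop = theta + 0.05 * rng.normal(size=2)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[1000:])                      # discard burn-in
print("posterior means:", chain.mean(axis=0), "posterior sd:", chain.std(axis=0))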
Rapid prototyping model for percutaneous nephrolithotomy training.
Bruyère, Franck; Leroux, Cecile; Brunereau, Laurent; Lermusiaux, Patrick
2008-01-01
Rapid prototyping is a technique used for creating computer images in three dimensions more efficiently than classic techniques. Percutaneous nephrolithotomy (PCNL) is a popular method to remove kidney stones; however, broader use by the urologic community has been hampered by the morbidity associated with needle puncture to gain access to the renal calix (bleeding, pneumothorax, hydrothorax, inadvertent colon injury). A training model to improve technique and understanding of renal anatomy could improve complications related to renal puncture; however, no model currently exists for resident training. We created a training model using the rapid prototyping technique based on abdominal CT images of a patient scheduled to undergo PCNL. This allowed our staff and residents to train on the model before performing the operation. This model allowed anticipation of particular difficulties inherent to the patient's anatomy. After training, the procedure proceeded without complication, and the patient was discharged at postoperative day 1 without problems. We hypothesize that rapid prototyping could be useful for resident education, allowing the creation of numerous models for research and surgical training. In addition, we anticipate that experienced urologists could find this technique helpful in preparation for difficult PCNL operations.
A novel CT acquisition and analysis technique for breathing motion modeling
NASA Astrophysics Data System (ADS)
Low, Daniel A.; White, Benjamin M.; Lee, Percy P.; Thomas, David H.; Gaudio, Sergio; Jani, Shyam S.; Wu, Xiao; Lamb, James M.
2013-06-01
To report on a novel technique for providing artifact-free quantitative four-dimensional computed tomography (4DCT) image datasets for breathing motion modeling. Commercial clinical 4DCT methods have difficulty managing irregular breathing. The resulting images contain motion-induced artifacts that can distort structures and inaccurately characterize breathing motion. We have developed a novel scanning and analysis method for motion-correlated CT that utilizes standard repeated fast helical acquisitions, a simultaneous breathing surrogate measurement, deformable image registration, and a published breathing motion model. The motion model differs from the CT-measured motion by an average of 0.65 mm, indicating the precision of the motion model. The integral of the divergence of one of the motion model parameters is predicted to be a constant 1.11 and is found in this case to be 1.09, indicating the accuracy of the motion model. The proposed technique shows promise for providing motion-artifact free images at user-selected breathing phases, accurate Hounsfield units, and noise characteristics similar to non-4D CT techniques, at a patient dose similar to or less than current 4DCT techniques.
This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...
Simulation of the fissureless technique for thoracoscopic segmentectomy using rapid prototyping.
Akiba, Tadashi; Nakada, Takeo; Inagaki, Takuya
2015-01-01
The fissureless lobectomy or anterior fissureless technique is a novel surgical technique, which avoids dissection of the lung parenchyma over the pulmonary artery during lobectomy by open thoracotomy approach or direct vision thoracoscopic surgery. This technique is indicated for fused lobes. We present two cases where thoracoscopic pulmonary segmentectomy was performed using the fissureless technique simulated by three-dimensional (3D) pulmonary models. The 3D model and rapid prototyping provided an accurate anatomical understanding of the operative field in both cases. We believe that the construction of these models is useful for thoracoscopic and other complicated surgeries of the chest.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose interest rate actual variation and feedforward neural network for training and prediction. Particle swarm optimization technique is adopted to optimize its initial weights. For comparison purpose, autoregressive moving average model, random walk process and the naive model are used as main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates; including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and effective federal fund rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.
Spatial Assessment of Model Errors from Four Regression Techniques
Lianjun Zhang; Jeffrey H. Gove; Jeffrey H. Gove
2005-01-01
Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...
Model-based RSA of a femoral hip stem using surface and geometrical shape models.
Kaptein, Bart L; Valstar, Edward R; Spoor, Cees W; Stoel, Berend C; Rozing, Piet M
2006-07-01
Roentgen stereophotogrammetry (RSA) is a highly accurate three-dimensional measuring technique for assessing micromotion of orthopaedic implants. A drawback is that markers have to be attached to the implant. Model-based techniques have been developed to prevent using special marked implants. We compared two model-based RSA methods with standard marker-based RSA techniques. The first model-based RSA method used surface models, and the second method used elementary geometrical shape (EGS) models. We used a commercially available stem to perform experiments with a phantom as well as reanalysis of patient RSA radiographs. The data from the phantom experiment indicated the accuracy and precision of the elementary geometrical shape model-based RSA method is equal to marker-based RSA. For model-based RSA using surface models, the accuracy is equal to the accuracy of marker-based RSA, but its precision is worse. We found no difference in accuracy and precision between the two model-based RSA techniques in clinical data. For this particular hip stem, EGS model-based RSA is a good alternative for marker-based RSA.
NASA Technical Reports Server (NTRS)
OBrien, T. Kevin (Technical Monitor); Krueger, Ronald; Minguet, Pierre J.
2004-01-01
The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to tension and three-point bending was studied. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front was used to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlation of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherents. In addition, the application of the submodeling technique for the simulation of skin/stringer debond was also studied. Global models made of shell elements and solid elements were studied. Solid elements were used for local submodels, which extended between three and six specimen thicknesses on either side of the delamination front to model the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from the simulations using the submodeling technique were not in agreement with results obtained from full solid models.
A thermal scale modeling study for Apollo and Apollo applications, volume 2
NASA Technical Reports Server (NTRS)
Shannon, R. L.
1972-01-01
The development and demonstration of practical thermal scale modeling techniques applicable to systems involving radiation, conduction, and convection, with emphasis on the cabin atmosphere/cabin wall thermal interface, are discussed. The Apollo spacecraft environment is used as the model. Four possible scaling techniques were considered: (1) modified material preservation, (2) temperature preservation, (3) scaling compromises, and (4) Nusselt number preservation. A thermal mathematical model was developed for use with the Nusselt number preservation technique.
Automation of energy demand forecasting
NASA Astrophysics Data System (ADS)
Siddique, Sanzad
Automation of energy demand forecasting saves time and effort by searching automatically for an appropriate model in a candidate model space without manual intervention. This thesis introduces a search-based approach that improves the performance of the model searching process for econometrics models. Further improvements in the accuracy of the energy demand forecasting are achieved by integrating nonlinear transformations within the models. This thesis introduces machine learning techniques that are capable of modeling such nonlinearity. Algorithms for learning domain knowledge from time series data using the machine learning methods are also presented. The novel search based approach and the machine learning models are tested with synthetic data as well as with natural gas and electricity demand signals. Experimental results show that the model searching technique is capable of finding an appropriate forecasting model. Further experimental results demonstrate an improved forecasting accuracy achieved by using the novel machine learning techniques introduced in this thesis. This thesis presents an analysis of how the machine learning techniques learn domain knowledge. The learned domain knowledge is used to improve the forecast accuracy.
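A stripped-down example of search-based model selection is sketched below: autoregressive demand models of several orders are fitted by least squares and the order with the lowest AIC is chosen automatically. The synthetic AR(2) "demand" series and the candidate model space are assumptions for illustration, not the econometric models of the thesis.

import numpy as np

rng = np.random.default_rng(8)
n = 400
demand = np.zeros(n)
for t in range(2, n):                      # synthetic AR(2) "demand" signal
    demand[t] = 0.6 * demand[t-1] + 0.3 * demand[t-2] + rng.normal()

def fit_ar(y, p):
    """Fit AR(p) by least squares and return its AIC and coefficients."""
    Y = y[p:]
    X = np.column_stack([y[p-k:len(y)-k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = np.sum((Y - X @ coef) ** 2)
    aic = len(Y) * np.log(rss / len(Y)) + 2 * p
    return aic, coef

# Automated search: evaluate orders 1..7 and keep the lowest-AIC model.
best = min((fit_ar(demand, p) + (p,) for p in range(1, 8)), key=lambda t: t[0])
print("selected order:", best[2], "coefficients:", np.round(best[1], 2))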
NASA Astrophysics Data System (ADS)
Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.
1996-01-01
A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is natural separation of relaxation and source terms and allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.
NASA Astrophysics Data System (ADS)
Amicarelli, A.; Gariazzo, C.; Finardi, S.; Pelliccioni, A.; Silibello, C.
2008-05-01
Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecast and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by FARM model, several numeric experiments were conducted over the urban area of Rome, Italy, during a summer episode.
Bearing Fault Diagnosis by a Robust Higher-Order Super-Twisting Sliding Mode Observer.
Piltan, Farzin; Kim, Jong-Myon
2018-04-07
An effective bearing fault detection and diagnosis (FDD) model is important for ensuring the normal and safe operation of machines. This paper presents a reliable model-reference observer technique for FDD based on modeling of a bearing's vibration data by analyzing the dynamic properties of the bearing and a higher-order super-twisting sliding mode observation (HOSTSMO) technique for making diagnostic decisions using these data models. The HOSTSMO technique can adaptively improve the performance of estimating nonlinear failures in rolling element bearings (REBs) over a linear approach by modeling 5 degrees of freedom under normal and faulty conditions. The effectiveness of the proposed technique is evaluated using a vibration dataset provided by Case Western Reserve University, which consists of vibration acceleration signals recorded for REBs with inner, outer, ball, and no faults, i.e., normal. Experimental results indicate that the proposed technique outperforms the ARX-Laguerre proportional integral observation (ALPIO) technique, yielding 18.82%, 16.825%, and 17.44% performance improvements for three levels of crack severity of 0.007, 0.014, and 0.021 inches, respectively.
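As an illustration of the super-twisting sliding mode observation idea, the sketch below implements a standard second-order super-twisting observer for a single vibration mode measured by position only. The single-degree-of-freedom plant, gains, and disturbance are illustrative assumptions; the paper's HOSTSMO models 5 degrees of freedom of a bearing.

```python
# Minimal sketch of a second-order super-twisting sliding mode observer for a
# single vibration mode, x1'' = -(k/m) x1 - (c/m) x1' + d(t), measured by
# position only.  Gains and the single-DOF plant are illustrative assumptions;
# the paper's HOSTSMO models 5 degrees of freedom.
import numpy as np

m, c, k = 1.0, 0.8, 400.0          # assumed modal mass, damping, stiffness
lam, alpha = 12.0, 50.0            # super-twisting observer gains (tuning assumption)
dt, T = 1e-4, 2.0
n = int(T / dt)

x = np.array([0.01, 0.0])          # true state [position, velocity]
xh = np.zeros(2)                   # observer state

for i in range(n):
    t = i * dt
    d = 5.0 * np.sin(40.0 * t)     # unknown disturbance (e.g. fault-induced force)
    # true plant, explicit Euler step
    ax = (-k * x[0] - c * x[1]) / m + d
    x = x + dt * np.array([x[1], ax])
    # measurement: position only
    e1 = x[0] - xh[0]
    # super-twisting injection terms
    axh = (-k * xh[0] - c * xh[1]) / m
    xh = xh + dt * np.array([xh[1] + lam * np.sqrt(abs(e1)) * np.sign(e1),
                             axh + alpha * np.sign(e1)])

print("final velocity estimation error:", x[1] - xh[1])
```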
Teaching "Instant Experience" with Graphical Model Validation Techniques
ERIC Educational Resources Information Center
Ekstrøm, Claus Thorn
2014-01-01
Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
Comparison of Sequential and Variational Data Assimilation
NASA Astrophysics Data System (ADS)
Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht
2017-04-01
Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential for using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which requires no additional features within the modeling process, i.e. it can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to the lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
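A minimal sketch of the sequential approach mentioned above, assuming a stochastic (perturbed-observation) Ensemble Kalman Filter analysis step with a toy three-variable state and a single observation; the state, observation operator, and error statistics are illustrative, not the HBV setup of the study.

```python
# Minimal sketch of a stochastic Ensemble Kalman Filter analysis step, the kind
# of sequential technique the abstract contrasts with variational assimilation.
# The toy state, observation operator, and error statistics are assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_ens, n_state = 50, 3             # ensemble size, state dimension (e.g. model storages)
ensemble = rng.normal(10.0, 2.0, size=(n_ens, n_state))   # forecast (prior) ensemble

H = np.array([[1.0, 0.0, 0.0]])    # observe the first state variable
y_obs = 12.5                       # observed value
r = 0.5 ** 2                       # observation error variance

# ensemble statistics
X = ensemble - ensemble.mean(axis=0)            # state anomalies, (n_ens, n_state)
HX = ensemble @ H.T                             # simulated observations, (n_ens, 1)
HXa = HX - HX.mean(axis=0)

P_xy = X.T @ HXa / (n_ens - 1)                  # state-observation covariance, (n_state, 1)
P_yy = (HXa ** 2).sum() / (n_ens - 1) + r       # innovation variance
K = P_xy / P_yy                                 # Kalman gain, (n_state, 1)

# perturbed-observation update of each ensemble member
y_pert = y_obs + rng.normal(0.0, np.sqrt(r), size=(n_ens, 1))
analysis = ensemble + (y_pert - HX) @ K.T

print("prior mean:   ", ensemble.mean(axis=0))
print("analysis mean:", analysis.mean(axis=0))
```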
Estimation of Unsteady Aerodynamic Models from Dynamic Wind Tunnel Data
NASA Technical Reports Server (NTRS)
Murphy, Patrick; Klein, Vladislav
2011-01-01
Demanding aerodynamic modelling requirements for military and civilian aircraft have motivated researchers to improve computational and experimental techniques and to pursue closer collaboration in these areas. Model identification and validation techniques are key components for this research. This paper presents mathematical model structures and identification techniques that have been used successfully to model more general aerodynamic behaviours in single-degree-of-freedom dynamic testing. Model parameters, characterizing aerodynamic properties, are estimated using linear and nonlinear regression methods in both time and frequency domains. Steps in identification including model structure determination, parameter estimation, and model validation, are addressed in this paper with examples using data from one-degree-of-freedom dynamic wind tunnel and water tunnel experiments. These techniques offer a methodology for expanding the utility of computational methods in application to flight dynamics, stability, and control problems. Since flight test is not always an option for early model validation, time history comparisons are commonly made between computational and experimental results and model adequacy is inferred by corroborating results. An extension is offered to this conventional approach where more general model parameter estimates and their standard errors are compared.
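As a sketch of the linear-regression branch of such parameter estimation, the example below fits a simple pitching-moment model by least squares and reports standard errors from the estimate covariance. The model structure and synthetic "measurements" are assumptions for illustration, not the paper's wind or water tunnel data.

```python
# Minimal sketch of time-domain linear regression for aerodynamic parameter
# estimation, C_m = C_m0 + C_m_alpha*alpha + C_m_q*qhat, with standard errors
# from the covariance of the estimates.  Model structure and "data" are
# synthetic illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 500
alpha = np.deg2rad(rng.uniform(-10, 10, n))     # angle of attack, rad
qhat = rng.uniform(-0.05, 0.05, n)              # nondimensional pitch rate
true = np.array([0.02, -0.60, -8.0])            # C_m0, C_m_alpha, C_m_q (assumed)

X = np.column_stack([np.ones(n), alpha, qhat])  # regressor matrix
y = X @ true + rng.normal(0, 0.002, n)          # "measured" pitching moment coefficient

theta, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = n - X.shape[1]
sigma2 = np.sum((y - X @ theta) ** 2) / dof
cov = sigma2 * np.linalg.inv(X.T @ X)           # estimate covariance
stderr = np.sqrt(np.diag(cov))

for name, est, se in zip(["C_m0", "C_m_alpha", "C_m_q"], theta, stderr):
    print(f"{name}: {est:+.4f} +/- {se:.4f}")
```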
Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.
2003-01-01
The use of multi-dimensional finite volume numerical techniques with finite thickness models for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-orientated heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients due to the striation patterns, two-dimensional heat transfer techniques were necessary to obtain accurate heating rates. The use of the one-dimensional technique resulted in differences of 20% in the calculated heating rates because it did not account for lateral heat conduction in the model.
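A minimal sketch of the direct one-dimensional finite-volume technique for a finite-thickness wall: given an applied surface heat flux, the wall temperature is marched forward in time with an insulated back face (the inverse technique would instead adjust the flux to match measured surface temperatures). Material properties and the flux level are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the "direct" 1-D finite-volume technique: march a
# finite-thickness wall forward in time under an applied surface heat flux and
# an insulated back face.  Material properties, thickness, and flux level are
# illustrative assumptions (roughly representative of a ceramic model).
import numpy as np

k, rho, cp = 1.0, 2000.0, 800.0      # W/m-K, kg/m^3, J/kg-K (assumed)
L, n = 0.01, 50                      # wall thickness (m), number of cells
dx = L / n
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha             # explicit stability limit with margin
q_s = 2.0e4                          # applied surface heat flux, W/m^2
T = np.full(n, 300.0)                # initial temperature, K

t, t_end = 0.0, 5.0
while t < t_end:
    Tn = T.copy()
    # conductive flux at interior faces, positive toward +x (into the wall)
    flux = -k * (Tn[1:] - Tn[:-1]) / dx
    # interior cells: flux in at left face minus flux out at right face
    T[1:-1] += dt / (rho * cp * dx) * (flux[:-1] - flux[1:])
    # heated surface cell (x = 0): applied flux in, conduction out
    T[0] += dt / (rho * cp * dx) * (q_s - flux[0])
    # insulated back face: conduction in only
    T[-1] += dt / (rho * cp * dx) * flux[-1]
    t += dt

print("surface temperature rise:", T[0] - 300.0, "K")
```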
Molecular Modeling in Drug Design for the Development of Organophosphorus Antidotes/Prophylactics.
1986-06-01
multidimensional statistical QSAR analysis techniques to suggest new structures for synthesis and evaluation. C. Application of quantum chemical techniques to...compounds for synthesis and testing for antidotal potency. E. Use of computer-assisted methods to determine the steric constraints at the active site...modeling techniques to model the enzyme acetylcholinesterase. H. Suggestion of some novel compounds for synthesis and testing for reactivating
2010-09-01
ADVANCEMENT OF TECHNIQUES FOR MODELING THE EFFECTS OF ATMOSPHERIC GRAVITY-WAVE-INDUCED INHOMOGENEITIES ON INFRASOUND PROPAGATION Robert G...number of infrasound observations indicate that fine-scale atmospheric inhomogeneities contribute to infrasonic arrivals that are not predicted by...standard modeling techniques. In particular, gravity waves, or buoyancy waves, are believed to contribute to the multipath nature of infrasound
Next generation initiation techniques
NASA Technical Reports Server (NTRS)
Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans
1993-01-01
Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The third kind of next-generation technique involves strategies to initialize convective scale (non-hydrostatic) models.
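A minimal sketch of the Newtonian-relaxation ("nudging") idea on a toy scalar model: an extra relaxation term pulls the model state toward observations during an assimilation window, after which the model runs freely. The toy dynamics, relaxation time scale, and window are illustrative assumptions.

```python
# Minimal sketch of Newtonian relaxation ("nudging"): an extra term
# -(x - x_obs)/tau relaxes the model state toward observations during the
# assimilation window.  Toy dynamics, tau, and the window are assumptions.
import numpy as np

dt, n_steps = 0.01, 2000
tau = 0.5                               # relaxation (nudging) time scale (assumed)
x_model, x_truth = 0.0, 1.0

for i in range(n_steps):
    t = i * dt
    x_truth += dt * (-x_truth + np.sin(t))          # "truth" run providing observations
    # model uses imperfect forcing; during the assimilation window it is
    # relaxed toward the observed truth, afterwards it runs freely
    nudge = -(x_model - x_truth) / tau if t < 10.0 else 0.0
    x_model += dt * (-x_model + 0.8 * np.sin(t) + nudge)
    if abs(t - 10.0) < dt / 2:
        print("mismatch at end of assimilation window:", abs(x_model - x_truth))

print("mismatch at end of free forecast:", abs(x_model - x_truth))
```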
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
Verification of Orthogrid Finite Element Modeling Techniques
NASA Technical Reports Server (NTRS)
Steeve, B. E.
1996-01-01
The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, shell, and mixed beam and shell element model. Results show that the shell element model performs the best, but that the simpler beam and beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.
Jaspers, Arne; De Beéck, Tim Op; Brink, Michel S; Frencken, Wouter G P; Staes, Filip; Davis, Jesse J; Helsen, Werner F
2018-05-01
Machine learning may contribute to understanding the relationship between the external load and internal load in professional soccer. Therefore, the relationship between external load indicators (ELIs) and the rating of perceived exertion (RPE) was examined using machine learning techniques on a group and individual level. Training data were collected from 38 professional soccer players over 2 seasons. The external load was measured using global positioning system technology and accelerometry. The internal load was obtained using the RPE. Predictive models were constructed using 2 machine learning techniques, artificial neural networks and least absolute shrinkage and selection operator (LASSO) models, and 1 naive baseline method. The predictions were based on a large set of ELIs. Using each technique, 1 group model involving all players and 1 individual model for each player were constructed. These models' performance on predicting the reported RPE values for future training sessions was compared with the naive baseline's performance. Both the artificial neural network and LASSO models outperformed the baseline. In addition, the LASSO model made more accurate predictions for the RPE than did the artificial neural network model. Furthermore, decelerations were identified as important ELIs. Regardless of the applied machine learning technique, the group models resulted in equivalent or better predictions for the reported RPE values than the individual models. Machine learning techniques may have added value in predicting RPE for future sessions to optimize training design and evaluation. These techniques may also be used in conjunction with expert knowledge to select key ELIs for load monitoring.
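As a sketch of the LASSO technique described above, the example below predicts a session RPE from a few external load indicators using scikit-learn. The synthetic data, the feature names, and the use of LassoCV are illustrative assumptions, not the study's dataset or exact procedure.

```python
# Minimal sketch of the LASSO technique in the study: predict session RPE from
# a set of external load indicators (ELIs).  Synthetic data, feature names, and
# scikit-learn's LassoCV are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 4))                               # hypothetical ELIs
feature_names = ["distance", "hsr", "accels", "decels"]
# assume RPE depends mostly on distance and decelerations, plus noise
rpe = 5 + 1.2 * X[:, 0] + 0.9 * X[:, 3] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, rpe, random_state=0)
model = make_pipeline(StandardScaler(), LassoCV(cv=5))
model.fit(X_tr, y_tr)

lasso = model.named_steps["lassocv"]
print("selected coefficients:", dict(zip(feature_names, lasso.coef_.round(2))))
print("test R^2:", round(model.score(X_te, y_te), 3))
```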
Li, Jing-Sheng; Tsai, Tsung-Yuan; Wang, Shaobai; Li, Pingyue; Kwon, Young-Min; Freiberg, Andrew; Rubash, Harry E.; Li, Guoan
2014-01-01
Using computed tomography (CT) or magnetic resonance (MR) images to construct 3D knee models has been widely used in biomedical engineering research. The statistical shape modeling (SSM) method is an alternative that provides a fast, cost-efficient, and subject-specific knee modeling technique. This study aimed to evaluate the feasibility of using a combined dual-fluoroscopic imaging system (DFIS) and SSM method to investigate in vivo knee kinematics. Three subjects were studied during treadmill walking. The data were compared with the kinematics obtained using a CT-based modeling technique. Geometric root-mean-square (RMS) errors between the knee models constructed using the SSM and CT-based modeling techniques were 1.16 mm and 1.40 mm for the femur and tibia, respectively. For the kinematics of the knee during the treadmill gait, the SSM model can predict the knee kinematics with RMS errors within 3.3 deg for rotation and within 2.4 mm for translation throughout the stance phase of the gait cycle compared with those obtained using the CT-based knee models. The data indicated that the combined DFIS and SSM technique could be used for quick evaluation of knee joint kinematics. PMID:25320846
40 CFR 68.28 - Alternative release scenario analysis.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Consequence Analysis Guidance or any commercially or publicly available air dispersion modeling techniques, provided the techniques account for the specified modeling conditions and are recognized by industry as applicable as part of current practices. Proprietary models that account for the modeling conditions may be...
40 CFR 68.28 - Alternative release scenario analysis.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Consequence Analysis Guidance or any commercially or publicly available air dispersion modeling techniques, provided the techniques account for the specified modeling conditions and are recognized by industry as applicable as part of current practices. Proprietary models that account for the modeling conditions may be...
40 CFR 68.28 - Alternative release scenario analysis.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Consequence Analysis Guidance or any commercially or publicly available air dispersion modeling techniques, provided the techniques account for the specified modeling conditions and are recognized by industry as applicable as part of current practices. Proprietary models that account for the modeling conditions may be...
Models of Purposive Human Organization: A Comparative Study
1984-02-01
develop techniques for organizational diagnosis with the D-M model, to be followed by intervention by S-T methodology. ...relational and object data for Dinnat-Murphree model construction. 2. Develop techniques for organizational diagnosis with the Dinnat-Murphree model
Application of zonal model on indoor air sensor network design
NASA Astrophysics Data System (ADS)
Chen, Y. Lisa; Wen, Jin
2007-04-01
Growing concerns over the safety of the indoor environment have made the use of sensors ubiquitous. Sensors that detect chemical and biological warfare agents can offer early warning of dangerous contaminants. However, current sensor system design is informed more by intuition and experience than by systematic design. To develop a sensor system design methodology, a proper indoor airflow modeling approach is needed. Various indoor airflow modeling techniques, from complicated computational fluid dynamics approaches to simplified multi-zone approaches, exist in the literature. In this study, the effects of two airflow modeling techniques, the multi-zone modeling technique and the zonal modeling technique, on indoor air protection sensor system design are discussed. Common building attack scenarios, using a typical CBW agent, are simulated. Both multi-zone and zonal models are used to predict airflows and contaminant dispersion. A Genetic Algorithm is then applied to optimize the sensor location and quantity. Differences in the sensor system design resulting from the two airflow models are discussed for a typical office environment and a large hall environment.
3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry as well as surface pressure distribution with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a similar hardware setup to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for relatively large curvature models, and the pressure results compare well with the Schlieren tests, analytical calculations, and numerical simulations.
Scoping Study Investigating PWR Instrumentation during a Severe Accident Scenario
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rempe, J. L.; Knudson, D. L.; Lutz, R. J.
The accidents at the Three Mile Island Unit 2 (TMI-2) and Fukushima Daiichi Units 1, 2, and 3 nuclear power plants demonstrate the critical importance of accurate, relevant, and timely information on the status of reactor systems during a severe accident. These events also highlight the critical importance of understanding and focusing on the key elements of system status information in an environment where operators may be overwhelmed with superfluous and sometimes conflicting data. While progress in these areas has been made since TMI-2, the events at Fukushima suggest that there may still be a potential need to ensure that critical plant information is available to plant operators. Recognizing the significant technical and economic challenges associated with plant modifications, it is important to focus on instrumentation that can address these critical information needs. As part of a program initiated by the Department of Energy, Office of Nuclear Energy (DOE-NE), a scoping effort was initiated to assess critical information needs identified for severe accident management and mitigation in commercial Light Water Reactors (LWRs), to quantify the environments that instruments monitoring these data would have to survive, and to identify gaps where predicted environments exceed instrumentation qualification envelope (QE) limits. Results from the Pressurized Water Reactor (PWR) scoping evaluations are documented in this report. The PWR evaluations were limited in this scoping evaluation to quantifying the environmental conditions for an unmitigated Short-Term Station BlackOut (STSBO) sequence in one unit at the Surry nuclear power station. Results were obtained using the MELCOR models developed for the US Nuclear Regulatory Commission (NRC)-sponsored State-of-the-Art Reactor Consequence Analyses (SOARCA) project. Results from this scoping evaluation indicate that some instrumentation identified to provide critical information would be exposed to conditions that significantly exceeded QE limits for extended time periods for the low-frequency STSBO sequence evaluated in this study. It is recognized that the core damage frequency (CDF) of the sequence evaluated in this scoping effort would be considerably lower if evaluations considered new FLEX equipment being installed by industry. Nevertheless, because of uncertainties in instrumentation response when exposed to conditions beyond QE limits and alternate challenges associated with different sequences that may impact sensor performance, it is recommended that additional evaluations of instrumentation performance be completed to provide confidence that operators have access to accurate, relevant, and timely information on the status of reactor systems for a broad range of challenges associated with risk-important severe accident sequences.
Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs
NASA Astrophysics Data System (ADS)
Karimi-Fard, M.; Lapene, A.; Pauget, L.
2012-12-01
During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflow for reservoir characterization which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on dual continuum representation where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on idealized representation of the fracture distribution which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling techniques (DFM) are becoming popular. In these techniques the fractures are represented explicitly allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1] which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control-volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control-volumes with the associated pore-volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general purpose simulators and any connectivity based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. The upscaling techniques are required for large reservoirs where the explicit representation of all fractures and faults is not possible. Karimi-Fard et al. [2] have developed an upscaling technique based on DFM representation. The original version of this technique was developed to construct a dual-porosity model from a discrete fracture description. This technique has been extended and generalized so it can be applied to a wide range of problems from reservoirs with a few or no fracture to highly fractured reservoirs. In this work, we present the application of these techniques to two three-dimensional fractured reservoirs constructed using real data. The first model contains more than 600 medium and large scale fractures. The fractures are not always connected which requires a general modeling technique. The reservoir has 50 wells (injectors and producers) and water flooding simulations are performed. The second test case is a larger reservoir with sparsely distributed faults. Single-phase simulations are performed with 5 producing wells. [1] Karimi-Fard M., Durlofsky L.J., and Aziz K. 2004. An efficient discrete-fracture model applicable for general-purpose reservoir simulators. SPE Journal, 9(2): 227-236. [2] Karimi-Fard M., Gong B., and Durlofsky L.J. 2006. Generation of coarse-scale continuum flow models from detailed fracture characterizations. Water Resources Research, 42(10): W10423.
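A minimal sketch of the connection-list representation described above, assuming the standard two-point flux half-transmissibility formula t_i = k_i A (n·c_i)/|c_i|^2 combined harmonically; the geometry and permeabilities are toy values, and the optimized treatment of fracture intersections in Karimi-Fard et al. is not reproduced.

```python
# Minimal sketch of a discrete-fracture-model connection: for two adjacent
# control volumes, a transmissibility is formed from two half-transmissibilities
# t_i = k_i * A * (n . c_i) / |c_i|^2, combined harmonically.  Geometry and
# permeabilities below are toy assumptions.
import numpy as np

def half_transmissibility(perm, area, face_normal, cell_to_face):
    """t_i = k_i * A * |n . c_i| / |c_i|^2 for one cell adjacent to a face."""
    c = np.asarray(cell_to_face, dtype=float)   # cell centroid to face centroid vector
    n = np.asarray(face_normal, dtype=float)    # unit normal pointing away from the cell
    return perm * area * abs(np.dot(n, c)) / np.dot(c, c)

def transmissibility(t1, t2):
    """Harmonic combination of the two half-transmissibilities."""
    return t1 * t2 / (t1 + t2)

# toy connection: a matrix cell (low permeability) next to a fracture cell
t_matrix = half_transmissibility(perm=1e-15, area=1.0,
                                 face_normal=[1, 0, 0], cell_to_face=[0.5, 0, 0])
t_frac = half_transmissibility(perm=1e-11, area=1.0,
                               face_normal=[-1, 0, 0], cell_to_face=[-5e-4, 0, 0])
T = transmissibility(t_matrix, t_frac)
print("connection transmissibility [m^3]:", T)

# in a connectivity-based simulator the flux between the two control volumes
# would then be q = (T / mu) * (p1 - p2)
```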
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.
2014-07-01
DC accelerators undergo different types of discharges during operation. A model depicting the discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of the DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies in the system. The electrical discharges and their properties prevailing in the accelerator can be evaluated with this equivalent model. A parallel coupled voltage multiplier structure is simulated at small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared to finite element techniques, this technique gives a circuit-level representation. The lumped components of the PEEC are used to obtain the input impedance, and the result is also compared to that of the FEM technique for a frequency range of (0-200) MHz. (author)
Survey of statistical techniques used in validation studies of air pollution prediction models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bornstein, R D; Anderson, S F
1979-03-01
Statistical techniques used by meteorologists to validate predictions made by air pollution models are surveyed. Techniques are divided into the following three groups: graphical, tabular, and summary statistics. Some of the practical problems associated with verification are also discussed. Characteristics desired in any validation program are listed and a suggested combination of techniques that possesses many of these characteristics is presented.
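As an illustration of the "summary statistics" group mentioned above, the sketch below computes mean bias, RMSE, correlation, and the fraction of predictions within a factor of two (FAC2) for synthetic observation/prediction pairs; this metric set is a common choice used here for illustration, not the survey's full list.

```python
# Minimal sketch of common summary statistics used to validate air pollution
# model predictions against observations: mean bias, RMSE, correlation, and the
# fraction of predictions within a factor of two (FAC2).  Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
obs = rng.lognormal(mean=3.0, sigma=0.4, size=200)          # observed concentrations
pred = obs * rng.lognormal(mean=0.05, sigma=0.3, size=200)  # imperfect model predictions

bias = np.mean(pred - obs)
rmse = np.sqrt(np.mean((pred - obs) ** 2))
corr = np.corrcoef(obs, pred)[0, 1]
fac2 = np.mean((pred / obs > 0.5) & (pred / obs < 2.0))

print(f"bias={bias:.2f}  rmse={rmse:.2f}  r={corr:.2f}  FAC2={fac2:.2f}")
```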
NASA Technical Reports Server (NTRS)
Wiswell, E. R.; Cooper, G. R. (Principal Investigator)
1978-01-01
The author has identified the following significant results. The concept of average mutual information in the received spectral random process about the spectral scene was developed. Techniques amenable to implementation on a digital computer were also developed to make the required average mutual information calculations. These techniques required identification of models for the spectral response process of scenes. Stochastic modeling techniques were adapted for use. These techniques were demonstrated on empirical data from wheat and vegetation scenes.
DeHoff, Mary Ellen; Clark, Krista L; Meganathan, Karthikeyan
2011-03-01
Alternatives and/or supplements to animal dissection are being explored by educators of human anatomy at different academic levels. Clay modeling is one such alternative that provides a kinesthetic, three-dimensional, constructive, and sensory approach to learning human anatomy. The present study compared two laboratory techniques, clay modeling of human anatomy and dissection of preserved cat specimens, in the instruction of muscles, peripheral nerves, and blood vessels. Specifically, we examined the effect of each technique on student performance on low-order and high-order questions related to each body system as well as the student-perceived value of each technique. Students who modeled anatomic structures in clay scored significantly higher on low-order questions related to peripheral nerves; scores were comparable between groups for high-order questions on peripheral nerves and for questions on muscles and blood vessels. Likert-scale surveys were used to measure student responses to statements about each laboratory technique. A significantly greater percentage of students in the clay modeling group "agreed" or "strongly agreed" with positive statements about their respective technique. These results indicate that clay modeling and cat dissection are equally effective in achieving student learning outcomes for certain systems in undergraduate human anatomy. Furthermore, clay modeling appears to be the preferred technique based on students' subjective perceptions of value to their learning experience.
Fusing modeling techniques to support domain analysis for reuse opportunities identification
NASA Technical Reports Server (NTRS)
Hall, Susan Main; Mcguire, Eileen
1993-01-01
Functional modeling techniques or object-oriented graphical representations, which are more useful to someone trying to understand the general design or high level requirements of a system? For a recent domain analysis effort, the answer was a fusion of popular modeling techniques of both types. By using both functional and object-oriented techniques, the analysts involved were able to lean on their experience in function oriented software development, while taking advantage of the descriptive power available in object oriented models. In addition, a base of familiar modeling methods permitted the group of mostly new domain analysts to learn the details of the domain analysis process while producing a quality product. This paper describes the background of this project and then provides a high level definition of domain analysis. The majority of this paper focuses on the modeling method developed and utilized during this analysis effort.
Managing distribution changes in time series prediction
NASA Astrophysics Data System (ADS)
Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.
2006-07-01
When a problem is modeled statistically, a single distribution model is usually postulated that is assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the hypernormal distribution recently proposed. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood maximization process.
NASA Astrophysics Data System (ADS)
Jonker, C. M.; Snoep, J. L.; Treur, J.; Westerhoff, H. V.; Wijngaards, W. C. A.
Within the areas of Computational Organisation Theory and Artificial Intelligence, techniques have been developed to simulate and analyse dynamics within organisations in society. Usually these modelling techniques are applied to factories and to the internal organisation of their process flows, thus obtaining models of complex organisations at various levels of aggregation. The dynamics in living cells are often interpreted in terms of well-organised processes, a bacterium being considered a (micro)factory. This suggests that organisation modelling techniques may also benefit their analysis. Using the example of Escherichia coli it is shown how indeed agent-based organisational modelling techniques can be used to simulate and analyse E.coli's intracellular dynamics. Exploiting the abstraction levels entailed by this perspective, a concise model is obtained that is readily simulated and analysed at the various levels of aggregation, yet shows the cell's essential dynamic patterns.
Shi, Jun; Wei, Pin-Kang; Zhang, Shen; Qin, Zhi-Feng; Li, Jun; Sun, Da-Zhi; Xiao, Yan; Yu, Zhi-Hong; Lin, Hui-Ming; Zheng, Guo-Jing; Su, Xiao-Mei; Chen, Ya-Lin; Liu, Yan-Fang; Xu, Ling
2008-01-01
AIM: To establish nude mouse human gastric cancer orthotopic transplantation models using the OB glue paste technique. METHODS: Using the OB glue paste technique, orthotopic transplantation models were established by implanting SGC-7901 and MKN-45 human gastric cancer cell strains into the gastric wall of nude mice. Biological features, growth of the implanted tumors, the success rate of transplantation and the rate of auto-metastasis of the two models were observed. RESULTS: The success rates of orthotopic transplantation of the two models were 94.20% and 96%. The rates of hepatic metastasis, pulmonary metastasis, peritoneal metastasis, lymphocytic metastasis and splenic metastasis were 42.13% and 94.20%, 48.43% and 57.97%, 30.83% and 36.96%, 67.30% and 84.06%, and 59.75% and 10.53%, respectively. The occurrence of ascites was 47.80% and 36.96%. CONCLUSION: The OB glue paste technique is easy to follow. The biological behaviors of the nude mouse human gastric cancer orthotopic transplantation models established with this technique are similar to the natural processes of growth and metastasis of human gastric cancer, and, therefore, can be used as an ideal model for experimental research of proliferative metastasis of tumors. PMID:18720543
Farr, W. M.; Mandel, I.; Stevens, D.
2015-01-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580
Cardiovascular oscillations: in search of a nonlinear parametric model
NASA Astrophysics Data System (ADS)
Bandrivskyy, Andriy; Luchinsky, Dmitry; McClintock, Peter V.; Smelyanskiy, Vadim; Stefanovska, Aneta; Timucin, Dogan
2003-05-01
We suggest a fresh approach to the modeling of the human cardiovascular system. Taking advantage of a new Bayesian inference technique, able to deal with stochastic nonlinear systems, we show that one can estimate parameters for models of the cardiovascular system directly from measured time series. We present preliminary results of inference of parameters of a model of coupled oscillators from measured cardiovascular data addressing cardiorespiratory interaction. We argue that the inference technique offers a very promising tool for the modeling, able to contribute significantly towards the solution of a long standing challenge -- development of new diagnostic techniques based on noninvasive measurements.
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
Propulsion simulation for magnetically suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.; Beerman, Henry P.; Chen, James; Krech, Robert H.; Lintz, Andrew L.; Rosen, David I.
1990-01-01
The feasibility of simulating propulsion-induced aerodynamic effects on scaled aircraft models in wind tunnels employing Magnetic Suspension and Balance Systems (MSBS) was investigated. The investigation concerned itself with techniques of generating exhaust jets of appropriate characteristics. The objectives were to: (1) define thrust and mass flow requirements of jets; (2) evaluate techniques for generating propulsive gas within volume limitations imposed by magnetically-suspended models; (3) conduct simple diagnostic experiments for techniques involving new concepts; and (4) recommend experiments for demonstration of propulsion simulation techniques. Various techniques of generating exhaust jets of appropriate characteristics were evaluated for scaled aircraft models in wind tunnels with MSBS. Four concepts of remotely-operated propulsion simulators were examined. Three conceptual designs involving innovative adaptation of convenient technologies (compressed gas cylinders, liquid, and solid propellants) were developed. The fourth innovative concept, namely, the laser-assisted thruster, which can potentially simulate both inlet and exhaust flows, was found to require very high power levels for small thrust levels.
Finite volume model for two-dimensional shallow environmental flow
Simoes, F.J.M.
2011-01-01
This paper presents the development of a two-dimensional, depth integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire of balancing computational efficiency and accuracy by selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. Application of the model is made to several benchmark cases that show the interplay of the diverse solution techniques.
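As a one-dimensional, structured-grid illustration of the finite-volume ingredients named above (Godunov-type flux plus strong-stability-preserving Runge-Kutta), the sketch below solves a shallow-water dam-break with a Rusanov (local Lax-Friedrichs) flux. The actual model is two-dimensional and unstructured with a high-resolution Riemann solver; the grid, CFL number, and test case are illustrative assumptions.

```python
# Minimal 1-D structured-grid analogue of the finite-volume scheme: shallow
# water equations with a Rusanov (local Lax-Friedrichs) flux and a two-stage
# strong-stability-preserving Runge-Kutta step.  Grid size, CFL number, and the
# dam-break test are illustrative assumptions; the real model is 2-D,
# unstructured, and uses a high-resolution Riemann solver.
import numpy as np

g, n, L = 9.81, 200, 10.0
dx = L / n
xc = (np.arange(n) + 0.5) * dx
h0 = np.where(xc < L / 2, 2.0, 1.0)          # dam-break initial depth
U = np.vstack([h0, np.zeros(n)])             # conserved variables [h, hu], shape (2, n)

def rhs(U):
    h, hu = U
    u = hu / h
    F = np.vstack([hu, hu * u + 0.5 * g * h * h])
    # Rusanov flux at interior faces between cells i and i+1
    a = np.maximum(np.abs(u[:-1]) + np.sqrt(g * h[:-1]),
                   np.abs(u[1:]) + np.sqrt(g * h[1:]))
    Fhat = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    dU = np.zeros_like(U)
    dU[:, 1:-1] = -(Fhat[:, 1:] - Fhat[:, :-1]) / dx   # interior cells only
    return dU                                           # boundary cells held fixed

t, t_end, cfl = 0.0, 0.5, 0.4
while t < t_end:
    h, hu = U
    dt = cfl * dx / np.max(np.abs(hu / h) + np.sqrt(g * h))
    U1 = U + dt * rhs(U)                      # SSP-RK2 stage 1
    U = 0.5 * U + 0.5 * (U1 + dt * rhs(U1))   # SSP-RK2 stage 2
    t += dt

print("depth range after dam break:", U[0].min(), U[0].max())
```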
NASA Technical Reports Server (NTRS)
Rocha, Camilo; Meseguer, Jose; Munoz, Cesar A.
2013-01-01
Combining symbolic techniques such as: (i) SMT solving, (ii) rewriting modulo theories, and (iii) model checking can enable the analysis of infinite-state systems outside the scope of each such technique. This paper proposes rewriting modulo SMT as a new technique combining the powers of (i)-(iii) and ideally suited to model and analyze infinite-state open systems; that is, systems that interact with a non-deterministic environment. Such systems exhibit both internal non-determinism due to the system, and external non-determinism due to the environment. They are not amenable to finite-state model checking analysis because they typically are infinite-state. By being reducible to standard rewriting using reflective techniques, rewriting modulo SMT can both naturally model and analyze open systems without requiring any changes to rewriting-based reachability analysis techniques for closed systems. This is illustrated by the analysis of a real-time system beyond the scope of timed automata methods.
A Flexible Hierarchical Bayesian Modeling Technique for Risk Analysis of Major Accidents.
Yu, Hongyang; Khan, Faisal; Veitch, Brian
2017-09-01
Safety analysis of rare events with potentially catastrophic consequences is challenged by data scarcity and uncertainty. Traditional causation-based approaches, such as fault tree and event tree (used to model rare event), suffer from a number of weaknesses. These include the static structure of the event causation, lack of event occurrence data, and need for reliable prior information. In this study, a new hierarchical Bayesian modeling based technique is proposed to overcome these drawbacks. The proposed technique can be used as a flexible technique for risk analysis of major accidents. It enables both forward and backward analysis in quantitative reasoning and the treatment of interdependence among the model parameters. Source-to-source variability in data sources is also taken into account through a robust probabilistic safety analysis. The applicability of the proposed technique has been demonstrated through a case study in marine and offshore industry. © 2017 Society for Risk Analysis.
Kapellusch, Jay M; Bao, Stephen S; Silverstein, Barbara A; Merryweather, Andrew S; Thiese, Mathew S; Hegmann, Kurt T; Garg, Arun
2017-12-01
The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value for Hand Activity Level (TLV for HAL) use different constituent variables to quantify task physical exposures. Similarly, time-weighted-average (TWA), Peak, and Typical exposure techniques to quantify physical exposure from multi-task jobs make different assumptions about each task's contribution to the whole job exposure. Thus, task and job physical exposure classifications differ depending upon which model and technique are used for quantification. This study examines exposure classification agreement, disagreement, correlation, and magnitude of classification differences between these models and techniques. Data from 710 multi-task job workers performing 3,647 tasks were analyzed using the SI and TLV for HAL models, as well as with the TWA, Typical and Peak job exposure techniques. Physical exposures were classified as low, medium, and high using each model's recommended, or a priori limits. Exposure classification agreement and disagreement between models (SI, TLV for HAL) and between job exposure techniques (TWA, Typical, Peak) were described and analyzed. Regardless of technique, the SI classified more tasks as high exposure than the TLV for HAL, and the TLV for HAL classified more tasks as low exposure. The models agreed on 48.5% of task classifications (kappa = 0.28) with 15.5% of disagreement between low and high exposure categories. Between-technique (i.e., TWA, Typical, Peak) agreement ranged from 61-93% (kappa: 0.16-0.92) depending on whether the SI or TLV for HAL was used. There was disagreement between the SI and TLV for HAL and between the TWA, Typical and Peak techniques. Disagreement creates uncertainty for job design, job analysis, risk assessments, and developing interventions. Task exposure classifications from the SI and TLV for HAL might complement each other. However, TWA, Typical, and Peak job exposure techniques all have limitations. Part II of this article examines whether the observed differences between these models and techniques produce different exposure-response relationships for predicting prevalence of carpal tunnel syndrome.
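A minimal sketch of how the three job-level techniques combine task-level exposures, assuming illustrative task scores and durations; "Typical" is interpreted here as the exposure of the task occupying the most time, which is one common reading and not necessarily the paper's exact definition.

```python
# Minimal sketch of the three job-level exposure techniques compared in the
# study, applied to task-level exposure scores from a multi-task job.  Task
# scores and durations are illustrative, and the "Typical" definition used here
# (exposure of the longest-duration task) is an assumption.
tasks = [
    # (task exposure score, hours per shift)
    (3.0, 4.0),
    (9.0, 1.0),
    (1.5, 3.0),
]

total_time = sum(hours for _, hours in tasks)
twa = sum(score * hours for score, hours in tasks) / total_time   # time-weighted average
peak = max(score for score, _ in tasks)                           # worst task
typical = max(tasks, key=lambda th: th[1])[0]                     # longest-duration task

print(f"TWA={twa:.2f}  Peak={peak:.1f}  Typical={typical:.1f}")
```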
A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals
NASA Technical Reports Server (NTRS)
Skelton, R. T.; Mahoney, W. A.
1993-01-01
We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.
USDA-ARS?s Scientific Manuscript database
Parametric non-linear regression (PNR) techniques commonly are used to develop weed seedling emergence models. Such techniques, however, require statistical assumptions that are difficult to meet. To examine and overcome these limitations, we compared PNR with a nonparametric estimation technique. F...
Summary on several key techniques in 3D geological modeling.
Mei, Gang
2014-01-01
Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
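As a sketch of the spatial interpolation step named above, the example below grids scattered elevation picks of a geological interface onto a planar mesh by inverse-distance weighting; IDW with power 2 is one common choice used purely for illustration, and the synthetic picks are assumptions.

```python
# Minimal sketch of the spatial interpolation step in geological surface
# modelling: inverse-distance weighting (IDW) of scattered elevation picks of a
# geological interface onto a regular planar mesh.  IDW with power p = 2 and
# the synthetic picks are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
# scattered (x, y) positions and interface elevations z, e.g. from boreholes
pts = rng.uniform(0, 100, size=(30, 2))
z = 50.0 + 0.1 * pts[:, 0] - 0.05 * pts[:, 1] + rng.normal(0, 0.5, 30)

def idw(xy_query, pts, z, power=2.0, eps=1e-12):
    d = np.linalg.norm(pts - xy_query, axis=1)
    w = 1.0 / (d ** power + eps)
    return np.sum(w * z) / np.sum(w)

# interpolate onto a regular grid (the "planar mesh")
gx, gy = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 100, 21))
surface = np.array([[idw(np.array([x, y]), pts, z) for x in gx[0]] for y in gy[:, 0]])
print("interpolated surface shape:", surface.shape,
      " z-range:", surface.min().round(1), surface.max().round(1))
```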
Visual Modelling of Data Warehousing Flows with UML Profiles
NASA Astrophysics Data System (ADS)
Pardillo, Jesús; Golfarelli, Matteo; Rizzi, Stefano; Trujillo, Juan
Data warehousing involves complex processes that transform source data through several stages to deliver suitable information ready to be analysed. Though many techniques for visual modelling of data warehouses from the static point of view have been devised, only few attempts have been made to model the data flows involved in a data warehousing process. Besides, each attempt was mainly aimed at a specific application, such as ETL, OLAP, what-if analysis, data mining. Data flows are typically very complex in this domain; for this reason, we argue, designers would greatly benefit from a technique for uniformly modelling data warehousing flows for all applications. In this paper, we propose an integrated visual modelling technique for data cubes and data flows. This technique is based on UML profiling; its feasibility is evaluated by means of a prototype implementation.
Dante's Comedy: precursors of psychoanalytic technique and psyche.
Szajnberg, Nathan Moses
2010-02-01
This paper uses a literary approach to explore what common ground exists in both psychoanalytic technique and views of the psyche, of 'person'. While Western literature has developed various views of psyche and person over centuries, there have been crystallizing, seminal portraits, for instance Shakespeare's perspective on what is human, some of which have endured to the present. By using Dante's Commedia, particularly the Inferno, a 14th century poem that both integrates and revises previous models of psyche and personhood, we can examine what features of psyche, and 'techniques' in soul-healing psychoanalysts have inherited culturally. Discovering basic features of technique and model of psyche we share as psychoanalysts permits us to explore why we have differences in variations on technique and models of inner life.
NASA Astrophysics Data System (ADS)
Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.
2002-03-01
Survival analysis algorithms are often applied in the data mining process. Cox regression is one of the survival analysis tools that has been used in many areas, and it can be used to analyze the failure times of aircraft crashes. Another survival analysis tool is competing risks, where we have more than one cause of failure acting simultaneously. Lunn and McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplification of the Lunn-McNeil technique. The Kalbfleisch-Prentice technique involves fitting models separately for each type of failure, treating other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study was conducted by comparing the inference of models, using the Root Mean Square Error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, Rao-score test, and Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power test measurements.
Harris, Ted D.; Graham, Jennifer L.
2017-01-01
Cyanobacterial blooms degrade water quality in drinking water supply reservoirs by producing toxic and taste-and-odor causing secondary metabolites, which ultimately cause public health concerns and lead to increased treatment costs for water utilities. There have been numerous attempts to create models that predict cyanobacteria and their secondary metabolites, most using linear models; however, linear models are limited by assumptions about the data and have had limited success as predictive tools. Thus, lake and reservoir managers need improved modeling techniques that can accurately predict large bloom events that have the highest impact on recreational activities and drinking-water treatment processes. In this study, we compared 12 unique linear and nonlinear regression modeling techniques to predict cyanobacterial abundance and the cyanobacterial secondary metabolites microcystin and geosmin using 14 years of physiochemical water quality data collected from Cheney Reservoir, Kansas. Support vector machine (SVM), random forest (RF), boosted tree (BT), and Cubist modeling techniques were the most predictive of the compared modeling approaches. SVM, RF, and BT modeling techniques were able to successfully predict cyanobacterial abundance, microcystin, and geosmin concentrations <60,000 cells/mL, 2.5 µg/L, and 20 ng/L, respectively. Only Cubist modeling predicted maxima concentrations of cyanobacteria and geosmin; no modeling technique was able to predict maxima microcystin concentrations. Because maxima concentrations are a primary concern for lake and reservoir managers, Cubist modeling may help predict the largest and most noxious concentrations of cyanobacteria and their secondary metabolites.
NASA Technical Reports Server (NTRS)
Howard, S. D.
1987-01-01
Effective user interface design in software systems is a complex task that takes place without adequate modeling tools. By combining state transition diagrams and the storyboard technique of filmmakers, State Transition Storyboards were developed to provide a detailed modeling technique for the Goldstone Solar System Radar Data Acquisition System human-machine interface. Illustrations are included with a description of the modeling technique.
Plasticity models of material variability based on uncertainty quantification techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Reese E.; Rizzi, Francesco; Boyce, Brad
The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.
Carvajal, Thaddeus M; Viacrusis, Katherine M; Hernandez, Lara Fides T; Ho, Howell T; Amalin, Divina M; Watanabe, Kozo
2018-04-17
Several studies have applied ecological factors such as meteorological variables to develop models and accurately predict the temporal pattern of dengue incidence or occurrence. Despite the large number of studies that have investigated this premise, the modeling approaches differ from study to study, and each typically uses only a single statistical technique. This raises the question of which technique is robust and reliable. Hence, our study aims to compare the predictive accuracy of the temporal pattern of dengue incidence in Metropolitan Manila, as influenced by meteorological factors, across four modeling techniques: (a) General Additive Modeling, (b) Seasonal Autoregressive Integrated Moving Average with exogenous variables, (c) Random Forest, and (d) Gradient Boosting. Dengue incidence and meteorological data (flood, precipitation, temperature, southern oscillation index, relative humidity, wind speed and direction) of Metropolitan Manila from January 1, 2009 - December 31, 2013 were obtained from the respective government agencies. Two types of datasets were used in the analysis: observed meteorological factors (MF) and their corresponding delayed or lagged effects (LG). These datasets were then subjected to the four modeling techniques. The predictive accuracy and variable importance of each modeling technique were calculated and evaluated. Among the statistical modeling techniques, Random Forest showed the best predictive accuracy. Moreover, the delayed or lagged effects of the meteorological variables were shown to form the best dataset to use for this purpose. Thus, the Random Forest model with delayed meteorological effects (RF-LG) was deemed the best among all assessed models. Relative humidity was the most important meteorological factor in the best model. The study showed that the different statistical modeling techniques indeed generate different predictive outcomes, and it further revealed the Random Forest model with delayed meteorological effects to be the best at predicting the temporal pattern of dengue incidence in Metropolitan Manila. It is also noteworthy that the study identified relative humidity as an important meteorological factor, along with rainfall and temperature, that can influence this temporal pattern.
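A minimal sketch of the best-performing combination reported above (random forest with delayed meteorological effects), assuming synthetic weekly data, 4- and 8-week lags, and scikit-learn's RandomForestRegressor; none of these choices reproduce the study's actual dataset or tuning.

```python
# Minimal sketch of "delayed (lagged) meteorological effects" features plus a
# random forest.  Synthetic weekly data, lag choices, and scikit-learn's
# RandomForestRegressor are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
weeks = 260
df = pd.DataFrame({
    "rainfall": rng.gamma(2.0, 20.0, weeks),
    "temperature": 28 + 2 * np.sin(2 * np.pi * np.arange(weeks) / 52),
    "rel_humidity": rng.uniform(60, 95, weeks),
})
# assume incidence responds to humidity and to rainfall ~4 weeks earlier
df["cases"] = (0.5 * df["rel_humidity"]
               + 0.3 * df["rainfall"].shift(4).fillna(0)
               + rng.normal(0, 3, weeks))

# lagged (delayed-effect) predictors
for col in ["rainfall", "temperature", "rel_humidity"]:
    for lag in (4, 8):
        df[f"{col}_lag{lag}"] = df[col].shift(lag)
df = df.dropna()

X = df.drop(columns="cases")
y = df["cases"]
split = int(len(df) * 0.8)                       # keep the temporal order
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X.iloc[:split], y.iloc[:split])

print("test R^2:", round(rf.score(X.iloc[split:], y.iloc[split:]), 3))
print("top predictors:",
      sorted(zip(X.columns, rf.feature_importances_.round(3)),
             key=lambda t: -t[1])[:3])
```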
Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
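A minimal sketch of the separable parameter space idea: for fixed nonlinear parameters the optimal linear coefficients follow from a linear least-squares solve, so the outer optimization searches only the nonlinear parameters. The toy two-exponential "time-activity curve" stands in for the multi-tracer compartment model solution and is an illustrative assumption.

```python
# Minimal sketch of separable (variable projection) least squares: for fixed
# nonlinear parameters (rates), the optimal linear amplitudes come from a
# linear least-squares solve, so the outer optimization searches only the
# nonlinear parameter space.  The two-exponential toy curve is an assumption,
# not the paper's compartment model equations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
t = np.linspace(0.1, 60.0, 120)                      # minutes
true_rates, true_amps = np.array([0.05, 0.4]), np.array([3.0, 5.0])
y = sum(a * np.exp(-k * t) for a, k in zip(true_amps, true_rates))
y = y + rng.normal(0, 0.05, t.size)

def basis(rates):
    return np.exp(-np.outer(t, rates))               # columns exp(-k_j * t)

def projected_residual_norm(log_rates):
    """Cost in the reduced (nonlinear-only) parameter space."""
    A = basis(np.exp(log_rates))
    amps, *_ = np.linalg.lstsq(A, y, rcond=None)     # inner linear solve
    return np.sum((y - A @ amps) ** 2)

res = minimize(projected_residual_norm, x0=np.log([0.1, 1.0]), method="Nelder-Mead")
rates = np.sort(np.exp(res.x))
amps, *_ = np.linalg.lstsq(basis(rates), y, rcond=None)
print("estimated rates:", rates.round(3), " amplitudes:", amps.round(2))
```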
Rival approaches to mathematical modelling in immunology
NASA Astrophysics Data System (ADS)
Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.
2007-08-01
In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.
Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.
ERIC Educational Resources Information Center
Stimpson, B.
1979-01-01
Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
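A hedged sketch of the central quantity in an optimal estimator analysis: the irreducible error is the variance of the target about its conditional mean given the chosen input parameters, approximated here (following the paper's preference over histograms) by a small neural network on synthetic data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 20000
params = rng.uniform(-1, 1, size=(n, 2))            # model input parameters
# Synthetic target: a smooth function of the parameters plus unresolved noise.
target = np.sin(3 * params[:, 0]) * params[:, 1] + 0.1 * rng.normal(size=n)

# Approximate the optimal estimator E[target | params] with a regression.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(params, target)
cond_mean = net.predict(params)

irreducible = np.mean((target - cond_mean) ** 2)     # error of the best possible model
print("estimated irreducible error:", irreducible)   # close to the 0.1**2 noise variance
```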
An improved switching converter model. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Shortt, D. J.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode were performed by averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of the measurement data taken by a conventional technique is affected by the conditions under which the data are collected.
Advanced DPSM approach for modeling ultrasonic wave scattering in an arbitrary geometry
NASA Astrophysics Data System (ADS)
Yadav, Susheel K.; Banerjee, Sourav; Kundu, Tribikram
2011-04-01
Several techniques are used to diagnose structural damage. In the ultrasonic technique, structures are tested by analyzing ultrasonic signals scattered by damage. The interpretation of these signals requires a good understanding of the interaction between ultrasonic waves and structures. Therefore, researchers need analytical or numerical techniques to gain a clear understanding of the interaction between ultrasonic waves and structural damage. However, modeling of wave scattering phenomena by conventional numerical techniques such as the finite element method requires a very fine mesh at high frequencies, necessitating heavy computational power. The distributed point source method (DPSM) is a newly developed, robust, mesh-free technique to simulate ultrasonic, electrostatic and electromagnetic fields. In most previous studies the DPSM technique has been applied to model two-dimensional surface geometries and simple three-dimensional scatterer geometries; it was difficult to perform the analysis for complex three-dimensional geometries. Here the technique has been extended to model wave scattering in an arbitrary geometry. In this paper a channel section idealized as a thin solid plate with several rivet holes is formulated. The simulation has been carried out with and without cracks near the rivet holes. Further, a comparison study has also been carried out to characterize the crack. A computer code has been developed in C for modeling the ultrasonic field in a solid plate with and without cracks near the rivet holes.
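A rough sketch of the distributed-point-source idea (not the authors' C code): the ultrasonic field is synthesised as a superposition of spherical-wave Green's functions attached to point sources spread over the transducer face; the geometry and source strengths are illustrative assumptions:

```python
import numpy as np

k = 2 * np.pi * 1.0e6 / 1500.0                   # wavenumber for 1 MHz in a water-like medium
x_src = np.linspace(-5e-3, 5e-3, 21)             # point sources along the transducer face
sources = np.column_stack([x_src, np.zeros_like(x_src), np.zeros_like(x_src)])
strength = np.ones(len(sources))                 # uniform source strengths

def pressure(field_point):
    """Superpose exp(ikr)/(4*pi*r) contributions from all point sources."""
    r = np.linalg.norm(field_point - sources, axis=1)
    return np.sum(strength * np.exp(1j * k * r) / (4 * np.pi * r))

for z in (5e-3, 10e-3, 20e-3):                   # field along the transducer axis
    print(z, abs(pressure(np.array([0.0, 0.0, z]))))
```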
The electromagnetic modeling of thin apertures using the finite-difference time-domain technique
NASA Technical Reports Server (NTRS)
Demarest, Kenneth R.
1987-01-01
A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.
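A minimal one-dimensional FDTD sketch showing the leapfrog update that the aperture technique builds on; the thin-aperture sub-cell model itself is not reproduced here, and the grid and source parameters are arbitrary:

```python
import numpy as np

nx, nt = 400, 600
dx = 1e-3                                   # 1 mm cells
c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
dt = dx / (2 * c0)                          # Courant-stable time step

ez = np.zeros(nx)
hy = np.zeros(nx - 1)
for n in range(nt):
    hy += dt / (mu0 * dx) * np.diff(ez)         # update H from the curl of E
    ez[1:-1] += dt / (eps0 * dx) * np.diff(hy)  # update E from the curl of H
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
print("peak field after propagation:", ez.max())
```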
MESA: An Interactive Modeling and Simulation Environment for Intelligent Systems Automation
NASA Technical Reports Server (NTRS)
Charest, Leonard
1994-01-01
This report describes MESA, a software environment for creating applications that automate NASA mission operations. MESA enables intelligent automation by utilizing model-based reasoning techniques developed in the field of Artificial Intelligence. Model-based reasoning techniques are realized in MESA through native support of causal modeling and discrete event simulation.
The Potential of Growth Mixture Modelling
ERIC Educational Resources Information Center
Muthen, Bengt
2006-01-01
The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…
Numerical Modeling of Nonlinear Thermodynamics in SMA Wires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, D R; Kloucek, P
We present a mathematical model describing the thermodynamic behavior of shape memory alloy wires, as well as a computational technique to solve the resulting system of partial differential equations. The model consists of conservation equations based on a new Helmholtz free energy potential. The computational technique introduces a viscosity-based continuation method, which allows the model to handle dynamic applications where the temporally local behavior of solutions is desired. Computational experiments document that this combination of modeling and solution techniques appropriately predicts the thermally- and stress-induced martensitic phase transitions, as well as the hysteretic behavior and production of latent heat associated with such materials.
Microfluidic perfusion culture system for multilayer artery tissue models.
Yamagishi, Yuka; Masuda, Taisuke; Matsusaki, Michiya; Akashi, Mitsuru; Yokoyama, Utako; Arai, Fumihito
2014-11-01
We described an assembly technique and perfusion culture system for constructing artery tissue models. This technique differed from previous studies in that it does not require a solid biodegradable scaffold; therefore, using sheet-like tissues, this technique allowed the facile fabrication of tubular tissues that can be used as models. The fabricated artery tissue models had a multilayer structure. The assembly technique and perfusion culture system were applicable to many different sizes of fabricated arteries. The shape of the fabricated artery tissue models was maintained by the perfusion culture system; furthermore, the system reproduced the in vivo environment and allowed mechanical stimulation of the arteries. The multilayer structure of the artery tissue model was observed using fluorescent dyes. The equivalent Young's modulus was measured by applying internal pressure to the multilayer tubular tissues. The aim of this study was to determine whether the fabricated artery tissue models maintained their mechanical properties as they developed. We demonstrated both the rapid fabrication of multilayer tubular tissues that can be used as model arteries and the measurement of their equivalent Young's modulus in a suitable perfusion culture environment.
NASA Astrophysics Data System (ADS)
Elshambaky, Hossam Talaat
2018-01-01
Owing to the appearance of many global geopotential models, it is necessary to determine the most appropriate model for use in Egyptian territory. In this study, we aim to investigate three global models, namely EGM2008, EIGEN-6c4, and GECO. We use five mathematical transformation techniques, i.e., polynomial expression, exponential regression, least-squares collocation, multilayer feed-forward neural network, and radial basis neural networks, to make the conversion from the regional geometrical geoid to the global geoid models and vice versa. From a statistical comparison study based on quality indexes between the previous transformation techniques, we confirm that the multilayer feed-forward neural network with two neurons is the most accurate of the examined transformation techniques, and based on the mean tide condition, EGM2008 represents the most suitable global geopotential model for use in Egyptian territory to date. The final product gained from this study was the corrector surface used to facilitate the transformation process between the regional geometrical geoid model and the global geoid model.
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
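A hedged sketch of the residual-monitoring idea: sensed outputs are compared with the predictions of a nominal engine model and samples whose residual exceeds a threshold are flagged. The trivially simple linear model and the threshold rule below are placeholders, not the piecewise-linear design described above:

```python
import numpy as np

def nominal_model(u):
    # stand-in for the piecewise-linear engine model evaluated at a trim point
    return 2.0 * u + 1.0

rng = np.random.default_rng(2)
u = np.linspace(0.0, 10.0, 200)
sensed = nominal_model(u) + rng.normal(0.0, 0.05, u.size)
sensed[150:] += 0.5                              # inject a seeded fault

residual = sensed - nominal_model(u)
threshold = 4.0 * residual[:100].std()           # set from known-nominal data
anomalies = np.flatnonzero(np.abs(residual) > threshold)
print("first flagged sample:", anomalies[0] if anomalies.size else "none")
```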
Time series modeling in traffic safety research.
Lavrenz, Steven M; Vlahogianni, Eleni I; Gkritza, Konstantina; Ke, Yue
2018-08-01
The use of statistical models for analyzing traffic safety (crash) data has been well-established. However, time series techniques have traditionally been underrepresented in the corresponding literature, due to challenges in data collection, along with a limited knowledge of proper methodology. In recent years, new types of high-resolution traffic safety data, especially in measuring driver behavior, have made time series modeling techniques an increasingly salient topic of study. Yet there remains a dearth of information to guide analysts in their use. This paper provides an overview of the state of the art in using time series models in traffic safety research, and discusses some of the fundamental techniques and considerations in classic time series modeling. It also presents ongoing and future opportunities for expanding the use of time series models, and explores newer modeling techniques, including computational intelligence models, which hold promise in effectively handling ever-larger data sets. The information contained herein is meant to guide safety researchers in understanding this broad area of transportation data analysis, and provide a framework for understanding safety trends that can influence policy-making. Copyright © 2017 Elsevier Ltd. All rights reserved.
Numerical model updating technique for structures using firefly algorithm
NASA Astrophysics Data System (ADS)
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique for updating numerical models of structures in civil, mechanical, automotive, marine, aerospace and other engineering fields. The basic concept behind this technique is updating the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close agreement can be achieved between the experimental and the numerical models.
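A compact, illustrative firefly-algorithm sketch of the updating loop: the design variable is the Young's modulus of a cantilever model and the objective is the mismatch between the model's tip deflection and a pretend measured value; all constants are assumptions:

```python
import numpy as np

L, I, P = 1.0, 8.33e-6, 1000.0                  # length [m], inertia [m^4], tip load [N]
measured_tip = P * L**3 / (3 * 70e9 * I)        # "experimental" deflection (E = 70 GPa)

def objective(E):
    return abs(P * L**3 / (3 * E * I) - measured_tip)

rng = np.random.default_rng(0)
fireflies = rng.uniform(30e9, 210e9, size=20)   # candidate E values
alpha, beta0, gamma = 5e9, 1.0, 1e-22

for _ in range(100):
    brightness = -np.array([objective(E) for E in fireflies])
    for i in range(fireflies.size):
        for j in range(fireflies.size):
            if brightness[j] > brightness[i]:   # move firefly i toward brighter firefly j
                attract = beta0 * np.exp(-gamma * (fireflies[i] - fireflies[j]) ** 2)
                fireflies[i] += attract * (fireflies[j] - fireflies[i]) + alpha * (rng.random() - 0.5)
    alpha *= 0.97                               # cool the random walk
print("updated Young's modulus [Pa]:", min(fireflies, key=objective))
```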
A hybrid SEA/modal technique for modeling structural-acoustic interior noise in rotorcraft.
Jayachandran, V; Bonilha, M W
2003-03-01
This paper describes a hybrid technique that combines Statistical Energy Analysis (SEA) predictions for structural vibration with acoustic modal summation techniques to predict interior noise levels in rotorcraft. The method was applied for predicting the sound field inside a mock-up of the interior panel system of the Sikorsky S-92 helicopter. The vibration amplitudes of the frame and panel systems were predicted using a detailed SEA model and these were used as inputs to the model of the interior acoustic space. The spatial distribution of the vibration field on individual panels, and their coupling to the acoustic space were modeled using stochastic techniques. Leakage and nonresonant transmission components were accounted for using space-averaged values obtained from a SEA model of the complete structural-acoustic system. Since the cabin geometry was quite simple, the modeling of the interior acoustic space was performed using a standard modal summation technique. Sound pressure levels predicted by this approach at specific microphone locations were compared with measured data. Agreement within 3 dB in one-third octave bands above 40 Hz was observed. A large discrepancy in the one-third octave band in which the first acoustic mode is resonant (31.5 Hz) was observed. Reasons for such a discrepancy are discussed in the paper. The developed technique provides a method for modeling helicopter cabin interior noise in the frequency mid-range where neither FEA nor SEA is individually effective or accurate.
ERIC Educational Resources Information Center
Crane, Loren D.
This paper describes six specific techniques that speech communication students may use in rehearsals to improve memory, to increase delivery skills, and to reduce speech stress. The techniques are idea association, covert modeling, desensitization, language elaboration, overt modeling, and self-regulation. Recent research is reviewed that…
Optimization of corrosion control for lead in drinking water using computational modeling techniques
Computational modeling techniques have been used to very good effect in the UK in the optimization of corrosion control for lead in drinking water. A “proof-of-concept” project with three US/CA case studies sought to demonstrate that such techniques could work equally well in the...
Vibrato in Singing Voice: The Link between Source-Filter and Sinusoidal Models
NASA Astrophysics Data System (ADS)
Arroabarren, Ixone; Carlosena, Alfonso
2004-12-01
The application of inverse filtering techniques for high-quality singing voice analysis/synthesis is discussed. In the context of source-filter models, inverse filtering provides a noninvasive method to extract the voice source, and thus to study voice quality. Although this approach is widely used in speech synthesis, this is not the case in singing voice. Several studies have proved that inverse filtering techniques fail in the case of singing voice, the reasons being unclear. In order to shed light on this problem, we will consider here an additional feature of singing voice, not present in speech: the vibrato. Vibrato has been traditionally studied by sinusoidal modeling. As an alternative, we will introduce here a novel noninteractive source filter model that incorporates the mechanisms of vibrato generation. This model will also allow the comparison of the results produced by inverse filtering techniques and by sinusoidal modeling, as they apply to singing voice and not to speech. In this way, the limitations of these conventional techniques, described in previous literature, will be explained. Both synthetic signals and singer recordings are used to validate and compare the techniques presented in the paper.
NASA Astrophysics Data System (ADS)
Sahoo, Sasmita; Jha, Madan K.
2013-12-01
The potential of multiple linear regression (MLR) and artificial neural network (ANN) techniques in predicting transient water levels over a groundwater basin was compared. MLR and ANN modeling was carried out at 17 sites in Japan, considering all significant inputs: rainfall, ambient temperature, river stage, 11 seasonal dummy variables, and influential lags of rainfall, ambient temperature, river stage and groundwater level. Seventeen site-specific ANN models were developed, using multi-layer feed-forward neural networks trained with the Levenberg-Marquardt backpropagation algorithm. The performance of the models was evaluated using statistical and graphical indicators. Comparison of the goodness-of-fit statistics of the MLR models with those of the ANN models indicated that there is better agreement between the ANN-predicted groundwater levels and the observed groundwater levels at all the sites, compared to the MLR. This finding was supported by the graphical indicators and the residual analysis. Thus, it is concluded that the ANN technique is superior to the MLR technique in predicting the spatio-temporal distribution of groundwater levels in a basin. However, considering the practical advantages of the MLR technique, it is recommended as an alternative and cost-effective groundwater modeling tool.
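A hedged sketch of the comparison described above, contrasting a multiple linear regression with a feed-forward network on lagged hydro-meteorological predictors; the file and column names are assumptions, and scikit-learn's default solver is used here rather than the Levenberg-Marquardt training reported in the study:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

df = pd.read_csv("site_timeseries.csv").dropna()       # hypothetical site data
X = df[["rainfall", "rainfall_lag1", "temperature", "river_stage", "gw_level_lag1"]]
y = df["gw_level"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.3)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("MLR R^2:", r2_score(y_te, mlr.predict(X_te)))
print("ANN R^2:", r2_score(y_te, ann.predict(X_te)))
```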
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and the semi-parametric splines technique for predicting stock prices in this study. The MARS model, as a nonparametric method, is an adaptive method for regression that is well suited to problems with high dimensions and several variables. Smoothing splines, used in the semi-parametric splines technique, is a nonparametric regression method. In this study, we used 40 variables (30 accounting variables and 10 economic variables) for predicting stock prices with the MARS model and with the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock prices using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.
Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model
NASA Astrophysics Data System (ADS)
Arumugam, S.; Libera, D.
2017-12-01
Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, which makes calibrating and validating mechanistic models difficult. Further, any physical model predictions inherently have bias (i.e., under/over estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique in improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast based on split-sample validation. The approach is a dimension reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes on the SWAT model simulated values. The common approach is a regression-based technique that uses an ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation while simultaneously reducing individual biases. Additionally, canonical correlation analysis does a better job in preserving the observed joint likelihood of observed streamflow and loadings. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of these two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.
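A rough sketch of the multivariate (CCA-based) correction: the observed streamflow/load pairs are regressed on the simulated pairs through canonical correlation, and new simulations are mapped through the fitted relation; the arrays below are synthetic placeholders for SWAT output and observations:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
sim = rng.lognormal(mean=1.0, sigma=0.5, size=(300, 2))       # simulated [flow, TN load]
obs = 0.8 * sim + rng.normal(0.0, 0.2, size=sim.shape) + 0.5  # synthetic "observations"

cca = CCA(n_components=2).fit(sim, obs)
corrected = cca.predict(sim)                                  # bias-corrected simulations

print("raw mean bias:      ", (sim - obs).mean(axis=0))
print("corrected mean bias:", (corrected - obs).mean(axis=0))
print("flow-load correlation after correction:", np.corrcoef(corrected.T)[0, 1])
```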
Modeling and Analysis of Power Processing Systems (MAPPS). Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Lee, F. C.; Rahman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.
1980-01-01
Computer aided design and analysis techniques were applied to power processing equipment. Topics covered include: (1) discrete time domain analysis of switching regulators for performance analysis; (2) design optimization of power converters using augmented Lagrangian penalty function technique; (3) investigation of current-injected multiloop controlled switching regulators; and (4) application of optimization for Navy VSTOL energy power system. The generation of the mathematical models and the development and application of computer aided design techniques to solve the different mathematical models are discussed. Recommendations are made for future work that would enhance the application of the computer aided design techniques for power processing systems.
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese Yen swaps market and the US dollar yield market.
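A toy sketch of the hint idea: a Vasicek yield-curve fitting error is augmented with a Kullback-Leibler consistency term that pushes the parameters toward a stationary short-rate distribution consistent with a historical histogram. The data, the hint weight, the optimizer and the single-factor form are illustrative simplifications of the multifactor calibration described above:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
rates_hist = rng.normal(0.03, 0.01, 2000)                 # historical short rates (synthetic)
bins = np.linspace(0.0, 0.08, 41)
centers = 0.5 * (bins[1:] + bins[:-1])
p_emp, _ = np.histogram(rates_hist, bins=bins)
p_emp = p_emp / p_emp.sum() + 1e-12

maturities = np.array([1.0, 2.0, 3.0, 4.0])
market_yields = np.array([0.030, 0.032, 0.034, 0.036])    # toy 1-4 year yields

def vasicek_yield(kappa, theta, sigma, r0, T):
    B = (1 - np.exp(-kappa * T)) / kappa
    lnA = (theta - sigma**2 / (2 * kappa**2)) * (B - T) - sigma**2 * B**2 / (4 * kappa)
    return -(lnA - B * r0) / T

def objective(x, lam=0.1):
    kappa, theta, sigma = x
    if kappa <= 0 or sigma <= 0:
        return 1e6                                        # keep the search in the valid region
    fit_err = np.sum((vasicek_yield(kappa, theta, sigma, 0.03, maturities) - market_yields) ** 2)
    # consistency hint: KL(empirical || stationary N(theta, sigma^2 / (2*kappa)))
    p_mod = norm.pdf(centers, theta, sigma / np.sqrt(2 * kappa))
    p_mod = p_mod / p_mod.sum() + 1e-12
    return fit_err + lam * np.sum(p_emp * np.log(p_emp / p_mod))

res = minimize(objective, x0=[0.5, 0.03, 0.01], method="Nelder-Mead")
print("calibrated (kappa, theta, sigma):", res.x)
```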
Numerical modeling of pollutant transport using a Lagrangian marker particle technique
NASA Technical Reports Server (NTRS)
Spaulding, M.
1976-01-01
A derivation and code were developed for the three-dimensional mass transport equation, using a particle-in-cell solution technique, to solve coastal zone waste discharge problems where particles are a major component of the waste. Improvements in the particle movement techniques are suggested and typical examples illustrated. Preliminary model comparisons with analytic solutions for an instantaneous point release in a uniform flow show good results in resolving the waste motion. The findings to date indicate that this computational model will provide a useful technique to study the motion of sediment, dredged spoils, and other particulate waste commonly deposited in coastal waters.
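A minimal marker-particle sketch in the spirit of the technique described above: particles from an instantaneous point release are advected by a uniform current and spread by a random walk standing in for turbulent diffusion; the current speed and diffusivity are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt, steps = 5000, 60.0, 200        # particles, time step [s], number of steps
u, v = 0.2, 0.0                       # ambient current [m/s]
D = 1.0                               # horizontal diffusivity [m^2/s]

pos = np.zeros((n, 2))                # all mass released at the origin
for _ in range(steps):
    pos[:, 0] += u * dt + np.sqrt(2 * D * dt) * rng.normal(size=n)
    pos[:, 1] += v * dt + np.sqrt(2 * D * dt) * rng.normal(size=n)

t_total = dt * steps
print("centroid:", pos.mean(axis=0), "(analytic:", (u * t_total, v * t_total), ")")
print("variance:", pos.var(axis=0), "(analytic:", 2 * D * t_total, "per axis)")
```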
Song, Mingzhi; Zhang, Zhen; Lu, Ming; Zong, Junwei; Dong, Chao; Ma, Kai; Wang, Shouyu
2014-08-09
Lateral mass screw fixation (LSF) techniques have been widely used for reconstructing and stabilizing the cervical spine; however, complications may result depending on the choice of surgeon. There are only a few reports related to LSF applications, even though fixation fracture has become a severe complication. This study establishes a three-dimensional finite element model of the lower cervical spine and compares the stress distribution of four LSF techniques (Magerl, Roy-Camille, Anderson, and An) following laminectomy, to explore the risk of rupture after fixation. CT scans were performed on a healthy adult female volunteer, and Digital Imaging and Communications in Medicine (DICOM) data were obtained. The Mimics 10.01, Geomagic Studio 12.0, Solidworks 2012, HyperMesh 10.1 and Abaqus 6.12 software programs were used to establish the intact model of the lower cervical spine (C3-C7), a postoperative model after laminectomy, and reconstructive models after applying the LSF techniques. A compressive preload of 74 N combined with a pure moment of 1.8 Nm was applied to the intact and reconstructive models, simulating normal flexion, extension, lateral bending, and axial rotation. The stress distribution of the four LSF techniques was compared by analyzing the maximum von Mises stress. The three-dimensional finite element model of the intact C3-C7 vertebrae was successfully established. This model consists of 503,911 elements and 93,390 nodes. In flexion, extension, lateral bending, and axial rotation, the intact model's angular intersegmental range of motion was in good agreement with results reported in the literature. The postoperative model after the three-segment laminectomy and the reconstructive models after applying the four LSF techniques were established based on the validated intact model. The stress distributions for the Magerl and Roy-Camille groups were more dispersed, and the maximum von Mises stress levels were lower than in the other two groups under the various loading conditions. The LSF techniques of Magerl and Roy-Camille are safer methods for stabilizing the lower cervical spine. Therefore, these methods potentially have a lower risk of fixation fracture.
Kapellusch, Jay M; Silverstein, Barbara A; Bao, Stephen S; Thiese, Mathew S; Merryweather, Andrew S; Hegmann, Kurt T; Garg, Arun
2018-02-01
The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value for hand activity level (TLV for HAL) have been shown to be associated with prevalence of distal upper-limb musculoskeletal disorders such as carpal tunnel syndrome (CTS). The SI and TLV for HAL disagree on more than half of task exposure classifications. Similarly, time-weighted average (TWA), peak, and typical exposure techniques used to quantify physical exposure from multi-task jobs have shown between-technique agreement ranging from 61% to 93%, depending upon whether the SI or TLV for HAL model was used. This study compared exposure-response relationships between each model-technique combination and prevalence of CTS. Physical exposure data from 1,834 workers (710 with multi-task jobs) were analyzed using the SI and TLV for HAL and the TWA, typical, and peak multi-task job exposure techniques. Additionally, exposure classifications from the SI and TLV for HAL were combined into a single measure and evaluated. Prevalent CTS cases were identified using symptoms and nerve-conduction studies. Mixed effects logistic regression was used to quantify exposure-response relationships between categorized (i.e., low, medium, and high) physical exposure and CTS prevalence for all model-technique combinations, and for multi-task workers, mono-task workers, and all workers combined. Except for the TWA TLV for HAL, all model-technique combinations showed monotonic increases in risk of CTS with increased physical exposure. The combined-models approach showed stronger association than the SI or TLV for HAL for multi-task workers. Despite differences in exposure classifications, nearly all model-technique combinations showed exposure-response relationships with prevalence of CTS for the combined sample of mono-task and multi-task workers. Both the TLV for HAL and the SI, with the TWA or typical techniques, appear useful for epidemiological studies and surveillance. However, the utility of the TWA, typical, and peak techniques for job design and intervention is dubious.
Flexible multibody simulation of automotive systems with non-modal model reduction techniques
NASA Astrophysics Data System (ADS)
Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter
2012-12-01
The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed with flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has traditionally been described with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics came into focus as a means to find more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, whereas the elastic shape functions are calculated with modern model reduction techniques like moment matching by projection on Krylov subspaces, singular value decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model consisting of tyres, steering, axle, etc. is considered, and an excitation with vibration characteristics in a wide frequency range is evaluated in this paper. The accuracy and the calculation performance of these modern model reduction techniques are investigated, including a comparison with the modal reduction approach.
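A small sketch of one of the SVD-based reduction steps mentioned above: snapshots of nodal displacements are collected and the leading left singular vectors are kept as elastic shape functions; the snapshot matrix here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
n_dof, n_snap = 1200, 80
# Synthetic snapshots dominated by a few smooth deformation patterns plus noise.
patterns = np.array([np.sin(np.linspace(0, (k + 1) * np.pi, n_dof)) for k in range(5)]).T
snapshots = patterns @ rng.normal(size=(5, n_snap)) + 1e-3 * rng.normal(size=(n_dof, n_snap))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # keep 99.9% of the snapshot energy
basis = U[:, :r]                              # reduced set of elastic shape functions
print("reduced basis size:", r, "out of", n_dof, "degrees of freedom")
```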
Nadeau-Fredette, Annie-Claire; Hawley, Carmel M.; Pascoe, Elaine M.; Chan, Christopher T.; Clayton, Philip A.; Polkinghorne, Kevan R.; Boudville, Neil; Leblanc, Martine
2015-01-01
Background and objectives Home dialysis is often recognized as a first-choice therapy for patients initiating dialysis. However, studies comparing clinical outcomes between peritoneal dialysis and home hemodialysis have been very limited. Design, setting, participants, & measurements This Australia and New Zealand Dialysis and Transplantation Registry study assessed all Australian and New Zealand adult patients receiving home dialysis on day 90 after initiation of RRT between 2000 and 2012. The primary outcome was overall survival. The secondary outcomes were on-treatment survival, patient and technique survival, and death-censored technique survival. All results were adjusted with three prespecified models: multivariable Cox proportional hazards model (main model), propensity score quintile–stratified model, and propensity score–matched model. Results The study included 10,710 patients on incident peritoneal dialysis and 706 patients on incident home hemodialysis. Treatment with home hemodialysis was associated with better patient survival than treatment with peritoneal dialysis (5-year survival: 85% versus 44%, respectively; log-rank P<0.001). Using multivariable Cox proportional hazards analysis, home hemodialysis was associated with superior patient survival (hazard ratio for overall death, 0.47; 95% confidence interval, 0.38 to 0.59) as well as better on-treatment survival (hazard ratio for on-treatment death, 0.34; 95% confidence interval, 0.26 to 0.45), composite patient and technique survival (hazard ratio for death or technique failure, 0.34; 95% confidence interval, 0.29 to 0.40), and death-censored technique survival (hazard ratio for technique failure, 0.34; 95% confidence interval, 0.28 to 0.41). Similar results were obtained with the propensity score models as well as sensitivity analyses using competing risks models and different definitions for technique failure and lag period after modality switch, during which events were attributed to the initial modality. Conclusions Home hemodialysis was associated with superior patient and technique survival compared with peritoneal dialysis. PMID:26068181
Laufer, Shlomi; D'Angelo, Anne-Lise D; Kwan, Calvin; Ray, Rebbeca D; Yudkowsky, Rachel; Boulet, John R; McGaghie, William C; Pugh, Carla M
2017-12-01
The aim was to develop new performance evaluation standards for the clinical breast examination (CBE). There are several technical aspects of a proper CBE. Our recent work discovered a significant, linear relationship between palpation force and CBE accuracy. This article investigates the relationship between other technical aspects of the CBE and accuracy. This performance assessment study involved data collection from physicians (n = 553) attending 3 different clinical meetings between 2013 and 2014: American Society of Breast Surgeons, American Academy of Family Physicians, and American College of Obstetricians and Gynecologists. Four previously validated, sensor-enabled breast models were used for clinical skills assessment. Models A and B had solitary, superficial, 2 cm and 1 cm soft masses, respectively. Models C and D had solitary, deep, 2 cm hard and moderately firm masses, respectively. Finger movements (search technique) from 1137 CBE video recordings were independently classified by 2 observers. Final classifications were compared with CBE accuracy. Accuracy rates were model A = 99.6%, model B = 89.7%, model C = 75%, and model D = 60%. Final classification categories for search technique included rubbing movement, vertical movement, piano fingers, and other. Interrater reliability was k = 0.79. Rubbing movement was 4 times more likely to yield an accurate assessment (odds ratio 3.81, P < 0.001) compared with vertical movement and piano fingers. Piano fingers had the highest failure rate (36.5%). Regression analysis of search pattern, search technique, palpation force, examination time, and 6 demographic variables revealed that search technique independently and significantly affected CBE accuracy (P < 0.001). Our results support measurement and classification of CBE techniques and provide the foundation for a new paradigm in teaching and assessing hands-on clinical skills. The newly described piano fingers palpation technique was noted to have unusually high failure rates. Medical educators should be aware of the potential differences in effectiveness for various CBE techniques.
NASA Astrophysics Data System (ADS)
Hirt, Christian; Reußner, Elisabeth; Rexer, Moritz; Kuhn, Michael
2016-09-01
Over the past years, spectral techniques have become a standard to model Earth's global gravity field to 10 km scales, with the EGM2008 geopotential model being a prominent example. For some geophysical applications of EGM2008, particularly Bouguer gravity computation with spectral techniques, a topographic potential model of adequate resolution is required. However, current topographic potential models have not yet been successfully validated to degree 2160, and notable discrepancies between spectral modeling and Newtonian (numerical) integration well beyond the 10 mGal level have been reported. Here we accurately compute and validate gravity implied by a degree 2160 model of Earth's topographic masses. Our experiments are based on two key strategies, both of which require advanced computational resources. First, we construct a spectrally complete model of the gravity field which is generated by the degree 2160 Earth topography model. This involves expansion of the topographic potential to the 15th integer power of the topography and modeling of short-scale gravity signals to an ultrahigh degree of 21,600, translating into unprecedentedly fine scales of 1 km. Second, we apply Newtonian integration in the space domain with high spatial resolution to reduce discretization errors. Our numerical study demonstrates excellent agreement (8 μGal RMS) between gravity from both forward modeling techniques and provides insight into the convergence process associated with spectral modeling of gravity signals at very short scales (a few km). As a key conclusion, our work successfully validates the spectral domain forward modeling technique for degree 2160 topography and increases the confidence in new high-resolution global Bouguer gravity maps.
Symbolically Modeling Concurrent MCAPI Executions
NASA Technical Reports Server (NTRS)
Fischer, Topher; Mercer, Eric; Rungta, Neha
2011-01-01
Improper use of Inter-Process Communication (IPC) within concurrent systems often creates data races which can lead to bugs that are challenging to discover. Techniques that use Satisfiability Modulo Theories (SMT) problems to symbolically model possible executions of concurrent software have recently been proposed for use in the formal verification of software. In this work we describe a new technique for modeling executions of concurrent software that use a message passing API called MCAPI. Our technique uses an execution trace to create an SMT problem that symbolically models all possible concurrent executions and follows the same sequence of conditional branch outcomes as the provided execution trace. We check if there exists a satisfying assignment to the SMT problem with respect to specific safety properties. If such an assignment exists, it provides the conditions that lead to the violation of the property. We show how our method models behaviors of MCAPI applications that are ignored in previously published techniques.
Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers
NASA Astrophysics Data System (ADS)
Caballero Morales, Santiago Omar; Cox, Stephen J.
2009-12-01
Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.
NASA Astrophysics Data System (ADS)
Makahinda, T.
2018-02-01
The purpose of this research is to determine the effect of a technology-based learning model and assessment technique on thermodynamics achievement, controlling for student intelligence. This research is experimental. The sample was taken through cluster random sampling with a total of 80 student respondents. The results show that the thermodynamics learning outcomes of students taught with the environment-utilization learning model are higher than those of students taught with animated simulations, after controlling for student intelligence. There is an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics learning outcomes, after controlling for student intelligence. Based on these findings, lectures should use the environment-based thermodynamics learning model together with the project assessment technique.
Summary on Several Key Techniques in 3D Geological Modeling
2014-01-01
Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized. PMID:24772029
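As a hedged sketch of one of the key techniques named above (spatial interpolation), scattered elevation picks of a geological interface can be interpolated onto a planar mesh to form a discrete surface; the borehole data below are synthetic:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(7)
xy = rng.uniform(0, 1000, size=(60, 2))                    # borehole locations [m]
z = -200 + 0.05 * xy[:, 0] + 10 * np.sin(xy[:, 1] / 150)   # interface elevation picks [m]

# Planar mesh generation (a regular grid here) followed by spatial interpolation.
gx, gy = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
surface = griddata(xy, z, (gx, gy), method="cubic")
print("interpolated interface grid:", surface.shape)
```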
NASA Astrophysics Data System (ADS)
Sari, N. M.; Nugroho, J. T.; Chulafak, G. A.; Kushardono, D.
2018-05-01
The coastal zone is an ecosystem with unique objects and phenomena. Aerial photo data with very high spatial resolution covering coastal areas have extensive potential. One such dataset is the LAPAN Surveillance UAV 02 (LSU-02) photo data, acquired in 2016 with a spatial resolution reaching 10 cm. This research aims to create an initial bathymetry model with the stereo photogrammetry technique using LSU-02 data. In this research the bathymetry model was made by constructing a 3D model with the stereo photogrammetry technique, which utilizes the dense point cloud created from the overlap of those photos. The result shows that the 3D bathymetry model can be built with the stereo photogrammetry technique, as can be seen from the surface and the bathymetry transect profile.
Multi-Level Reduced Order Modeling Equipped with Probabilistic Error Bounds
NASA Astrophysics Data System (ADS)
Abdo, Mohammad Gamal Mohammad Mostafa
This thesis develops robust reduced order modeling (ROM) techniques to achieve the efficiency needed to render feasible the use of high fidelity tools for routine engineering analyses. Markedly different from state-of-the-art ROM techniques, our work focuses only on techniques that can quantify the credibility of the reduction, measured by upper bounds on the reduction errors over the envisaged range of ROM model application. Our objective is two-fold. First, further developments of ROM techniques are proposed for cases in which conventional ROM techniques are too taxing to be computationally practical. This is achieved via a multi-level ROM methodology designed to take advantage of the multi-scale modeling strategy typically employed for computationally taxing models such as those associated with the modeling of nuclear reactor behavior. Second, the discrepancies between the original model and ROM model predictions over the full range of model application conditions are upper-bounded in a probabilistic sense with high probability. ROM techniques may be classified into two broad categories: surrogate construction techniques and dimensionality reduction techniques, with the latter being the primary focus of this work. We focus on dimensionality reduction because it offers a rigorous approach by which reduction errors can be quantified via upper bounds that are met in a probabilistic sense. Surrogate techniques typically rely on fitting a parametric model form to the original model at a number of training points, with the residual of the fit taken as a measure of the prediction accuracy of the surrogate. This approach, however, does not generally guarantee that the surrogate model predictions at points not included in the training process will be bounded by the error estimated from the fitting residual. Dimensionality reduction techniques, however, employ a different philosophy to render the reduction, wherein randomized snapshots of the model variables, such as the model parameters, responses, or state variables, are projected onto lower dimensional subspaces, referred to as the "active subspaces", which are selected to capture a user-defined portion of the snapshot variations. Once determined, the ROM model application involves constraining the variables to the active subspaces. In doing so, the contribution from the variables' discarded components can be estimated using a fundamental theorem from random matrix theory which has its roots in Dixon's theory, developed in 1983. This theory was initially presented for linear matrix operators. The thesis extends this theorem's results to allow reduction of general smooth nonlinear operators. The result is an approach by which the adequacy of a given active subspace, determined using a given set of snapshots generated either with the full high fidelity model or with other models of lower fidelity, can be assessed, which provides insight to the analyst on the type of snapshots required to reach a reduction that can satisfy user-defined preset tolerance limits on the reduction errors. Reactor physics calculations are employed as a test bed for the proposed developments. The focus will be on reducing the effective dimensionality of the various data streams such as the cross-section data and the neutron flux.
The developed methods will be applied to representative assembly level calculations, where the size of the cross-section and flux spaces are typically large, as required by downstream core calculations, in order to capture the broad range of conditions expected during reactor operation. (Abstract shortened by ProQuest.).
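A simplified sketch of the dimensionality-reduction workflow described above: randomized snapshots of a surrogate state are projected onto an active subspace obtained by SVD, and the discarded-component error is then checked on fresh random snapshots in the spirit of a probabilistic error bound. The toy model and tolerances are assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)
basis = np.vander(np.linspace(0, 1, 500), 8, increasing=True)

def model_state(p):
    # stand-in for an expensive model mapping 5 parameters to a 500-dim state
    return basis @ np.concatenate([p, 0.01 * np.tanh(p[:3])])

snapshots = np.column_stack([model_state(rng.normal(size=5)) for _ in range(40)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 1 - 1e-10)) + 1
Q = U[:, :r]                                   # active subspace

# Verify the reduction on random snapshots that were not used to build Q.
errors = []
for _ in range(200):
    x = model_state(rng.normal(size=5))
    errors.append(np.linalg.norm(x - Q @ (Q.T @ x)) / np.linalg.norm(x))
print("active subspace rank:", r, " max relative error over 200 tests:", max(errors))
```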
Singh, Jay; Chattterjee, Kalyan; Vishwakarma, C B
2018-01-01
A load frequency controller has been designed for reduced order models of single-area and two-area reheat hydro-thermal power systems using internal model control - proportional integral derivative (IMC-PID) control techniques. The controller design method is based on two-degree-of-freedom (2DOF) internal model control combined with a model order reduction technique. Here, instead of the full order system model, a reduced order model is used for the 2DOF-IMC-PID design, and the designed controller is applied directly to the full order system model. A logarithmic-based model order reduction technique is proposed to reduce the single-area and two-area high order power systems for the controller design. The proposed IMC-PID design based on the reduced order model achieves good dynamic response and robustness against load disturbance with the original high order system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Steady-state, lumped-parameter model for capacitor-run, single-phase induction motors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umans, S.D.
1996-01-01
This paper documents a technique for deriving a steady-state, lumped-parameter model for capacitor-run, single-phase induction motors. The objective of this model is to predict motor performance parameters such as torque, loss distribution, and efficiency as a function of applied voltage and motor speed as well as the temperatures of the stator windings and of the rotor. The model includes representations of both the main and auxiliary windings (including arbitrary external impedances) and also the effects of core and rotational losses. The technique can be easily implemented and the resultant model can be used in a wide variety of analyses to investigate motor performance as a function of load, speed, and winding and rotor temperatures. The technique is based upon a coupled-circuit representation of the induction motor. A notable feature of the model is the technique used for representing core loss. In equivalent-circuit representations of transformers and induction motors, core loss is typically represented by a core-loss resistance in shunt with the magnetizing inductance. In order to maintain the coupled-circuit viewpoint adopted in this paper, this technique was modified slightly; core loss is represented by a set of core-loss resistances connected to the "secondaries" of a set of windings which perfectly couple to the air-gap flux of the motor. An example of the technique is presented based upon a 3.5 kW, single-phase, capacitor-run motor and the validity of the technique is demonstrated by comparing predicted and measured motor performance.
Numerical simulation of coupled electrochemical and transport processes in battery systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liaw, B.Y.; Gu, W.B.; Wang, C.Y.
1997-12-31
Advanced numerical modeling to simulate dynamic battery performance characteristics for several types of advanced batteries is being conducted using computational fluid dynamics (CFD) techniques. The CFD techniques provide efficient algorithms to solve a large set of highly nonlinear partial differential equations that represent the complex battery behavior governed by coupled electrochemical reactions and transport processes. The authors have recently successfully applied such techniques to model advanced lead-acid, Ni-Cd and Ni-MH cells. In this paper, the authors briefly discuss how the governing equations were numerically implemented, show some preliminary modeling results, and compare them with other modeling or experimental data reported in the literature. The authors describe the advantages and implications of using the CFD techniques and their capabilities in future battery applications.
Purevsuren, Tserenchimed; Batbaatar, Myagmarbayar; Khuyagbaatar, Batbayar; Kim, Kyungsoo; Kim, Yoon Hyuk
2018-03-12
Biomechanical studies have indicated that the conventional non-anatomic reconstruction techniques for lateral ankle sprain (LAS) tend to restrict subtalar joint motion compared to intact ankle joints. Excessive restriction in subtalar motion may lead to chronic pain, functional difficulties, and development of osteoarthritis. Therefore, various anatomic surgical techniques to reconstruct both the anterior talofibular and calcaneofibular ligaments have been introduced. In this study, ankle joint stability was evaluated using a multibody computational ankle joint model to assess two new anatomic reconstruction and three popular non-anatomic reconstruction techniques. An LAS injury model, three popular non-anatomic reconstruction models (Watson-Jones, Evans, and Chrisman-Snook), and two common types of anatomic reconstruction models were developed based on the intact ankle model. The stability of the ankle in both the talocrural and subtalar joints was evaluated under an anterior drawer test (150 N anterior force), an inversion test (3 Nm inversion moment), an internal rotation test (3 Nm internal rotation moment), and a combined loading test (9 Nm inversion and internal moment as well as 1800 N compressive force). Our overall results show that the two anatomic reconstruction techniques were superior to the non-anatomic reconstruction techniques in stabilizing both the talocrural and subtalar joints. Restricted subtalar joint motion, which was mainly observed with the Watson-Jones and Chrisman-Snook techniques, was not seen in the anatomical reconstructions. The Evans technique was beneficial for the subtalar joint as it does not restrict subtalar motion, though it was insufficient for restoring talocrural joint inversion stability. The anatomical reconstruction techniques best recovered ankle stability.
A routinely applied atmospheric dispersion model was modified to evaluate alternative modeling techniques which allowed for more detailed source data, onsite meteorological data, and several dispersion methodologies. These were evaluated with hourly SO2 concentrations measured at...
Fienen, Michael N.; Nolan, Bernard T.; Feinstein, Daniel T.
2016-01-01
For decision support, the insights and predictive power of numerical process models can be hampered by insufficient expertise and computational resources required to evaluate system response to new stresses. An alternative is to emulate the process model with a statistical “metamodel.” Built on a dataset of collocated numerical model input and output, a groundwater flow model was emulated using a Bayesian Network, an Artificial Neural Network, and a Gradient Boosted Regression Tree. The response of interest was surface water depletion expressed as the source of water-to-wells. The results have application for managing allocation of groundwater. Each technique was tuned using cross validation and further evaluated using a held-out dataset. A numerical MODFLOW-USG model of the Lake Michigan Basin, USA, was used for the evaluation. The performance and interpretability of each technique were compared, pointing to the advantages of each. The metamodel can extend to unmodeled areas.
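The following is a minimal sketch of the metamodeling idea described above: an expensive numerical model is emulated by a gradient boosted regression tree that is tuned by cross validation and checked on a held-out set. The "numerical model" and its inputs are synthetic stand-ins, not the MODFLOW-USG dataset.

```python
# Illustrative sketch: emulate an expensive process model with a statistical
# metamodel (gradient boosted regression tree). Synthetic stand-in data only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)

def expensive_numerical_model(X):
    # Stand-in for a process-model run (e.g., a predicted depletion fraction).
    return 0.6 * X[:, 0] - 0.3 * X[:, 1] ** 2 + 0.1 * np.sin(3 * X[:, 2])

X = rng.uniform(-1, 1, size=(500, 3))         # collocated model inputs
y = expensive_numerical_model(X)              # collocated model outputs

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3]},
    cv=5,
)
search.fit(X_train, y_train)                  # tune by cross validation
print("held-out R^2:", search.score(X_test, y_test))
```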
Three-dimensional accuracy of different impression techniques for dental implants
Nakhaei, Mohammadreza; Madani, Azam S; Moraditalab, Azizollah; Haghi, Hamidreza Rajati
2015-01-01
Background: Accurate impression making is an essential prerequisite for achieving a passive fit between the implant and the superstructure. The aim of this in vitro study was to compare the three-dimensional accuracy of open-tray and three closed-tray impression techniques. Materials and Methods: Three acrylic resin mandibular master models with four parallel implants were used: Biohorizons (BIO), Straumann tissue-level (STL), and Straumann bone-level (SBL). Forty-two putty/wash polyvinyl siloxane impressions of the models were made using open-tray and closed-tray techniques. Closed-tray impressions were made using snap-on (STL model), transfer coping (TC) (BIO model) and TC plus plastic cap (TC-Cap) (SBL model). The impressions were poured with type IV stone, and the positional accuracy of the implant analog heads in each dimension (x, y and z axes), and the linear displacement (ΔR) were evaluated using a coordinate measuring machine. Data were analyzed using ANOVA and post-hoc Tukey tests (α = 0.05). Results: The ΔR values of the snap-on technique were significantly lower than those of TC and TC-Cap techniques (P < 0.001). No significant differences were found between closed and open impression techniques for STL in Δx, Δy, Δz and ΔR values (P = 0.444, P = 0.181, P = 0.835 and P = 0.911, respectively). Conclusion: Considering the limitations of this study, the snap-on implant-level impression technique resulted in more three-dimensional accuracy than TC and TC-Cap, but it was similar to the open-tray technique. PMID:26604956
Weighted least squares techniques for improved received signal strength based localization.
Tarrío, Paula; Bernardos, Ana M; Casar, José R
2011-01-01
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
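Below is a minimal sketch of the weighted least squares multilateration idea in the abstract above: RSS readings are converted to ranges with a log-distance path-loss model, the circle equations are linearized, and a weighted linear least squares solve gives the position. The path-loss parameters and the 1/d^2 weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of RSS-based circular (multilateration) positioning solved by
# weighted linear least squares. Parameters and weights are assumptions.
import numpy as np

def rss_to_range(rss_dbm, p0_dbm=-40.0, n=2.5, d0=1.0):
    # Log-distance path-loss model inverted to a range estimate.
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10 * n))

def wls_position(anchors, ranges, weights):
    # Linearize the circle equations against the last anchor and solve
    # p = (A^T W A)^-1 A^T W b.
    xn, yn = anchors[-1]
    dn = ranges[-1]
    A, b = [], []
    for (xi, yi), di in zip(anchors[:-1], ranges[:-1]):
        A.append([2 * (xn - xi), 2 * (yn - yi)])
        b.append(di**2 - dn**2 - xi**2 - yi**2 + xn**2 + yn**2)
    A, b = np.asarray(A), np.asarray(b)
    W = np.diag(weights[:-1])
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rss = np.array([-62.0, -70.0, -66.0, -74.0])     # measured RSS per anchor (made up)
ranges = rss_to_range(rss)
weights = 1.0 / ranges**2                        # trust nearer anchors more
print("estimated position:", wls_position(anchors, ranges, weights))
```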
Politis, Argyris; Schmidt, Carla
2018-03-20
Structural mass spectrometry with its various techniques is a powerful tool for the structural elucidation of medically relevant protein assemblies. It delivers information on the composition, stoichiometries, interactions and topologies of these assemblies. Most importantly it can deal with heterogeneous mixtures and assemblies which makes it universal among the conventional structural techniques. In this review we summarise recent advances and challenges in structural mass spectrometric techniques. We describe how the combination of the different mass spectrometry-based methods with computational strategies enable structural models at molecular levels of resolution. These models hold significant potential for helping us in characterizing the function of protein assemblies related to human health and disease. In this review we summarise the techniques of structural mass spectrometry often applied when studying protein-ligand complexes. We exemplify these techniques through recent examples from literature that helped in the understanding of medically relevant protein assemblies. We further provide a detailed introduction into various computational approaches that can be integrated with these mass spectrometric techniques. Last but not least we discuss case studies that integrated mass spectrometry and computational modelling approaches and yielded models of medically important protein assembly states such as fibrils and amyloids. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Improved Slip Casting Of Ceramic Models
NASA Technical Reports Server (NTRS)
Buck, Gregory M.; Vasquez, Peter; Hicks, Lana P.
1994-01-01
Improved technique of investment slip casting developed for making precise ceramic wind-tunnel models. Needed in wind-tunnel experiments to verify predictions of aerothermodynamical computer codes. Ceramic materials used because of their low heat conductivities and ability to survive high temperatures. Present improved slip-casting technique enables casting of highly detailed models from aqueous or nonaqueous solutions. Wet shell molds peeled off models to ensure precise and undamaged details. Used at NASA Langley Research Center to form superconducting ceramic components from nonaqueous slip solutions. Technique has many more applications when ceramic materials developed further for such high-strength/ temperature components as engine parts.
Gebhard, Harry; James, Andrew R.; Bowles, Robby D.; Dyke, Jonathan P.; Saleh, Tatianna; Doty, Stephen P.; Bonassar, Lawrence J.; Härtl, Roger
2011-01-01
Study design: Prospective randomized animal study. Objective: To determine a surgical technique for reproducible and functional intervertebral disc replacement in an orthotopic animal model. Methods: The caudal 3/4 intervertebral disc (IVD) of the rat tail was approached by two surgical techniques: blunt dissection, stripping and retracting (Technique 1) or incising and repairing (Technique 2) the dorsal longitudinal tendons. The intervertebral disc was dissected and removed, and then either discarded or reinserted. Outcome measures were perioperative complications, spontaneous tail movement, 7T MRI (T1- and T2-sequences for measurement of disc space height (DSH) and disc hydration). Microcomputed tomographic imaging (micro CT) was additionally performed postmortem. Results: No vascular injuries occurred and no systemic or local infections were observed over the course of 1 month. Tail movements were maintained. With tendon retraction (Technique 1) gross loss of DSH occurred with both discectomy and reinsertion. Tendon division (Technique 2) maintained DSH with IVD reinsertion but not without. The DSH was demonstrated on MRI measurement. A new scoring system to assess IVD appearances was described. Conclusions: The rat tail model, with a tendon dividing surgical technique, can function as an orthotopic animal model for IVD research. Mechanical stimulation is maintained by preserved tail movements. 7T MRI is a feasible modality for longitudinal monitoring for the rat caudal disc. PMID:22956934
3D Modeling Techniques for Print and Digital Media
NASA Astrophysics Data System (ADS)
Stephens, Megan Ashley
In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.
Rapid Model Fabrication and Testing for Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Buck, Gregory M.
2000-01-01
Advanced methods for rapid fabrication and instrumentation of hypersonic wind tunnel models are being developed and evaluated at NASA Langley Research Center. Rapid aeroheating model fabrication and measurement techniques using investment casting of ceramic test models and thermographic phosphors are reviewed. More accurate model casting techniques for fabrication of benchmark metal and ceramic test models are being developed using a combination of rapid prototype patterns and investment casting. White light optical scanning is used for coordinate measurements to evaluate the fabrication process and verify model accuracy to +/- 0.002 inches. Higher-temperature (<210C) luminescent coatings are also being developed for simultaneous pressure and temperature mapping, providing global pressure as well as global aeroheating measurements. Together these techniques will provide a more rapid and complete experimental aerodynamic and aerothermodynamic database for future aerospace vehicles.
Continuum Modeling of Inductor Hysteresis and Eddy Current Loss Effects in Resonant Circuits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pries, Jason L.; Tang, Lixin; Burress, Timothy A.
This paper presents experimental validation of a high-fidelity toroid inductor modeling technique. The aim of this research is to accurately model the instantaneous magnetization state and core losses in ferromagnetic materials. Quasi-static hysteresis effects are captured using a Preisach model. Eddy currents are included by coupling the associated quasi-static Everett function to a simple finite element model representing the inductor cross-sectional area. The modeling technique is validated against the nonlinear frequency response from two different series RLC resonant circuits using inductors made of electrical steel and soft ferrite. The method is shown to accurately model shifts in resonant frequency and quality factor. The technique also successfully predicts a discontinuity in the frequency response of the ferrite inductor resonant circuit.
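As an illustration of the quasi-static hysteresis idea mentioned above, here is a minimal sketch of a discrete scalar Preisach model built from a grid of relay hysterons. The uniform weighting and grid size are arbitrary choices, and the Everett-function and eddy-current coupling of the paper are not reproduced.

```python
# Minimal sketch of a discrete scalar Preisach hysteresis model: a triangular
# grid of relay hysterons with switching thresholds beta <= alpha.
import numpy as np

class PreisachModel:
    def __init__(self, h_sat=1.0, n=60):
        a = np.linspace(-h_sat, h_sat, n)
        A, B = np.meshgrid(a, a, indexing="ij")      # alpha (switch up), beta (switch down)
        self.mask = A >= B                           # valid hysterons
        self.alpha, self.beta = A, B
        self.state = -np.ones_like(A)                # start negatively saturated
        self.weight = self.mask / self.mask.sum()    # uniform Preisach density (assumption)

    def apply_field(self, h):
        # Relays switch up when h exceeds alpha, down when h drops below beta.
        self.state[(h >= self.alpha) & self.mask] = 1.0
        self.state[(h <= self.beta) & self.mask] = -1.0
        return float(np.sum(self.weight * self.state))

model = PreisachModel()
h_path = np.concatenate([np.linspace(-1, 1, 100), np.linspace(1, 0, 50)])
m_path = [model.apply_field(h) for h in h_path]      # ascending branch, then partial descent
print("remanence at h=0 on the descending branch:", m_path[-1])
```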
Application of the Shell/3D Modeling Technique for the Analysis of Skin-Stiffener Debond Specimens
NASA Technical Reports Server (NTRS)
Krueger, Ronald; O'Brien, T. Kevin; Minguet, Pierre J.
2002-01-01
The application of a shell/3D modeling technique for the simulation of skin/stringer debond in a specimen subjected to three-point bending is demonstrated. The global structure was modeled with shell elements. A local three-dimensional model, extending to about three specimen thicknesses on either side of the delamination front, was used to capture the details of the damaged section. Computed total strain energy release rates and mixed-mode ratios obtained from shell/3D simulations were in good agreement with results obtained from full solid models. The good correlations of the results demonstrated the effectiveness of the shell/3D modeling technique for the investigation of skin/stiffener separation due to delamination in the adherends.
Energy and technology review: Engineering modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabayan, H.S.; Goudreau, G.L.; Ziolkowski, R.W.
1986-10-01
This report presents information concerning: Modeling Canonical Problems in Electromagnetic Coupling Through Apertures; Finite-Element Codes for Computing Electrostatic Fields; Finite-Element Modeling of Electromagnetic Phenomena; Modeling Microwave-Pulse Compression in a Resonant Cavity; Lagrangian Finite-Element Analysis of Penetration Mechanics; Crashworthiness Engineering; Computer Modeling of Metal-Forming Processes; Thermal-Mechanical Modeling of Tungsten Arc Welding; Modeling Air Breakdown Induced by Electromagnetic Fields; Iterative Techniques for Solving Boltzmann's Equations for p-Type Semiconductors; Semiconductor Modeling; and Improved Numerical-Solution Techniques in Large-Scale Stress Analysis.
Cole-Cole, linear and multivariate modeling of capacitance data for on-line monitoring of biomass.
Dabros, Michal; Dennewald, Danielle; Currie, David J; Lee, Mark H; Todd, Robert W; Marison, Ian W; von Stockar, Urs
2009-02-01
This work evaluates three techniques of calibrating capacitance (dielectric) spectrometers used for on-line monitoring of biomass: modeling of cell properties using the theoretical Cole-Cole equation, linear regression of dual-frequency capacitance measurements on biomass concentration, and multivariate (PLS) modeling of scanning dielectric spectra. The performance and robustness of each technique are assessed during a sequence of validation batches in two experimental settings of differing signal noise. In more noisy conditions, the Cole-Cole model had significantly higher biomass concentration prediction errors than the linear and multivariate models. The PLS model was the most robust in handling signal noise. In less noisy conditions, the three models performed similarly. Estimates of the mean cell size were additionally made using the Cole-Cole and PLS models, the latter technique giving more satisfactory results.
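A minimal sketch of the second calibration technique named above (linear regression of dual-frequency capacitance on biomass) is shown here. All readings are synthetic stand-ins for spectrometer data, and the two measurement frequencies are illustrative assumptions.

```python
# Minimal sketch of dual-frequency capacitance calibration: biomass is regressed
# on the difference between a low- and a high-frequency capacitance reading.
import numpy as np

biomass = np.array([1.0, 2.5, 5.0, 7.5, 10.0, 15.0])      # g/L (offline reference, synthetic)
cap_low = np.array([2.1, 4.8, 9.7, 14.9, 19.6, 29.8])     # pF/cm at ~0.5 MHz (synthetic)
cap_high = np.array([0.4, 0.5, 0.6, 0.7, 0.8, 1.0])       # pF/cm at ~10 MHz (synthetic)

delta_cap = cap_low - cap_high                             # background-corrected signal
slope, intercept = np.polyfit(delta_cap, biomass, deg=1)   # linear calibration

def predict_biomass(c_low, c_high):
    return slope * (c_low - c_high) + intercept

print("predicted biomass at new reading:", predict_biomass(12.0, 0.65))
```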
A Simplified Technique for Scoring DSM-IV Personality Disorders with the Five-Factor Model
ERIC Educational Resources Information Center
Miller, Joshua D.; Bagby, R. Michael; Pilkonis, Paul A.; Reynolds, Sarah K.; Lynam, Donald R.
2005-01-01
The current study compares the use of two alternative methodologies for using the Five-Factor Model (FFM) to assess personality disorders (PDs). Across two clinical samples, a technique using the simple sum of selected FFM facets is compared with a previously used prototype matching technique. The results demonstrate that the more easily…
An Example of a Hakomi Technique Adapted for Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Collis, Peter
2012-01-01
Functional Analytic Psychotherapy (FAP) is a model of therapy that lends itself to integration with other therapy models. This paper aims to provide an example to assist others in assimilating techniques from other forms of therapy into FAP. A technique from the Hakomi Method is outlined and modified for FAP. As, on the whole, psychotherapy…
Using object-oriented analysis techniques to support system testing
NASA Astrophysics Data System (ADS)
Zucconi, Lin
1990-03-01
Testing of real-time control systems can be greatly facilitated by use of object-oriented and structured analysis modeling techniques. This report describes a project where behavior, process and information models built for a real-time control system were used to augment and aid traditional system testing. The modeling techniques used were an adaptation of the Ward/Mellor method for real-time systems analysis and design (Ward85) for object-oriented development. The models were used to simulate system behavior by means of hand execution of the behavior or state model and the associated process (data and control flow) and information (data) models. The information model, which uses an extended entity-relationship modeling technique, is used to identify application domain objects and their attributes (instance variables). The behavioral model uses state-transition diagrams to describe the state-dependent behavior of the object. The process model uses a transformation schema to describe the operations performed on or by the object. Together, these models provide a means of analyzing and specifying a system in terms of the static and dynamic properties of the objects which it manipulates. The various models were used to simultaneously capture knowledge about both the objects in the application domain and the system implementation. Models were constructed, verified against the software as-built and validated through informal reviews with the developer. These models were then hand-executed.
Application of separable parameter space techniques to multi-tracer PET compartment modeling
Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J
2016-01-01
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888
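The core idea above, separating the linear and nonlinear parts of a compartment-model fit, can be sketched with a generic variable-projection example: the curve is linear in the amplitude coefficients and nonlinear in the rate constants, so only the rates are searched nonlinearly. This two-exponential toy is an illustration, not the paper's multi-tracer formulation.

```python
# Minimal sketch of separable least squares (variable projection): the model
# y(t) = c1*exp(-k1*t) + c2*exp(-k2*t) is linear in (c1, c2) and nonlinear in
# (k1, k2); the linear part is solved exactly inside the nonlinear search.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 60, 121)
rng = np.random.default_rng(1)
y_obs = 3.0 * np.exp(-0.15 * t) + 1.0 * np.exp(-0.02 * t) + 0.02 * rng.standard_normal(t.size)

def basis(rates):
    return np.column_stack([np.exp(-k * t) for k in rates])

def residuals(rates):
    A = basis(rates)
    coeffs, *_ = np.linalg.lstsq(A, y_obs, rcond=None)   # linear amplitudes solved exactly
    return A @ coeffs - y_obs

fit = least_squares(residuals, x0=[0.5, 0.01], bounds=(1e-4, 10.0))  # Levenberg-style search over rates only
A = basis(fit.x)
coeffs, *_ = np.linalg.lstsq(A, y_obs, rcond=None)
print("rates:", fit.x, "amplitudes:", coeffs)
```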
NASA Technical Reports Server (NTRS)
Takacs, Lawrence L.; Sawyer, William; Suarez, Max J. (Editor); Fox-Rabinowitz, Michael S.
1999-01-01
This report documents the techniques used to filter quantities on a stretched grid general circulation model. Standard high-latitude filtering techniques (e.g., using an FFT (Fast Fourier Transformations) to decompose and filter unstable harmonics at selected latitudes) applied on a stretched grid are shown to produce significant distortions of the prognostic state when used to control instabilities near the pole. A new filtering technique is developed which accurately accounts for the non-uniform grid by computing the eigenvectors and eigenfrequencies associated with the stretching. A filter function, constructed to selectively damp those modes whose associated eigenfrequencies exceed some critical value, is used to construct a set of grid-spaced weights which are shown to effectively filter without distortion. Both offline and GCM (General Circulation Model) experiments are shown using the new filtering technique. Finally, a brief examination is also made on the impact of applying the Shapiro filter on the stretched grid.
Meta-Analytic Structural Equation Modeling (MASEM): Comparison of the Multivariate Methods
ERIC Educational Resources Information Center
Zhang, Ying
2011-01-01
Meta-analytic Structural Equation Modeling (MASEM) has drawn interest from many researchers recently. In doing MASEM, researchers usually first synthesize correlation matrices across studies using meta-analysis techniques and then analyze the pooled correlation matrix using structural equation modeling techniques. Several multivariate methods of…
Machine learning models in breast cancer survival prediction.
Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin
2016-01-01
Breast cancer is one of the most common cancers with a high mortality rate among women. With early diagnosis of breast cancer, survival will increase from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for the early diagnosis of this cancer. The proposed model is the combination of rules and different machine learning techniques. Machine learning models can help physicians to reduce the number of false decisions. They try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of different types of breast cancer survival. We use a dataset with eight attributes that includes the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) were male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with a 10-fold cross-validation technique were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Out of 900 patients, 803 patients and 97 patients were alive and dead, respectively. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, and MLP). The accuracy, sensitivity, and area under the ROC curve of TRF are 96%, 96%, and 93%, respectively. However, the 1NN machine learning technique provided poor performance (accuracy 91%, sensitivity 91%, and area under the ROC curve 78%). This study demonstrates that the Trees Random Forest (TRF) model, which is a rule-based classification model, was the best model with the highest level of accuracy. Therefore, this model is recommended as a useful tool for breast cancer survival prediction as well as medical decision making.
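A minimal sketch of the evaluation scheme described above, a random forest classifier scored with 10-fold cross-validation on accuracy and area under the ROC curve, is shown below. Synthetic, imbalanced data stands in for the eight-attribute patient records; the rule-based component of the proposed model is not reproduced.

```python
# Minimal sketch: random forest survival classifier evaluated with 10-fold
# cross-validation (accuracy, ROC AUC, recall). Synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=900, n_features=8, weights=[0.89, 0.11], random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(clf, X, y, cv=10, scoring=["accuracy", "roc_auc", "recall"])

print("accuracy:", scores["test_accuracy"].mean())
print("ROC AUC :", scores["test_roc_auc"].mean())
print("recall  :", scores["test_recall"].mean())
```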
A technique using a nonlinear helicopter model for determining trims and derivatives
NASA Technical Reports Server (NTRS)
Ostroff, A. J.; Downing, D. R.; Rood, W. J.
1976-01-01
A technique is described for determining the trims and quasi-static derivatives of a flight vehicle for use in a linear perturbation model; both the coupled and uncoupled forms of the linear perturbation model are included. Since this technique requires a nonlinear vehicle model, detailed equations with constants and nonlinear functions for the CH-47B tandem rotor helicopter are presented. Tables of trims and derivatives are included for airspeeds between -40 and 160 knots and rates of descent between ±10.16 m/sec (±2000 ft/min). As a verification, the calculated and referenced values of comparable trims, derivatives, and linear model poles are shown to have acceptable agreement.
2014-01-01
Background Lateral mass screw fixation (LSF) techniques have been widely used for reconstructing and stabilizing the cervical spine; however, complications may result depending on the choice of surgeon. There are only a few reports related to LSF applications, even though fixation fracture has become a severe complication. This study establishes a three-dimensional finite element model of the lower cervical spine and compares the stress distribution of the four LSF techniques (Magerl, Roy-Camille, Anderson, and An) following laminectomy, to explore the risks of rupture after fixation. Method CT scans were performed on a healthy adult female volunteer, and Digital Imaging and Communications in Medicine (DICOM) data were obtained. Mimics 10.01, Geomagic Studio 12.0, Solidworks 2012, HyperMesh 10.1 and Abaqus 6.12 software programs were used to establish the intact model of the lower cervical spine (C3-C7), a postoperative model after laminectomy, and a reconstructive model after applying the LSF techniques. A compressive preload of 74 N combined with a pure moment of 1.8 Nm was applied to the intact and reconstructive models, simulating normal flexion, extension, lateral bending, and axial rotation. The stress distribution of the four LSF techniques was compared by analyzing the maximum von Mises stress. Result The three-dimensional finite element model of the intact C3-C7 vertebrae was successfully established. This model consists of 503,911 elements and 93,390 nodes. During flexion, extension, lateral bending, and axial rotation modes, the intact model's angular intersegmental range of motion was in good agreement with the results reported in the literature. The postoperative model after the three-segment laminectomy and the reconstructive model after applying the four LSF techniques were established based on the validated intact model. The stress distribution for the Magerl and Roy-Camille groups was more dispersive, and the maximum von Mises stress levels were lower than in the other two groups in various conditions. Conclusion The LSF techniques of Magerl and Roy-Camille are safer methods for stabilizing the lower cervical spine. Therefore, these methods potentially have a lower risk of fixation fracture. PMID:25106498
Comparison of Three Optical Methods for Measuring Model Deformation
NASA Technical Reports Server (NTRS)
Burner, A. W.; Fleming, G. A.; Hoppe, J. C.
2000-01-01
The objective of this paper is to compare the current state-of-the-art of the following three optical techniques under study by NASA for measuring model deformation in wind tunnels: (1) video photogrammetry, (2) projection moire interferometry, and (3) the commercially available Optotrak system. An objective comparison of these three techniques should enable the selection of the best technique for a particular test undertaken at various NASA facilities. As might be expected, no one technique is best for all applications. The techniques are also not necessarily mutually exclusive and in some cases can be complementary to one another.
Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena
2013-12-01
There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™, 3 M ESPE and the four impression techniques used were (1) Monophase impression technique using medium body impression material. (2) One step double mix impression technique using heavy body and light body impression materials simultaneously. (3) Two step double mix impression technique using a cellophane spacer (heavy body material used as a preliminary impression to create a wash space with a cellophane spacer, followed by the use of light body material). (4) Matrix impression using a matrix of polyether occlusal registration material. The matrix is loaded with heavy body material followed by a pick-up impression in medium body material. For each technique, thirty impressions were made of a stainless steel master model that contained three complete crown abutment preparations, which were used as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance in difference of the distances between the master model and the stone models. One way analysis of variance (ANOVA) was used for multiple group comparison followed by the Bonferroni's test for pair wise comparison. The accuracy was tested at α = 0.05. In general, polyether impression material produced stone dies that were smaller except for the dies produced from the one step double mix impression technique. The ANOVA revealed a highly significant difference for each dimension measured (except for the inter-abutment distance between the first and the second die) between any two groups of stone models obtained from the four impression techniques. Pair wise comparison for each measurement did not reveal any significant difference (except for the faciolingual distance of the third die) between the casts produced using the two step double mix impression technique and the matrix impression system. The two step double mix impression technique produced stone dies that showed the least dimensional variation. During fabrication of a cast restoration, laboratory procedures should not only compensate for the cement thickness, but also for the increase or decrease in die dimensions.
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Daniel L.
1991-01-01
Semi-Markov processes have proved to be an effective and convenient tool to construct models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a model and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement and the error bound easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.
Three-dimensional accuracy of a digitally coded healing abutment implant impression system.
Ng, Simon D; Tan, Keson B; Teoh, K H; Cheng, Ansgar C; Nicholls, Jack I
2014-01-01
This study examined the three-dimensional (3D) accuracy of the Encode Impression System (EN) in transferring the locations of two implants from master models to test models and compared this to the direct impression (DI) technique. The effect of interimplant angulation on the 3D accuracy of both impression techniques was also evaluated. Seven sectional polymethyl methacrylate mandibular arch master models were fabricated with implants in the first premolar and first molar positions. The implants were placed parallel to each other or angulated mesiodistally or buccolingually with total divergent angles of 10, 20, or 30 degrees. Each master model was secured onto an aluminum block containing a gauge block, which defined the local coordinate references. Encode healing abutments were attached to the implants before impressions were made for the EN test models; pickup impression copings were attached for the DI test models. For the seven test groups of each impression technique, a total of 70 test models were fabricated (n = 5). The EN test models were sent to Biomet 3i for implant analog placement. The centroid of each implant or implant analog and the angular orientation of the long axis relative to the x- and y-axes were measured with a coordinate measuring machine. Statistical analyses were performed. Impression technique had a significant effect on y distortion, global linear distortion, and absolute xz and yz angular distortions. Interimplant angulation had significant effects on x and y distortions. However, neither impression technique nor interimplant angulation had a significant effect on z distortion. Distortions were observed with both impression techniques. However, the results suggest that EN was less accurate than DI.
Prostate Cancer Probability Prediction By Machine Learning Technique.
Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena
2017-11-26
The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. In order to improve the survival probability of prostate cancer patients, it is essential to make suitable prediction models of prostate cancer. If one makes a relevant prediction of prostate cancer, it is easy to create a suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models. Therefore, in this study several machine learning techniques were applied and compared. The obtained results were analyzed and discussed. It was concluded that machine learning techniques could be used for the relevant prediction of prostate cancer.
Aortic Root Biomechanics After Sleeve and David Sparing Techniques: A Finite Element Analysis.
Tasca, Giordano; Selmi, Matteo; Votta, Emiliano; Redaelli, Paola; Sturla, Francesco; Redaelli, Alberto; Gamba, Amando
2017-05-01
Aortic root aneurysm can be treated with valve-sparing procedures. The David and Yacoub techniques have shown excellent long-term results but are technically demanding. Recently, a new and simpler procedure, the Sleeve technique, was proposed with encouraging results. We aimed to quantify the biomechanics of the initially aneurysmal aortic root (AR) after the Sleeve procedure to assess whether it induces abnormal stresses, potentially undermining its durability. Two finite element (FE) models of the physiologic and aneurysmal AR were built, accounting for the anatomical asymmetry and the nonlinear and anisotropic mechanical properties of human AR tissues. On the aneurysmal model, the Sleeve and David techniques were simulated based on the corresponding published technical features. Aortic root biomechanics throughout 2 consecutive cardiac cycles were computed in each simulated configuration. Both sparing techniques restored physiologic-like kinematics of aortic valve (AV) leaflets but induced different leaflets stresses. The time course averaged over the leaflets' bellies was 35% higher in the David model than in the Sleeve model. Commissural stresses, which were equal to 153 and 318 kPa in the physiologic and aneurysmal models, respectively, became 369 and 208 kPa in the David and Sleeve models, respectively. No intrinsic structural problems were detected in the Sleeve model that might jeopardize the durability of the procedure. If corroborated by long-term clinical outcomes, the results obtained suggest that using this new technique could successfully simplify the surgical repair of AR aneurysms and reduce intraoperative complications. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Kim, Hyong Nyun; Liu, Xiao Ning; Noh, Kyu Cheol
2015-06-10
Open reduction and plate fixation is the standard operative treatment for displaced midshaft clavicle fracture. However, sometimes it is difficult to achieve anatomic reduction by open reduction technique in cases with comminution. We describe a novel technique using a real-size three dimensionally (3D)-printed clavicle model as a preoperative and intraoperative tool for minimally invasive plating of displaced comminuted midshaft clavicle fractures. A computed tomography (CT) scan is taken of both clavicles in patients with a unilateral displaced comminuted midshaft clavicle fracture. Both clavicles are 3D printed into a real-size clavicle model. Using the mirror imaging technique, the uninjured side clavicle is 3D printed into the opposite side model to produce a suitable replica of the fractured side clavicle pre-injury. The 3D-printed fractured clavicle model allows the surgeon to observe and manipulate accurate anatomical replicas of the fractured bone to assist in fracture reduction prior to surgery. The 3D-printed uninjured clavicle model can be utilized as a template to select the anatomically precontoured locking plate which best fits the model. The plate can be inserted through a small incision and fixed with locking screws without exposing the fracture site. Seven comminuted clavicle fractures treated with this technique achieved good bone union. This technique can be used for a unilateral displaced comminuted midshaft clavicle fracture when it is difficult to achieve anatomic reduction by open reduction technique. Level of evidence V.
Simulation Modelling in Healthcare: An Umbrella Review of Systematic Literature Reviews.
Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Booth, Andrew
2017-09-01
Numerous studies examine simulation modelling in healthcare. These studies present a bewildering array of simulation techniques and applications, making it challenging to characterise the literature. The aim of this paper is to provide an overview of the level of activity of simulation modelling in healthcare and the key themes. We performed an umbrella review of systematic literature reviews of simulation modelling in healthcare. Searches were conducted of academic databases (JSTOR, Scopus, PubMed, IEEE, SAGE, ACM, Wiley Online Library, ScienceDirect) and grey literature sources, enhanced by citation searches. The articles were included if they performed a systematic review of simulation modelling techniques in healthcare. After quality assessment of all included articles, data were extracted on numbers of studies included in each review, types of applications, techniques used for simulation modelling, data sources and simulation software. The search strategy yielded a total of 117 potential articles. Following sifting, 37 heterogeneous reviews were included. Most reviews achieved moderate quality rating on a modified AMSTAR (A Measurement Tool used to Assess systematic Reviews) checklist. All the review articles described the types of applications used for simulation modelling; 15 reviews described techniques used for simulation modelling; three reviews described data sources used for simulation modelling; and six reviews described software used for simulation modelling. The remaining reviews either did not report or did not provide enough detail for the data to be extracted. Simulation modelling techniques have been used for a wide range of applications in healthcare, with a variety of software tools and data sources. The number of reviews published in recent years suggest an increased interest in simulation modelling in healthcare.
NASA Technical Reports Server (NTRS)
Evers, Ken H.; Bachert, Robert F.
1987-01-01
The IDEAL (Integrated Design and Engineering Analysis Languages) modeling methodology has been formulated and applied over a five-year period. It has proven to be a unique, integrated approach utilizing a top-down, structured technique to define and document the system of interest; a knowledge engineering technique to collect and organize system descriptive information; a rapid prototyping technique to perform preliminary system performance analysis; and a sophisticated simulation technique to perform in-depth system performance analysis.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1981-01-01
Progress is reported in reading MAGSAT tapes and in the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere. The modeling technique utilizes a linear current element representation of the large-scale space-current system.
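A minimal sketch of the linear-current-element idea is given below: the field at a satellite location is the Biot-Savart sum over short straight current segments, each evaluated at its midpoint. The loop geometry, altitude, and current are placeholders, not the actual large-scale space-current system.

```python
# Minimal sketch of a linear-current-element field computation (Biot-Savart sum
# over straight segments). Geometry and current magnitudes are placeholders.
import numpy as np

MU0 = 4e-7 * np.pi  # T*m/A

def biot_savart(obs_point, nodes, current):
    # nodes: (N, 3) polyline of the current path carrying `current` amperes.
    b = np.zeros(3)
    for p0, p1 in zip(nodes[:-1], nodes[1:]):
        dl = p1 - p0
        r = obs_point - 0.5 * (p0 + p1)            # midpoint rule per element
        b += MU0 * current / (4 * np.pi) * np.cross(dl, r) / np.linalg.norm(r) ** 3
    return b

# Placeholder: a circular current loop of radius 500 km at 110 km altitude,
# observed from a satellite at 400 km altitude above its center.
theta = np.linspace(0, 2 * np.pi, 200)
loop = np.column_stack([5e5 * np.cos(theta), 5e5 * np.sin(theta), np.full_like(theta, 1.1e5)])
satellite = np.array([0.0, 0.0, 4.0e5])
print("B at satellite (nT):", 1e9 * biot_savart(satellite, loop, current=1.0e5))
```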
Modelling and Simulation for Requirements Engineering and Options Analysis
2010-05-01
should be performed to work successfully in the domain; and process-based techniques model the processes that occur in the work domain. Can the current technique for developing simulation models for assessments...
Spectral Analysis and Experimental Modeling of Ice Accretion Roughness
NASA Technical Reports Server (NTRS)
Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.
1996-01-01
A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique of quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally-obtained accretion images to a prescribed test function. Analysis using this technique for both streamwise and spanwise directions of data from the NASA Lewis Icing Research Tunnel (IRT) are presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment, E(R^m), is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values for E(R^m) for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
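The general idea, simulating a random stress history, extracting its extrema, and estimating a range moment E[R^m], can be sketched as follows. The AR(2) coefficients are arbitrary, and simple successive-extrema ranges stand in for full rainflow counting and for the thesis's dedicated extrema-synthesis model.

```python
# Minimal sketch: simulate a narrow-band stress history with an AR(2) model,
# extract local extrema, and estimate a range moment E[R^m].
import numpy as np

rng = np.random.default_rng(2)

def simulate_ar2(n, a1=1.8, a2=-0.95):
    # Stationary, oscillatory AR(2) process (roots inside the unit circle).
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for i in range(2, n):
        x[i] = a1 * x[i - 1] + a2 * x[i - 2] + e[i]
    return x

def extrema(x):
    d = np.diff(x)
    idx = np.where(np.sign(d[1:]) != np.sign(d[:-1]))[0] + 1
    return x[idx]                        # alternating peaks and valleys

x = simulate_ar2(100_000)
ranges = np.abs(np.diff(extrema(x)))     # peak-to-valley ranges
m = 3
print("estimated E[R^m]:", np.mean(ranges ** m))
```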
ERIC Educational Resources Information Center
Muchlas
2015-01-01
This research is aimed to produce a teaching model and its supporting instruments using a collaboration approach for a digital technique practical work attended by higher education students. The model is found to be flexible and relatively low cost. Through this research, feasibility and learning impact of the model will be determined. The model…
Session on techniques and resources for storm-scale numerical weather prediction
NASA Technical Reports Server (NTRS)
Droegemeier, Kelvin
1993-01-01
The session on techniques and resources for storm-scale numerical weather prediction is reviewed. The recommendations of this group are broken down into three areas: modeling and prediction, data requirements in support of modeling and prediction, and data management. The current status, modeling and technological recommendations, data requirements in support of modeling and prediction, and data management are addressed.
An Application of Conley Index Techniques to a Model of Bursting in Excitable Membranes
NASA Astrophysics Data System (ADS)
Kinney, William M.
2000-04-01
Assumptions about a model of bursting activity in pancreatic β-cells are stated and a neighborhood of the attractor in this model is constructed. Conley index results and techniques are used to give a sufficient condition for a singular isolating neighborhood to isolate a nonempty attractor. Finally, this theorem is applied to the bursting model.
Low level waste management: a compilation of models and monitoring techniques. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosier, J.E.; Fowler, J.R.; Barton, C.J.
1980-04-01
In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.
Laurence Lin; J.R. Webster
2012-01-01
The constant nutrient addition technique has been used extensively to measure nutrient uptake in streams. However, this technique is impractical for large streams, and the pulse nutrient addition (PNA) has been suggested as an alternative. We developed a computer model to simulate Monod kinetics nutrient uptake in large rivers and used this model to evaluate the...
Cathy A. Taylor; C. John Ralph; Arlene T. Doyle
1988-01-01
Three trapping techniques for small mammals were used in 47 study stands in northern California and southern Oregon and resulted in different capture frequencies by the different techniques. In addition, the abundances of mammals derived from the different techniques produced vegetation association models which were often quite different. Only the California redbacked...
The application of a unique flow modeling technique to complex combustion systems
NASA Astrophysics Data System (ADS)
Waslo, J.; Hasegawa, T.; Hilt, M. B.
1986-06-01
This paper describes the application of a unique three-dimensional water flow modeling technique to the study of complex fluid flow patterns within an advanced gas turbine combustor. The visualization technique uses light scattering, coupled with real-time image processing, to determine flow fields. Additional image processing is used to make concentration measurements within the combustor.
NASA Astrophysics Data System (ADS)
Shoukry, Samir N.; William, Gergis W.; Riad, Mourad Y.; McBride, Kevyn C.
2006-08-01
Dynamic relaxation is a technique developed to solve static problems through an explicit integration in finite element analysis. The main advantage of such a technique is the ability to solve a large problem in a relatively short time compared with the traditional implicit techniques, especially when using nonlinear material models. This paper describes the use of such a technique in analyzing large transportation structures such as dowel-jointed concrete pavements and a 306-m-long reinforced concrete bridge superstructure under the effect of temperature variations. The main feature of the pavement model is the detailed modeling of dowel bars and their interfaces with the surrounding concrete using an extremely fine mesh of solid elements, while in the bridge structure it is the detailed modeling of the girder-deck interface as well as the bracing members between the girders. The 3DFE results were found to be in good agreement with experimentally measured data obtained from instrumented pavement sections and a highway bridge constructed in West Virginia. Thus, such a technique provides a good tool for analyzing the response of large structures to static loads in a fraction of the time required by traditional, implicit finite element methods.
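A minimal sketch of the dynamic relaxation idea follows: the static equilibrium K u = f is found by explicitly integrating a fictitious damped dynamic system until motion dies out. The one-dimensional spring chain, fictitious mass, damping, and time step are placeholders, not the pavement or bridge models discussed above.

```python
# Minimal sketch of dynamic relaxation: solve K u = f by explicit pseudo-dynamic
# iteration of M a + C v + K u = f until velocities vanish.
import numpy as np

n, k = 20, 1.0e3
K = np.zeros((n, n))                        # stiffness of a fixed-free spring chain
for i in range(n):
    K[i, i] = 2 * k if i < n - 1 else k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k
f = np.zeros(n)
f[-1] = 50.0                                # load at the free end

m, c, dt = 1.0, 10.0, 1.0e-3                # fictitious mass, damping, time step
u, v = np.zeros(n), np.zeros(n)
for step in range(200_000):
    a = (f - K @ u - c * v) / m             # explicit update (semi-implicit Euler)
    v += a * dt
    u += v * dt
    if np.linalg.norm(v) < 1e-10:
        break

print("converged after", step, "steps")
print("max error vs direct solve:", np.max(np.abs(u - np.linalg.solve(K, f))))
```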
A three-dimensional muscle activity imaging technique for assessing pelvic muscle function
NASA Astrophysics Data System (ADS)
Zhang, Yingchun; Wang, Dan; Timm, Gerald W.
2010-11-01
A novel multi-channel surface electromyography (EMG)-based three-dimensional muscle activity imaging (MAI) technique has been developed by combining the bioelectrical source reconstruction approach and subject-specific finite element modeling approach. Internal muscle activities are modeled by a current density distribution and estimated from the intra-vaginal surface EMG signals with the aid of a weighted minimum norm estimation algorithm. The MAI technique was employed to minimally invasively reconstruct electrical activity in the pelvic floor muscles and urethral sphincter from multi-channel intra-vaginal surface EMG recordings. A series of computer simulations were conducted to evaluate the performance of the present MAI technique. With appropriate numerical modeling and inverse estimation techniques, we have demonstrated the capability of the MAI technique to accurately reconstruct internal muscle activities from surface EMG recordings. This MAI technique combined with traditional EMG signal analysis techniques is being used to study etiologic factors associated with stress urinary incontinence in women by correlating functional status of muscles characterized from the intra-vaginal surface EMG measurements with the specific pelvic muscle groups that generated these signals. The developed MAI technique described herein holds promise for eliminating the need to place needle electrodes into muscles to obtain accurate EMG recordings in some clinical applications.
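The weighted minimum norm estimation step named above can be sketched as a Tikhonov-regularized inverse: internal current density j is recovered from surface potentials v through a forward lead-field matrix L. The lead field below is random placeholder data rather than a subject-specific finite element model, and the depth-compensating weights and regularization choice are assumptions.

```python
# Minimal sketch of a weighted minimum norm estimate:
# j = W^-1 L^T (L W^-1 L^T + lambda*I)^-1 v, with random placeholder lead field.
import numpy as np

rng = np.random.default_rng(3)
n_electrodes, n_sources = 16, 400
L = rng.standard_normal((n_electrodes, n_sources))        # forward model (placeholder)

w = np.linalg.norm(L, axis=0) ** 2                        # depth-compensating weights
W_inv = np.diag(1.0 / w)

j_true = np.zeros(n_sources)
j_true[190:210] = 1.0                                     # a small active muscle region
v = L @ j_true + 0.01 * rng.standard_normal(n_electrodes) # simulated surface EMG map

lam = 1e-2 * np.trace(L @ W_inv @ L.T) / n_electrodes     # simple regularization choice
G = L @ W_inv @ L.T + lam * np.eye(n_electrodes)
j_hat = W_inv @ L.T @ np.linalg.solve(G, v)

print("correlation with true pattern:", np.corrcoef(j_hat, j_true)[0, 1])
```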
Reduced-order modeling for hyperthermia: an extended balanced-realization-based approach.
Mattingly, M; Bailey, E A; Dutton, A W; Roemer, R B; Devasia, S
1998-09-01
Accurate thermal models are needed in hyperthermia cancer treatments for such tasks as actuator and sensor placement design, parameter estimation, and feedback temperature control. The complexity of the human body produces full-order models which are too large for effective execution of these tasks, making use of reduced-order models necessary. However, standard balanced-realization (SBR)-based model reduction techniques require a priori knowledge of the particular placement of actuators and sensors for model reduction. Since placement design is intractable (computationally) on the full-order models, SBR techniques must use ad hoc placements. To alleviate this problem, an extended balanced-realization (EBR)-based model-order reduction approach is presented. The new technique allows model order reduction to be performed over all possible placement designs and does not require ad hoc placement designs. It is shown that models obtained using the EBR method are more robust to intratreatment changes in the placement of the applied power field than those models obtained using the SBR method.
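For reference, a minimal sketch of SBR-style model reduction (square-root balanced truncation) is given below: Lyapunov gramians, Cholesky factors, an SVD, then projection to a reduced order. The random stable system is a placeholder, not a thermal model of perfused tissue, and the extended (EBR) formulation of the paper is not reproduced.

```python
# Minimal sketch of standard balanced-realization truncation (square-root form).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(4)
n, m, p, r = 10, 2, 2, 4                         # full order, inputs, outputs, reduced order
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift eigenvalues to make A stable
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

P = solve_continuous_lyapunov(A, -B @ B.T)       # controllability gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)     # observability gramian
jitter = 1e-10 * np.eye(n)                       # guard against numerical loss of definiteness
Lc = cholesky(P + jitter, lower=True)
Lo = cholesky(Q + jitter, lower=True)

U, s, Vt = svd(Lo.T @ Lc)                        # s holds the Hankel singular values
S = np.diag(s[:r] ** -0.5)
Tl = S @ U[:, :r].T @ Lo.T                       # left and right projection matrices
Tr = Lc @ Vt[:r, :].T @ S

Ar, Br, Cr = Tl @ A @ Tr, Tl @ B, C @ Tr         # reduced-order (Ar, Br, Cr)
print("Hankel singular values:", s)
```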
Managerial Techniques in Educational Administration.
ERIC Educational Resources Information Center
Lane, John J.
1983-01-01
Management techniques developed during the past 20 years assume the rational bureaucratic model. School administration requires contingent techniques. Quality Circle, Theory Z, and the McKinsey 7-S Framework are discussed as techniques to increase school productivity. (MD)
Order reduction for a model of marine bacteriophage evolution
NASA Astrophysics Data System (ADS)
Pagliarini, Silvia; Korobeinikov, Andrei
2017-02-01
A typical mechanistic model of viral evolution necessarily includes several time scales which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult. Reducing the order of a model is highly desirable when handling such a model. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique. Constructing the so-called quasi-steady-state approximation is the usual first step in applying the technique. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative but not a quantitative fit.
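A minimal sketch of the quasi-steady-state construction is shown on a generic slow-fast toy system (not the Beretta-Kuang model itself): the fast variable y relaxes on a time scale eps << 1 and is replaced by its slow-manifold value, and the reduced equation is compared against the full system.

```python
# Minimal sketch of a quasi-steady-state approximation on a toy slow-fast system:
# dx/dt = y - x^3 (slow), eps*dy/dt = x - y (fast). QSSA sets y = x, giving the
# reduced equation dx/dt = x - x^3.
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-3

def full_system(t, z):
    x, y = z
    return [y - x**3, (x - y) / eps]       # slow equation, fast equation

def reduced_system(t, x):
    return x - x**3                        # QSSA: fast variable replaced by y = x

t_eval = np.linspace(0, 10, 200)
full = solve_ivp(full_system, (0, 10), [0.1, 0.5], t_eval=t_eval, method="LSODA")
qssa = solve_ivp(reduced_system, (0, 10), [0.1], t_eval=t_eval)

print("max |x_full - x_qssa| after the initial layer:",
      np.max(np.abs(full.y[0][20:] - qssa.y[0][20:])))
```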
Extending enterprise architecture modelling with business goals and requirements
NASA Astrophysics Data System (ADS)
Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten
2011-02-01
The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.
NASA Astrophysics Data System (ADS)
Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal
2014-06-01
This study presents Artificial Intelligence (AI)-based modeling of total bed material load, aimed at improving the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for development and validation of the applied techniques. In order to assess the applied techniques in relation to traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. Nonetheless, it was found that the k-fold test is a practical but computationally costly technique for completely scanning the applied data and avoiding over-fitting.
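A small hedged sketch of the k-fold test mentioned above, scoring a generic regressor on synthetic sediment-style data; the predictor names and values are assumptions, not the Qotur River data set, and a gradient-boosting regressor merely stands in for the GEP/ANFIS models.

```python
# k-fold cross-validation of a stand-in regressor for total bed material load.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
discharge = rng.uniform(5.0, 200.0, 150)                 # assumed dominant factor
slope = rng.uniform(1e-4, 1e-2, 150)                     # assumed secondary factor
load = 0.8 * discharge ** 1.2 * slope ** 0.3 * rng.lognormal(0.0, 0.2, 150)

X = np.column_stack([discharge, slope])
model = GradientBoostingRegressor(random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)     # the "k-fold test"
scores = cross_val_score(model, X, load, cv=cv, scoring="r2")
print(scores.mean(), scores.std())                       # average fit and its variability
```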
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call the “shelf Coulomb” model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. Initial simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique allows us to estimate the melting curve position and a triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10⁻⁴.
Video: two novel endoscopic esophageal lengthening and reconstruction techniques.
Perretta, Silvana; Wall, James K; Dallemagne, Bernard; Harrison, Michael; Becmeur, François; Marescaux, Jacques
2011-10-01
Esophageal reconstruction presents a significant clinical challenge in patients ranging from neonates with long-gap esophageal atresia to adults after esophageal resection. Both gastric and colonic replacement conduits carry significant morbidity. As emerging organ-sparing techniques become established for early-stage esophageal tumors, less morbid reconstruction techniques are warranted. We present two novel endoscopic approaches for esophageal lengthening and reconstruction in a porcine model. Two models of esophageal defects were created in pigs (30-35 kg) under general anesthesia and subsequently reconstructed with the novel techniques. The first model was a segmental defect of the esophagus created by thoracoscopically transecting the esophagus above the gastroesophageal (GE) junction. The first reconstruction technique involved bilateral submucosal endoscopic lengthening myotomies (BSELM) with a magnetic compression anastomosis (MAGNAMOSIS™). The second model was a wedge defect in the anterior esophagus created above the GE junction through a laparotomy. The second reconstruction technique involved an inverted mucosal-submucosal sleeve transposition graft (IMSTG) that crossed the esophageal gap and was secured in place with a self-expandable covered esophageal stent. Both techniques were feasible in the pig model. The BSELM approach lengthened the esophagus 1 cm for every 2 cm length of myotomy. The myotomy targeted only the inner circular fibers of the esophagus, with preservation of the longitudinal layer to protect against long-term dilation and pouching. The IMSTG approach generated a vascularized mucosal graft almost as long as the esophagus itself. Emerging endoscopic capabilities are enabling complex endoluminal esophageal procedures. BSELM and IMSTG are two novel and technically feasible approaches to esophageal lengthening and reconstruction. Further survival studies are needed to establish the safety and efficacy of these techniques.
Key technique study and application of infrared thermography in hypersonic wind tunnel
NASA Astrophysics Data System (ADS)
LI, Ming; Yang, Yan-guang; Li, Zhi-hui; Zhu, Zhi-wei; Zhou, Jia-sui
2014-11-01
Solutions to several key issues in applying the infrared thermographic technique in a hypersonic wind tunnel are studied, including temperature measurement at large viewing angles, the correspondence between model spatial coordinates and coordinates in the infrared map, and the measurement uncertainty analysis of the test data. Typical results from hypersonic wind tunnel tests are presented, including a comparison of heat transfer rates on a thin-skin flat plate model with a wedge measured with infrared thermography and with thermocouples, an experimental study of the heating effect on the flat plate model impinged by plume flow, and the aerodynamic heating on a lift model.
Terrain modeling for microwave landing system
NASA Technical Reports Server (NTRS)
Poulose, M. M.
1991-01-01
A powerful analytical approach for evaluating the terrain effects on a microwave landing system (MLS) is presented. The approach combines a multiplate model with a powerful and exhaustive ray tracing technique and an accurate formulation for estimating the electromagnetic fields due to the antenna array in the presence of terrain. Both uniform theory of diffraction (UTD) and impedance UTD techniques have been employed to evaluate these fields. Innovative techniques are introduced at each stage to make the model versatile to handle most general terrain contours and also to reduce the computational requirement to a minimum. The model is applied to several terrain geometries, and the results are discussed.
Using Decision Trees for Estimating Mode Choice of Trips in Buca-Izmir
NASA Astrophysics Data System (ADS)
Oral, L. O.; Tecim, V.
2013-05-01
Decision makers develop transportation plans and models for providing sustainable transport systems in urban areas. Mode choice is one of the stages in transportation modelling. Data mining techniques can discover factors affecting the mode choice, and these techniques can be applied with a knowledge process approach. In this study, a data mining process model is applied to determine the factors affecting the mode choice with decision tree techniques, considering individual trip behaviours from household survey data collected within the Izmir Transportation Master Plan. From this perspective, the transport mode choice problem is solved for a case in the district of Buca-Izmir, Turkey, with the CRISP-DM knowledge process model.
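A hedged sketch of the decision-tree step described above; the feature names and the handful of synthetic trip records are assumptions standing in for the household survey data, and a scikit-learn tree simply illustrates the modelling stage of a CRISP-DM-style workflow.

```python
# Toy mode-choice model: predict the travel mode from a few trip attributes.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

trips = pd.DataFrame({
    "age":       [23, 45, 31, 60, 19, 52, 37, 28],
    "car_owner": [0, 1, 1, 1, 0, 1, 0, 0],
    "trip_km":   [2.5, 12.0, 6.3, 3.1, 1.2, 15.4, 7.8, 4.0],
    "mode":      ["bus", "car", "car", "walk", "walk", "car", "bus", "bus"],
})
X, y = trips[["age", "car_owner", "trip_km"]], trips["mode"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(tree.score(X_te, y_te))          # share of correctly predicted modes
```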
Modeling of switching regulator power stages with and without zero-inductor-current dwell time
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Yu, Y.
1979-01-01
State-space techniques are employed to derive accurate models for the three basic switching converter power stages: buck, boost, and buck/boost operating with and without zero-inductor-current dwell time. A generalized procedure is developed which treats the continuous-inductor-current mode without dwell time as a special case of the discontinuous-current mode when the dwell time vanishes. Abrupt changes of system behavior, including a reduction of the system order when the dwell time appears, are shown both analytically and experimentally. Merits resulting from the present modeling technique in comparison with existing modeling techniques are illustrated.
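As a worked illustration of the state-space averaging idea for the continuous-current case (no dwell time), the sketch below averages the two interval models of an ideal buck converter and recovers the familiar DC operating point; the component values and duty cycle are assumptions, and the nonideal and discontinuous-mode cases treated in the paper are not modeled.

```python
# State-space averaging for an ideal buck converter in continuous conduction mode.
# States x = [iL, vC]; averaged model: x' = (d*A1 + (1-d)*A2) x + (d*B1 + (1-d)*B2) Vin.
import numpy as np

L_, C_, R_, Vin, d = 100e-6, 47e-6, 10.0, 24.0, 0.5       # assumed example values

A1 = np.array([[0.0, -1.0 / L_], [1.0 / C_, -1.0 / (R_ * C_)]])   # switch on
B1 = np.array([[1.0 / L_], [0.0]])
A2 = A1.copy()                                             # same topology when off (ideal buck)
B2 = np.array([[0.0], [0.0]])

A = d * A1 + (1.0 - d) * A2                                # duty-cycle-weighted average
B = d * B1 + (1.0 - d) * B2

x_dc = -np.linalg.solve(A, B * Vin)                        # steady state: 0 = A x + B Vin
print(x_dc.ravel())                                        # expect iL ≈ d*Vin/R, vC ≈ d*Vin
```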
An animal model for instructing and the study of in situ arterial bypass.
Saifi, J; Chang, B B; Paty, P S; Kaufman, J; Leather, R P; Shah, D M
1990-11-01
A canine model that used the cephalic vein to bypass from the brachial to the ulnar artery was designed for use in instructing and evaluating surgical technique needed for constructing an in situ arterial bypass. This model was used for instructing vascular residents in the in situ vein bypass technique. The use of this model enabled the resident to become more adept with the instruments for valve incision and construction of small vessel anastomosis. The improvement in the resident's operative technique was reflected by a decrease in the number of technical complications (missed valves, missed arteriovenous fistulas, poorly constructed anastomoses) and improved patency rate.
Binder, Harald; Porzelius, Christine; Schumacher, Martin
2011-03-01
Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Marini, C; Fossa, F; Paoli, C; Bellingeri, M; Gnone, G; Vassallo, P
2015-03-01
Habitat modeling is an important tool to investigate the quality of the habitat for a species within a certain area, to predict species distribution and to understand the ecological processes behind it. Many species have been investigated by means of habitat modeling techniques, mainly to address effective management and protection policies, and cetaceans play an important role in this context. The bottlenose dolphin (Tursiops truncatus) has been investigated with habitat modeling techniques since 1997. The objectives of this work were to predict the distribution of the bottlenose dolphin in a coastal area through the use of static morphological features and to compare the prediction performances of three different modeling techniques: Generalized Linear Model (GLM), Generalized Additive Model (GAM) and Random Forest (RF). Four static variables were tested: depth, bottom slope, distance from the 100 m bathymetric contour and distance from the coast. RF proved to be both the most accurate and the most precise modeling technique, with very high distribution probabilities predicted in presence cells (90.4% of mean predicted probabilities) and with 66.7% of presence cells with a predicted probability between 90% and 100%. The bottlenose dolphin distribution obtained with RF allowed the identification of specific areas with particularly high presence probability along the coastal zone; the recognition of these core areas may be the starting point to develop effective management practices to improve T. truncatus protection. Copyright © 2014 Elsevier Ltd. All rights reserved.
Mücke, Thomas; Ritschl, Lucas M; Balasso, Andrea; Wolff, Klaus-Dietrich; Mitchell, David A; Liepsch, Dieter
2014-01-01
The end-to-side anastomosis is frequently used in microvascular free flap transfer, but detailed rheological analyses are not available. The purpose of this study was to introduce a new modified end-to-side (Opened End-to-Side, OES-) technique and compare the resulting flow pattern to a conventional technique. The new technique was based on a bi-triangulated preparation of the branching-vessel end, resulting in a "fish-mouthed" opening. We performed two different types of end-to-side anastomoses in forty pig coronary arteries and produced one elastic, true-to-scale silicone rubber model of each anastomosis. Then we installed the transparent models in a circulatory experimental setup that simulated the physiological human blood flow. Flow velocity was measured with the one-component Laser-Doppler-Anemometer system, recording flow axial and perpendicular to the model at four defined cross-sections for seven heart cycles in each model. Maximal and minimal axial velocities ranged in the conventional model between 0.269 and -0.122 m/s and in the experimental model between 0.313 and -0.153 m/s. A less disturbed flow velocity distribution was seen in the experimental model distal to the anastomosis. The OES-technique showed superior flow profiles distal to the anastomosis with minor tendencies of flow separation and represents a new alternative for end-to-side anastomosis. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Martins, J. M. P.; Thuillier, S.; Andrade-Campos, A.
2018-05-01
The identification of material parameters, for a given constitutive model, can be seen as the first step before any practical application. In the last years, the field of material parameters identification received an important boost with the development of full-field measurement techniques, such as Digital Image Correlation. These techniques enable the use of heterogeneous displacement/strain fields, which contain more information than the classical homogeneous tests. Consequently, different techniques have been developed to extract material parameters from full-field measurements. In this study, two of these techniques are addressed, the Finite Element Model Updating (FEMU) and the Virtual Fields Method (VFM). The main idea behind FEMU is to update the parameters of a constitutive model implemented in a finite element model until both numerical and experimental results match, whereas VFM makes use of the Principle of Virtual Work and does not require any finite element simulation. Though both techniques proved their feasibility in linear and non-linear constitutive models, it is rather difficult to rank their robustness in plasticity. The purpose of this work is to perform a comparative study in the case of elasto-plastic models. Details concerning the implementation of each strategy are presented. Moreover, a dedicated code for VFM within a large strain framework is developed. The reconstruction of the stress field is performed through a user subroutine. A heterogeneous tensile test is considered to compare FEMU and VFM strategies.
Protein Modelling: What Happened to the “Protein Structure Gap”?
Schwede, Torsten
2013-01-01
Computational modeling and prediction of three-dimensional macromolecular structures and complexes from their sequence has been a long standing vision in structural biology as it holds the promise to bypass part of the laborious process of experimental structure solution. Over the last two decades, a paradigm shift has occurred: starting from a situation where the “structure knowledge gap” between the huge number of protein sequences and small number of known structures has hampered the widespread use of structure-based approaches in life science research, today some form of structural information – either experimental or computational – is available for the majority of amino acids encoded by common model organism genomes. Template based homology modeling techniques have matured to a point where they are now routinely used to complement experimental techniques. With the scientific focus of interest moving towards larger macromolecular complexes and dynamic networks of interactions, the integration of computational modeling methods with low-resolution experimental techniques allows studying large and complex molecular machines. Computational modeling and prediction techniques are still facing a number of challenges which hamper the more widespread use by the non-expert scientist. For example, it is often difficult to convey the underlying assumptions of a computational technique, as well as the expected accuracy and structural variability of a specific model. However, these aspects are crucial to understand the limitations of a model, and to decide which interpretations and conclusions can be supported. PMID:24010712
Modular techniques for dynamic fault-tree analysis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
Modeling paradigms for medical diagnostic decision support: a survey and future directions.
Wagholikar, Kavishwar B; Sundararajan, Vijayraghavan; Deshpande, Ashok W
2012-10-01
Use of computer-based decision tools to aid clinical decision making has been a primary goal of research in biomedical informatics. Research in the last five decades has led to the development of Medical Decision Support (MDS) applications using a variety of modeling techniques, for a diverse range of medical decision problems. This paper surveys literature on modeling techniques for diagnostic decision support, with a focus on decision accuracy. Trends and shortcomings of research in this area are discussed and future directions are provided. The authors suggest that: (i) improvement in the accuracy of MDS applications may be possible by modeling of vague and temporal data, research on inference algorithms, integration of patient information from diverse sources and improvement in gene profiling algorithms; (ii) MDS research would be facilitated by public release of de-identified medical datasets, and development of open-source data-mining toolkits; (iii) comparative evaluations of different modeling techniques are required to understand characteristics of the techniques, which can guide developers in the choice of technique for a particular medical decision problem; and (iv) evaluations of MDS applications in clinical settings are necessary to foster physicians' utilization of these decision aids.
Bosnic-Anticevich, Sinthia Z; Stuart, Meg; Mackson, Judith; Cvetkovski, Biljana; Sainsbury, Erica; Armour, Carol; Mavritsakis, Sofia; Mendrela, Gosia; Travers-Mason, Pippa; Williamson, Margaret
2014-04-07
Background: Inter-professional learning has been promoted as the solution to many clinical management issues. One such issue is the correct use of asthma inhaler devices. Up to 80% of people with asthma use their inhaler device incorrectly. The implications of this are poor asthma control and quality of life. Correct inhaler technique can be taught, however these educational instructions need to be repeated if correct technique is to be maintained. It is important to maximise the opportunities to deliver this education in primary care. In light of this, it is important to explore how health care providers, in particular pharmacists and general medical practitioners, can work together in delivering inhaler technique education to patients, over time. Therefore, there is a need to develop and evaluate effective inter-professional education, which will address the need to educate patients in the correct use of their inhalers as well as equip health care professionals with skills to engage in collaborative relationships with each other. Methods: This mixed methods study involves the development and evaluation of three modules of continuing education, Model 1, Model 2 and Model 3, with a fourth group, Model 4, acting as a control. Model 1 consists of face-to-face continuing professional education on asthma inhaler technique, aimed at pharmacists, general medical practitioners and their practice nurses. Model 2 is an electronic online continuing education module based on Model 1 principles. Model 3 is also based on asthma inhaler technique education but employs a learning intervention targeting health care professional relationships and is based on sociocultural theory. This study took the form of a parallel group, repeated measure design. Following the completion of continuing professional education, health care professionals recruited people with asthma and followed them up for 6 months. During this period, inhaler device technique training was delivered and data on patient inhaler technique, clinical and humanistic outcomes were collected. Outcomes related to professional collaborative relationships were also measured. Discussion: Challenges presented included the requirement of significant financial resources for development of study materials and limited availability of validated tools to measure health care professional collaboration over time. PMID:24708800
A comparison of algorithms for inference and learning in probabilistic graphical models.
Frey, Brendan J; Jojic, Nebojsa
2005-09-01
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
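Of the learning techniques listed above, the expectation maximization (EM) algorithm is compact enough to show in full. The following hedged sketch fits a two-component 1-D Gaussian mixture with EM; the synthetic data and starting values are arbitrary and stand in for the vision models discussed in the paper.

```python
# EM for a two-component 1-D Gaussian mixture: alternate E-step (responsibilities)
# and M-step (re-estimate weights, means, variances).
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.7, 200)])

pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
for _ in range(100):
    # E-step: responsibilities r[n, k] ∝ pi_k * N(x_n | mu_k, sigma_k^2)
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    r = pi * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed-form updates of the mixture parameters
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
print(pi, mu, sigma)   # should approach the generating weights, means and spreads
```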
Webster, Victoria A; Nieto, Santiago G; Grosberg, Anna; Akkus, Ozan; Chiel, Hillel J; Quinn, Roger D
2016-10-01
In this study, new techniques for approximating the contractile properties of cells in biohybrid devices using Finite Element Analysis (FEA) have been investigated. Many current techniques for modeling biohybrid devices use individual cell forces to simulate the cellular contraction. However, such techniques result in long simulation runtimes. In this study we investigated the effect of the use of thermal contraction on simulation runtime. The thermal contraction model was significantly faster than models using individual cell forces, making it beneficial for rapidly designing or optimizing devices. Three techniques, Stoney's Approximation, a Modified Stoney's Approximation, and a Thermostat Model, were explored for calibrating thermal expansion/contraction parameters (TECPs) needed to simulate cellular contraction using thermal contraction. The TECP values were calibrated by using published data on the deflections of muscular thin films (MTFs). Using these techniques, TECP values that suitably approximate experimental deflections can be determined by using experimental data obtained from cardiomyocyte MTFs. Furthermore, a sensitivity analysis was performed in order to investigate the contribution of individual variables, such as elastic modulus and layer thickness, to the final calibrated TECP for each calibration technique. Additionally, the TECP values are applicable to other types of biohybrid devices. Two non-MTF models were simulated based on devices reported in the existing literature. Copyright © 2016 Elsevier Ltd. All rights reserved.
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1977-01-01
Models, measures and techniques were developed for evaluating the effectiveness of aircraft computing systems. The concept of effectiveness involves aspects of system performance, reliability and worth. Specifically, a detailed model hierarchy was developed at the mission, functional task, and computational task levels. An appropriate class of stochastic models, which served as bottom-level models in the hierarchical scheme, was investigated. A unified measure of effectiveness called 'performability' was defined and formulated.
Full waveform inversion of combined towed streamer and limited OBS seismic data: a theoretical study
NASA Astrophysics Data System (ADS)
Yang, Huachen; Zhang, Jianzhong
2018-06-01
In marine seismic oil exploration, full waveform inversion (FWI) of towed-streamer data is used to reconstruct velocity models. However, the FWI of towed-streamer data easily converges to a local minimum solution due to the lack of low-frequency content. In this paper, we propose a new FWI technique using towed-streamer data, its integrated data sets and limited OBS data. Both integrated towed-streamer seismic data and OBS data have low-frequency components. Therefore, at early iterations in the new FWI technique, the OBS data combined with the integrated towed-streamer data sets reconstruct an appropriate background model, and the towed-streamer seismic data play a major role in later iterations to improve the resolution of the model. The new FWI technique is tested on numerical examples. The results show that when starting models are not accurate enough, the models inverted using the new FWI technique are superior to those inverted using conventional FWI.
Calibrating White Dwarf Asteroseismic Fitting Techniques
NASA Astrophysics Data System (ADS)
Castanheira, B. G.; Romero, A. D.; Bischoff-Kim, A.
2017-03-01
The main goal of looking for intrinsic variability in stars is the unique opportunity to study their internal structure. Once we have extracted independent modes from the data, it appears to be a simple matter of comparing the period spectrum with those from theoretical model grids to learn the inner structure of that star. However, asteroseismology is much more complicated than this simple description. We must account not only for observational uncertainties in period determination, but most importantly for the limitations of the model grids, coming from the uncertainties in the constitutive physics, and of the fitting techniques. In this work, we will discuss results of numerical experiments where we used different independently calculated model grids (white dwarf cooling models WDEC and fully evolutionary LPCODE-PUL) and fitting techniques to fit synthetic stars. The advantage of using synthetic stars is that we know the details of their interior structure so we can assess how well our models and fitting techniques are able to recover the interior structure, as well as the stellar parameters.
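A hedged sketch of the period-fitting step described above: each model in a small synthetic grid (standing in for WDEC or LPCODE-PUL output) is scored by a chi-square comparison of observed and model periods; all numbers below are illustrative assumptions, not real data or grid values.

```python
# Score each grid model by chi-square agreement between observed and model periods.
import numpy as np

observed = np.array([215.2, 271.0, 304.5])     # observed pulsation periods, s (illustrative)
sigma = 0.5                                    # assumed period uncertainty, s

# toy grid: parameters (Teff, mass) and the corresponding model periods
grid_params = [(11000, 0.60), (11500, 0.60), (12000, 0.65), (11800, 0.58)]
grid_periods = np.array([
    [214.0, 270.1, 306.0],
    [215.0, 271.3, 304.9],
    [217.5, 269.0, 302.2],
    [215.4, 270.8, 304.6],
])

chi2 = (((grid_periods - observed) / sigma) ** 2).sum(axis=1)
best = int(np.argmin(chi2))
print(grid_params[best], chi2[best])           # best-fitting model and its quality of fit
```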
Model based Computerized Ionospheric Tomography in space and time
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Orhan; Arikan, Feza
2018-04-01
Reconstruction of the ionospheric electron density distribution in space and time not only provides a basis for better understanding the physical nature of the ionosphere, but also provides improvements in various applications, including HF communication. The recently developed IONOLAB-CIT technique provides a physically admissible 3D model of the ionosphere by using both Slant Total Electron Content (STEC) measurements obtained from a GPS satellite-receiver network and the IRI-Plas model. The IONOLAB-CIT technique optimizes IRI-Plas model parameters in the region of interest such that the synthetic STEC computations obtained from the IRI-Plas model are in accordance with the actual STEC measurements. In this work, the IONOLAB-CIT technique is extended to provide reconstructions both in space and time. This extension exploits the temporal continuity of the ionosphere to provide more reliable reconstructions with a reduced computational load. The proposed 4D-IONOLAB-CIT technique is validated on real measurement data obtained from the TNPGN-Active GPS receiver network in Turkey.
Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models
NASA Astrophysics Data System (ADS)
Altuntas, Alper; Baugh, John
2017-07-01
Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
NASA Astrophysics Data System (ADS)
Mohd Yunos, Zuriahati; Shamsuddin, Siti Mariyam; Ismail, Noriszura; Sallehuddin, Roselina
2013-04-01
An artificial neural network (ANN) with the back propagation (BP) algorithm and ANFIS were chosen as alternative techniques for modeling motor insurance claims. In particular, the ANN and ANFIS techniques are applied to model and forecast Malaysian motor insurance data, which are categorized into four claim types: third party property damage (TPPD), third party bodily injury (TPBI), own damage (OD) and theft. The aim of this study is to determine whether an ANN and an ANFIS model are capable of accurately predicting motor insurance claims. Changes were made to the network structure: the number of input nodes, the number of hidden nodes and the pre-processing techniques were examined, and a cross-validation technique was used to improve the generalization ability of the ANN and ANFIS models. Based on the empirical studies, the prediction performance of the ANN and ANFIS models is improved by using different numbers of input nodes and hidden nodes, and also various sizes of data. The experimental results reveal that the ANFIS model outperformed the ANN model. Both models are capable of producing reliable predictions for the Malaysian motor insurance claims; hence, the proposed method can be applied as an alternative for predicting claim frequency and claim severity.
Modeling and managing risk early in software development
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Thomas, William M.; Hetmanski, Christopher J.
1993-01-01
In order to improve the quality of the software development process, we need to be able to build empirical multivariate models based on data collectable early in the software process. These models need to be both useful for prediction and easy to interpret, so that remedial actions may be taken in order to control and optimize the development process. We present an automated modeling technique which can be used as an alternative to regression techniques. We show how it can be used to facilitate the identification and aid the interpretation of the significant trends which characterize 'high risk' components in several Ada systems. Finally, we evaluate the effectiveness of our technique based on a comparison with logistic regression based models.
NASA Technical Reports Server (NTRS)
McIlraith, Sheila; Biswas, Gautam; Clancy, Dan; Gupta, Vineet
2005-01-01
This paper reports on an ongoing project to investigate techniques to diagnose complex dynamical systems that are modeled as hybrid systems. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial or full failure of component devices. We cast the diagnosis problem as a model selection problem. To reduce the space of potential models under consideration, we exploit techniques from qualitative reasoning to conjecture an initial set of qualitative candidate diagnoses, which induce a smaller set of models. We refine these diagnoses using parameter estimation and model fitting techniques. As a motivating case study, we have examined the problem of diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
NASA Astrophysics Data System (ADS)
Shephard, Adam M.; Thomas, Benjamin R.; Coble, Jamie B.; Wood, Houston G.
2018-05-01
This paper presents a development related to the use of minor isotope safeguards techniques (MIST) and the MSTAR cascade model as it relates to the application of international nuclear safeguards at gas centrifuge enrichment plants (GCEPs). The product of this paper is a derivation of the universal and dimensionless MSTAR cascade model. The new model can be used to calculate the minor uranium isotope concentrations in GCEP product and tails streams or to analyze, visualize, and interpret GCEP process data as part of MIST. Applications of the new model include the detection of undeclared feed and withdrawal streams at GCEPs when used in conjunction with UF6 sampling and/or other isotopic measurement techniques.
Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models
NASA Astrophysics Data System (ADS)
Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto
In this paper, we propose a set of diagrams to visualize software process reference models (PRM). The diagrams, called dimods, are the combination of some visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The result of the evaluation shows that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both, software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.
HSR Model Deformation Measurements from Subsonic to Supersonic Speeds
NASA Technical Reports Server (NTRS)
Burner, A. W.; Erickson, G. E.; Goodman, W. L.; Fleming, G. A.
1999-01-01
This paper describes the video model deformation technique (VMD) used at five NASA facilities and the projection moire interferometry (PMI) technique used at two NASA facilities. Comparisons between the two techniques for model deformation measurements are provided. Facilities at NASA-Ames and NASA-Langley where deformation measurements have been made are presented. Examples of HSR model deformation measurements from the Langley Unitary Wind Tunnel, Langley 16-foot Transonic Wind Tunnel, and the Ames 12-foot Pressure Tunnel are presented. A study to improve and develop new targeting schemes at the National Transonic Facility is also described. The consideration of milled targets for future HSR models is recommended when deformation measurements are expected to be required. Finally, future development work for VMD and PMI is addressed.
Finite element model correlation of a composite UAV wing using modal frequencies
NASA Astrophysics Data System (ADS)
Oliver, Joseph A.; Kosmatka, John B.; Hemez, François M.; Farrar, Charles R.
2007-04-01
The current work details the implementation of a meta-model based correlation technique on a composite UAV wing test piece and associated finite element (FE) model. This method involves training polynomial models to emulate the FE input-output behavior and then using numerical optimization to produce a set of correlated parameters which can be returned to the FE model. After discussions about the practical implementation, the technique is validated on a composite plate structure and then applied to the UAV wing structure, where it is furthermore compared to a more traditional Newton-Raphson technique which iteratively uses first-order Taylor-series sensitivity. The experimental testpiece wing comprises two graphite/epoxy prepreg and Nomex honeycomb co-cured skins and two prepreg spars bonded together in a secondary process. MSC.Nastran FE models of the four structural components are correlated independently, using modal frequencies as correlation features, before being joined together into the assembled structure and compared to experimentally measured frequencies from the assembled wing in a cantilever configuration. Results show that significant improvements can be made to the assembled model fidelity, with the meta-model procedure producing slightly superior results to Newton-Raphson iteration. Final evaluation of component correlation using the assembled wing comparison showed worse results for each correlation technique, with the meta-model technique worse overall. This can most likely be attributed to difficulty in correlating the open-section spars; however, there is also some question about non-unique update variable combinations in the current configuration, which lead the correlation away from physically probable values.
NASA Technical Reports Server (NTRS)
Migneault, Gerard E.
1987-01-01
Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1982-01-01
The status of the initial testing of the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is reported. The modeling technique utilizes a linear current element representation of the large scale space-current system.
Post-Modeling Histogram Matching of Maps Produced Using Regression Trees
Andrew J. Lister; Tonya W. Lister
2006-01-01
Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...
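A hedged sketch of the histogram-matching post-processing step: predicted values are remapped so that their empirical distribution matches a reference (observed) distribution. The gamma-distributed "observed" values and the variance-truncated "predicted" values below are assumptions for illustration only.

```python
# Post-modeling histogram matching: map predictions onto the reference distribution.
import numpy as np

def histogram_match(predicted, reference):
    """Remap predicted values so their empirical CDF matches the reference distribution."""
    ranks = np.argsort(np.argsort(predicted))        # rank of each prediction
    quantiles = (ranks + 0.5) / len(predicted)       # empirical CDF positions
    return np.quantile(reference, quantiles)         # reference values at those quantiles

rng = np.random.default_rng(0)
observed = rng.gamma(2.0, 30.0, 5000)                              # assumed plot-level values
predicted = observed.mean() + 0.3 * (observed - observed.mean())  # variance-truncated predictions
matched = histogram_match(predicted, observed)
print(predicted.std(), matched.std(), observed.std())  # matched spread recovers the observed spread
```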
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1978-01-01
Progress in the development of system models and techniques for the formulation and evaluation of aircraft computer system effectiveness is reported. Topics covered include: analysis of functional dependence: a prototype software package, METAPHOR, developed to aid the evaluation of performability; and a comprehensive performability modeling and evaluation exercise involving the SIFT computer.
Equivalent reduced model technique development for nonlinear system dynamic response
NASA Astrophysics Data System (ADS)
Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet
2013-04-01
The dynamic response of structural systems commonly involves nonlinear effects. Oftentimes, structural systems are made up of several components, whose individual behavior is essentially linear compared to the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the System Equivalent Reduction Expansion Process (SEREP). Extremely reduced order models that provide dynamic characteristics of linear components, which are interconnected with highly nonlinear connection elements, are formulated with SEREP for the dynamic response evaluation using direct integration techniques. The full-space solution will be compared to the response obtained using drastically reduced models to make evident the usefulness of the technique for a variety of analytical cases.
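A hedged sketch of the SEREP projection at the core of ERMT: full-model mass and stiffness are reduced to a handful of retained DOFs using the lowest mode shapes. The toy matrices, mode count, and active DOF set are assumptions, and the reduced model reproduces the retained natural frequencies.

```python
# SEREP reduction: T = Phi_m @ pinv(Phi_m[active, :]); M_r = T' M T, K_r = T' K T.
import numpy as np
from scipy.linalg import eigh

n, m = 12, 4                               # full DOFs and number of retained modes (toy sizes)
rng = np.random.default_rng(3)
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)   # SPD stiffness
M = np.eye(n)                                                   # unit mass for simplicity

w2, Phi = eigh(K, M)                       # full eigen-solution (mass-normalized modes)
Phi_m = Phi[:, :m]                         # keep the lowest m mode shapes
active = [0, 3, 7, 11]                     # DOFs retained in the reduced model (e.g. connections)

T = Phi_m @ np.linalg.pinv(Phi_m[active, :])   # SEREP transformation (n x len(active))
M_r = T.T @ M @ T                               # reduced mass
K_r = T.T @ K @ T                               # reduced stiffness
w2_r, _ = eigh(K_r, M_r)
print(np.sqrt(w2[:m]), np.sqrt(w2_r))           # reduced model reproduces the retained modes
```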
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. Due to this, we conclude that alternative modeling strategies and more advanced estimation techniques should be considered for future work.
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
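As a hedged illustration of the probabilistic idea (plain Monte Carlo sampling here, not the fast probabilistic methods or the SPACE model referenced above), the sketch below propagates input uncertainties through a deliberately simplified, assumed solar-array power expression and reports a spread rather than a single deterministic value.

```python
# Monte Carlo uncertainty propagation through a simplified, assumed power expression.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
area = rng.normal(300.0, 5.0, n)          # array area, m^2 (assumed mean and sigma)
efficiency = rng.normal(0.14, 0.01, n)    # cell efficiency (assumed)
degradation = rng.uniform(0.85, 1.0, n)   # lifetime degradation factor (assumed)
insolation = 1367.0                       # solar constant, W/m^2

power_kw = area * efficiency * degradation * insolation / 1000.0
print(power_kw.mean(), np.percentile(power_kw, [5, 50, 95]))   # capability with uncertainty bands
```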
Cancer drug discovery: recent innovative approaches to tumor modeling.
Lovitt, Carrie J; Shelper, Todd B; Avery, Vicky M
2016-09-01
Cell culture models have been at the heart of anti-cancer drug discovery programs for over half a century. Advancements in cell culture techniques have seen the rapid evolution of more complex in vitro cell culture models investigated for use in drug discovery. Three-dimensional (3D) cell culture research has become a strong focal point, as this technique permits the recapitulation of the tumor microenvironment. Biologically relevant 3D cellular models have demonstrated significant promise in advancing cancer drug discovery, and will continue to play an increasing role in the future. In this review, recent advances in 3D cell culture techniques and their application in tumor modeling and anti-cancer drug discovery programs are discussed. The topics include selection of cancer cells, 3D cell culture assays (associated endpoint measurements and analysis), 3D microfluidic systems and 3D bio-printing. Although advanced cancer cell culture models and techniques are becoming commonplace in many research groups, the use of these approaches has yet to be fully embraced in anti-cancer drug applications. Furthermore, limitations associated with analyzing information-rich biological data remain unaddressed.
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2016-05-01
Quality control is critical to manufacturing. Frequently, techniques are used to define object conformity bounds, based on historical quality data. This paper considers techniques for bespoke and small batch jobs that are not statistical model based. These techniques also serve jobs where 100% validation is needed due to the mission or safety critical nature of particular parts. One issue with this type of system is alignment discrepancies between the generated model and the physical part. This paper discusses and evaluates techniques for characterizing and correcting alignment issues between the projected and perceived data sets to prevent errors attributable to misalignment.
Model reduction of the numerical analysis of Low Impact Developments techniques
NASA Astrophysics Data System (ADS)
Brunetti, Giuseppe; Šimůnek, Jirka; Wöhling, Thomas; Piro, Patrizia
2017-04-01
Mechanistic models have proven to be accurate and reliable tools for the numerical analysis of the hydrological behavior of Low Impact Development (LID) techniques. However, their widespread adoption is limited by their complexity and computational cost. Recent studies have tried to address this issue by investigating the application of new techniques, such as surrogate-based modeling. However, current results are still limited and fragmented. One such approach, the Model Order Reduction (MOR) technique, can represent a valuable tool for reducing the computational complexity of numerical problems by computing an approximation of the original model. While this technique has been extensively used in water-related problems, no studies have evaluated its use in LID modeling. Thus, the main aim of this study is to apply the MOR technique for the development of a reduced order model (ROM) for the numerical analysis of the hydrologic behavior of LIDs, in particular green roofs. The model should be able to correctly reproduce all the hydrological processes of a green roof while reducing the computational cost. The proposed model decouples the subsurface water dynamics of a green roof into a) one-dimensional (1D) vertical flow through the green roof itself and b) one-dimensional saturated lateral flow along the impervious rooftop. The green roof is horizontally discretized into N elements. Each element represents a vertical domain, which can have different properties or boundary conditions. The 1D Richards equation is used to simulate flow in the substrate and drainage layers. Simulated outflow from the vertical domain is used as a recharge term for saturated lateral flow, which is described using the kinematic wave approximation of the Boussinesq equation. The proposed model has been compared with the mechanistic model HYDRUS-2D, which numerically solves the Richards equation for the whole domain. The HYDRUS-1D code has been used for the description of vertical flow, while a Finite Volume Scheme has been adopted for lateral flow. Two scenarios involving flat and steep green roofs were analyzed. Results confirmed the accuracy of the reduced order model, which was able to reproduce both subsurface outflow and the moisture distribution in the green roof, significantly reducing the computational cost.
NASA Technical Reports Server (NTRS)
Schmidt, Gordon S.; Mueller, Thomas J.
1987-01-01
The use of flow visualization to study separation bubbles is evaluated. The wind tunnel, two NACA 66(3)-018 airfoil models, and kerosene vapor, titanium tetrachloride, and surface flow visualizations techniques are described. The application of the three visualization techniques to the two airfoil models reveals that the smoke and vapor techniques provide data on the location of laminar separation and the onset of transition, and the surface method produces information about the location of turbulent boundary layer separation. The data obtained with the three flow visualization techniques are compared to pressure distribution data and good correlation is detected. It is noted that flow visualization is an effective technique for examining separation bubbles.
Rocket engine diagnostics using qualitative modeling techniques
NASA Technical Reports Server (NTRS)
Binder, Michael; Maul, William; Meyer, Claudia; Sovie, Amy
1992-01-01
Researchers at NASA Lewis Research Center are presently developing qualitative modeling techniques for automated rocket engine diagnostics. A qualitative model of a turbopump interpropellant seal system has been created. The qualitative model describes the effects of seal failures on the system steady-state behavior. This model is able to diagnose the failure of particular seals in the system based on anomalous temperature and pressure values. The anomalous values input to the qualitative model are generated using numerical simulations. Diagnostic test cases include both single and multiple seal failures.
Murray, Louise; Mason, Joshua; Henry, Ann M; Hoskin, Peter; Siebert, Frank-Andre; Venselaar, Jack; Bownes, Peter
2016-08-01
To estimate the risks of radiation-induced rectal and bladder cancers following low dose rate (LDR) and high dose rate (HDR) brachytherapy as monotherapy for localised prostate cancer and compare to external beam radiotherapy techniques. LDR and HDR brachytherapy monotherapy plans were generated for three prostate CT datasets. Second cancer risks were assessed using Schneider's concept of organ equivalent dose. LDR risks were assessed according to a mechanistic model and a bell-shaped model. HDR risks were assessed according to a bell-shaped model. Relative risks and excess absolute risks were estimated and compared to external beam techniques. Excess absolute risks of second rectal or bladder cancer were low for both LDR (irrespective of the model used for calculation) and HDR techniques. Average excess absolute risks of second rectal and bladder cancer for LDR brachytherapy according to the mechanistic model were 0.71 and 0.84 per 10,000 person-years (PY) respectively, and according to the bell-shaped model were 0.47 and 0.78 per 10,000 PY respectively. For HDR, the average excess absolute risks for second rectal and bladder cancers were 0.74 and 1.62 per 10,000 PY respectively. The absolute differences between techniques were very low and clinically irrelevant. Compared to external beam prostate radiotherapy techniques, LDR and HDR brachytherapy resulted in the lowest risks of second rectal and bladder cancer. This study shows both LDR and HDR brachytherapy monotherapy result in low estimated risks of radiation-induced rectal and bladder cancer. LDR resulted in lower bladder cancer risks than HDR, and lower or similar risks of rectal cancer. In absolute terms these differences between techniques were very small. Compared to external beam techniques, second rectal and bladder cancer risks were lowest for brachytherapy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Control Performance, Aerodynamic Modeling, and Validation of Coupled Simulation Techniques for Guided Projectile Roll Dynamics
Sahu, Jubaraj; Fresconi, Frank; Heavey, Karen R.; Weapons and Materials Research
2014-11-01
...has been explored in depth in the literature. Of particular interest for this study are investigations into roll control...
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
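As a rough illustration of the path-based reliability computation described above (a stack-driven traversal of a component probabilistic dependency graph), the following sketch assumes a small hand-made acyclic graph; the node names, reliabilities, and transition probabilities are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch: path-based reliability over a component probabilistic
# dependency graph (CPDG), evaluated with an explicit stack (DFS).
# Node reliabilities and transition probabilities are made-up values.

reliability = {"Start": 1.0, "Sensor": 0.99, "Planner": 0.97, "Motor": 0.98, "End": 1.0}
edges = {                      # node -> list of (successor, transition probability)
    "Start": [("Sensor", 1.0)],
    "Sensor": [("Planner", 0.7), ("Motor", 0.3)],
    "Planner": [("Motor", 1.0)],
    "Motor": [("End", 1.0)],
    "End": [],
}

def system_reliability(start="Start", end="End"):
    total = 0.0
    stack = [(start, reliability[start])]   # (current node, accumulated path product)
    while stack:
        node, prob = stack.pop()
        if node == end:
            total += prob
            continue
        for succ, p in edges[node]:
            stack.append((succ, prob * p * reliability[succ]))
    return total

print(round(system_reliability(), 4))
```

Note that this toy traversal only handles acyclic graphs; the paper's treatment of loop entry and exit points would require additional bookkeeping.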
Applied Algebra: The Modeling Technique of Least Squares
ERIC Educational Resources Information Center
Zelkowski, Jeremy; Mayes, Robert
2008-01-01
The article focuses on engaging students in algebra through modeling real-world problems. The technique of least squares is explored, encouraging students to develop a deeper understanding of the method. (Contains 2 figures and a bibliography.)
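A minimal worked example of the least squares technique, fitting a straight line to made-up data with NumPy:

```python
import numpy as np

# Made-up (x, y) observations; fit y = a*x + b by ordinary least squares.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

A = np.column_stack([x, np.ones_like(x)])       # design matrix [x, 1]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # minimizes ||A[a, b] - y||^2
print(f"slope={a:.3f}, intercept={b:.3f}")
```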
Machine learning modelling for predicting soil liquefaction susceptibility
NASA Astrophysics Data System (ADS)
Samui, P.; Sitharam, T. G.
2011-01-01
This study describes two machine learning techniques applied to predict liquefaction susceptibility of soil based on the standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first technique uses an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second uses the Support Vector Machine (SVM), a classification technique firmly grounded in statistical learning theory. ANN and SVM models have been developed to predict liquefaction susceptibility using corrected SPT blow count [(N1)60] and cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models, requiring only two parameters [(N1)60 and peak ground acceleration (amax/g)], for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the capability of the SVM over the ANN models.
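The two model families referenced above can be sketched on synthetic (N1)60 and CSR inputs with scikit-learn; the data, the toy labeling rule, and the solver (scikit-learn's MLP uses gradient-based training rather than Levenberg-Marquardt) are assumptions for illustration, not the study's dataset or settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
n1_60 = rng.uniform(2, 40, n)                 # corrected SPT blow count (synthetic)
csr = rng.uniform(0.05, 0.5, n)               # cyclic stress ratio (synthetic)
X = np.column_stack([n1_60, csr])
y = (csr > 0.01 * n1_60 + 0.1).astype(int)    # toy liquefaction rule, not a real criterion

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(Xtr, ytr)
svm = SVC(kernel="rbf").fit(Xtr, ytr)
print("ANN accuracy:", ann.score(Xte, yte))
print("SVM accuracy:", svm.score(Xte, yte))
```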
An assessment of finite-element modeling techniques for thick-solid/thin-shell joints analysis
NASA Technical Reports Server (NTRS)
Min, J. B.; Androlake, S. G.
1993-01-01
The subject of finite-element modeling has long been of critical importance to the practicing designer/analyst, who is often faced with obtaining an accurate and cost-effective structural analysis of a particular design. Typically, these two goals are in conflict. The purpose here is to discuss finite-element modeling for solid/shell connections (joints), which are significant for the practicing modeler. Several approaches are currently in use, but various assumptions frequently restrict their application. Techniques currently used in practical applications were tested, especially to determine which is best suited for the computer-aided design (CAD) environment. Some basic thoughts regarding each technique are also discussed. As a consequence, suggestions based on the results are given for obtaining reliable results in geometrically complex joints where the deformation and stress behavior are complicated.
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
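The recursive least square estimator referenced above can be sketched generically as follows; the regressors, noise level, forgetting factor, and "true" parameters are synthetic stand-ins, not the rotor-disk-bearing model or SEREP-reduced states from the paper.

```python
import numpy as np

# Generic recursive least squares (RLS) sketch with a forgetting factor.
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])           # e.g., two unbalance force components (assumed)
lam = 0.98                                    # forgetting factor
theta = np.zeros(2)
P = np.eye(2) * 1e3

for k in range(500):
    phi = rng.normal(size=2)                  # regressor at step k (synthetic)
    y = phi @ theta_true + 0.05 * rng.normal()
    K = P @ phi / (lam + phi @ P @ phi)       # gain
    theta = theta + K * (y - phi @ theta)     # parameter update
    P = (P - np.outer(K, phi @ P)) / lam      # covariance update

print(theta)   # should approach theta_true
```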
Xu, Daolin; Lu, Fangfang
2006-12-01
We address the problem of reconstructing a set of nonlinear differential equations from chaotic time series. A method that combines implicit Adams integration and the structure-selection technique of an error reduction ratio is proposed for system identification and corresponding parameter estimation of the model. The structure-selection technique identifies the significant terms from a pool of candidate functional basis terms and determines the optimal model through orthogonal characteristics on data. Combined with the Adams integration algorithm, the technique makes the reconstruction applicable to data sampled at large time intervals. Numerical experiments on the Lorenz and Rossler systems show that the proposed strategy is effective in global vector field reconstruction from noisy time series.
Jitter model and signal processing techniques for pulse width modulation optical recording
NASA Technical Reports Server (NTRS)
Liu, Max M.-K.
1991-01-01
A jitter model and signal processing techniques are discussed for data recovery in Pulse Width Modulation (PWM) optical recording. In PWM, information is stored by modulating the sizes of sequential marks alternating in magnetic polarization or in material structure. Jitter, defined as the deviation from the original mark size in the time domain, will result in detection errors if it is excessively large. A new approach is taken in data recovery by first using a high speed counter clock to convert time marks to amplitude marks, and then applying signal processing techniques to minimize jitter according to the jitter model. The signal processing techniques include motor speed and intersymbol interference equalization, differential and additive detection, and differential and additive modulation.
NASA Astrophysics Data System (ADS)
McCray, Wilmon Wil L., Jr.
The research was prompted by a need to assess the process improvement, quality management, and analytical techniques taught to undergraduate and graduate students in U.S. systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods, and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, and process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The research study identifies and provides a detailed discussion of the gap analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis which identifies the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization model and dashboard that demonstrates the use of statistical methods, statistical process control, sensitivity analysis, quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
Spatiotemporal stochastic models for earth science and engineering applications
NASA Astrophysics Data System (ADS)
Luo, Xiaochun
1998-12-01
Spatiotemporal processes occur in many areas of earth sciences and engineering. However, most of the available theoretical tools and techniques of space-time data processing have been designed to operate exclusively in time or in space, and the importance of spatiotemporal variability was not fully appreciated until recently. To address this problem, a systematic framework of spatiotemporal random field (S/TRF) models for geoscience/engineering applications is presented and developed in this thesis. Space-time continuity characterization is one of the most important aspects of S/TRF modelling, where the space-time continuity is displayed with experimental spatiotemporal variograms, summarized in terms of space-time continuity hypotheses, and modelled using spatiotemporal variogram functions. Permissible spatiotemporal covariance/variogram models are addressed through permissibility criteria appropriate to spatiotemporal processes. The estimation of spatiotemporal processes is developed in terms of spatiotemporal kriging techniques. Particular emphasis is given to the singularity analysis of spatiotemporal kriging systems. The impacts of covariance functions, trend forms, and data configurations on the singularity of spatiotemporal kriging systems are discussed. In addition, the tensorial invariance of universal spatiotemporal kriging systems is investigated in terms of the space-time trend. The conditional simulation of spatiotemporal processes is proposed with the development of the sequential group Gaussian simulation technique (SGGS), which is actually a series of sequential simulation algorithms associated with different group sizes. The simulation error is analyzed with different covariance models and simulation grids. A simulated annealing technique honoring experimental variograms is also proposed, providing a way of conditional simulation without the covariance model fitting that is prerequisite for most simulation algorithms. The proposed techniques were first applied for modelling of the pressure system in a carbonate reservoir, and then applied for modelling of spring water contents in the Dyle watershed. The results of these case studies as well as the theory suggest that these techniques are realistic and feasible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, O.
The development of the Zeeman–Doppler Imaging (ZDI) technique has provided synoptic observations of surface magnetic fields of low-mass stars. This led the stellar astrophysics community to adopt modeling techniques that have been used in solar physics using solar magnetograms. However, many of these techniques have been neglected by the solar community due to their failure to reproduce solar observations. Nevertheless, some of these techniques are still used to simulate the coronae and winds of solar analogs. Here we present a comparative study between two MHD models for the solar corona and solar wind. The first type of model is a polytropic wind model, and the second is the physics-based AWSOM model. We show that while the AWSOM model consistently reproduces many solar observations, the polytropic model fails to reproduce many of them, and in the cases where it does, its solutions are unphysical. Our recommendation is that polytropic models, which are used to estimate mass-loss rates and other parameters of solar analogs, must first be calibrated with solar observations. Alternatively, these models can be calibrated with models that capture more detailed physics of the solar corona (such as the AWSOM model) and that can reproduce solar observations in a consistent manner. Without such a calibration, the results of the polytropic models cannot be validated, but they can be wrongly used by others.
Ab Initio Studies of Shock-Induced Chemical Reactions of Inter-Metallics
NASA Astrophysics Data System (ADS)
Zaharieva, Roussislava; Hanagud, Sathya
2009-06-01
Shock-induced and shock-assisted chemical reactions of intermetallic mixtures are studied by many researchers, using both experimental and theoretical techniques. The theoretical studies are primarily at continuum scales. The model frameworks include mixture theories and meso-scale models of grains of porous mixtures. The reaction models vary from equilibrium thermodynamic models to several non-equilibrium thermodynamic models. The shock effects are primarily studied using appropriate conservation equations and numerical techniques to integrate the equations. All these models require material constants from experiments and estimates of transition states. Thus, the objective of this paper is to present studies based on ab initio techniques. The ab initio studies, to date, use ab initio molecular dynamics. This paper presents a study that uses shock pressures and associated temperatures as starting variables. The intermetallic mixtures are modeled as slabs. The required shock stresses are created by straining the lattice. Then, ab initio binding energy calculations are used to examine the stability of the reactions. Binding energies are obtained for different strain components superimposed on uniform compression and finite temperatures. Then, vibrational frequencies and nudged elastic band techniques are used to study reactivity and transition states. Examples include Ni and Al.
Computerized technique for recording board defect data
R. Bruce Anderson; R. Edward Thomas; Charles J. Gatchell; Neal D. Bennett
1993-01-01
A computerized technique for recording board defect data has been developed that is faster and more accurate than manual techniques. The lumber database generated by this technique is a necessary input to computer simulation models that estimate potential cutting yields from various lumber breakdown sequences. The technique allows collection of detailed information...
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow band notch filters. In order to obtain the required accuracy, a maximum likelihood estimation technique is employed to improve the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
Modeling 3-D objects with planar surfaces for prediction of electromagnetic scattering
NASA Technical Reports Server (NTRS)
Koch, M. B.; Beck, F. B.; Cockrell, C. R.
1992-01-01
Electromagnetic scattering analysis of objects at resonance is difficult because low frequency techniques are slow and computer intensive, and high frequency techniques may not be reliable. A new technique for predicting the electromagnetic backscatter from electrically conducting objects at resonance is studied. This technique is based on modeling three dimensional objects as a combination of flat plates where some of the plates are blocking the scattering from others. A cube is analyzed as a simple example. The preliminary results compare well with the Geometrical Theory of Diffraction and with measured data.
Integrated geostatistics for modeling fluid contacts and shales in Prudhoe Bay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, G.; Chopra, A.K.; Severson, C.D.
1997-12-01
Geostatistics techniques are being used increasingly to model reservoir heterogeneity at a wide range of scales. A variety of techniques is now available with differing underlying assumptions, complexity, and applications. This paper introduces a novel method of geostatistics to model dynamic gas-oil contacts and shales in the Prudhoe Bay reservoir. The method integrates reservoir description and surveillance data within the same geostatistical framework. Surveillance logs and shale data are transformed to indicator variables. These variables are used to evaluate vertical and horizontal spatial correlation and cross-correlation of gas and shale at different times and to develop variogram models. Conditional simulation techniques are used to generate multiple three-dimensional (3D) descriptions of gas and shales that provide a measure of uncertainty. These techniques capture the complex 3D distribution of gas-oil contacts through time. The authors compare results of the geostatistical method with conventional techniques as well as with infill wells drilled after the study. Predicted gas-oil contacts and shale distributions are in close agreement with gas-oil contacts observed at infill wells.
Building Energy Modeling and Control Methods for Optimization and Renewables Integration
NASA Astrophysics Data System (ADS)
Burger, Eric M.
This dissertation presents techniques for the numerical modeling and control of building systems, with an emphasis on thermostatically controlled loads. The primary objective of this work is to address technical challenges related to the management of energy use in commercial and residential buildings. This work is motivated by the need to enhance the performance of building systems and by the potential for aggregated loads to perform load following and regulation ancillary services, thereby enabling the further adoption of intermittent renewable energy generation technologies. To increase the generalizability of the techniques, an emphasis is placed on recursive and adaptive methods which minimize the need for customization to specific buildings and applications. The techniques presented in this dissertation can be divided into two general categories: modeling and control. Modeling techniques encompass the processing of data streams from sensors and the training of numerical models. These models enable us to predict the energy use of a building and of sub-systems, such as a heating, ventilation, and air conditioning (HVAC) unit. Specifically, we first present an ensemble learning method for the short-term forecasting of total electricity demand in buildings. As the deployment of intermittent renewable energy resources continues to rise, the generation of accurate building-level electricity demand forecasts will be valuable to both grid operators and building energy management systems. Second, we present a recursive parameter estimation technique for identifying a thermostatically controlled load (TCL) model that is non-linear in the parameters. For TCLs to perform demand response services in real-time markets, online methods for parameter estimation are needed. Third, we develop a piecewise linear thermal model of a residential building and train the model using data collected from a custom-built thermostat. This model is capable of approximating unmodeled dynamics within a building by learning from sensor data. Control techniques encompass the application of optimal control theory, model predictive control, and convex distributed optimization to TCLs. First, we present the alternative control trajectory (ACT) representation, a novel method for the approximate optimization of non-convex discrete systems. This approach enables the optimal control of a population of non-convex agents using distributed convex optimization techniques. Second, we present a distributed convex optimization algorithm for the control of a TCL population. Experimental results demonstrate the application of this algorithm to the problem of renewable energy generation following. This dissertation contributes to the development of intelligent energy management systems for buildings by presenting a suite of novel and adaptable modeling and control techniques. Applications focus on optimizing the performance of building operations and on facilitating the integration of renewable energy resources.
A VAS-numerical model impact study using the Gal-Chen variational approach
NASA Technical Reports Server (NTRS)
Aune, Robert M.; Tuccillo, James J.; Uccellini, Louis W.; Petersen, Ralph A.
1987-01-01
A numerical study based on the use of a variational assimilation technique of Gal-Chen (1983, 1986) was conducted to assess the impact of incorporating temperature data from the VISSR Atmospheric Sounder (VAS) into a regional-scale numerical model. A comparison with the results of a control forecast using only conventional data indicated that the assimilation technique successfully combines actual VAS temperature observations with the dynamically balanced model fields without destabilizing the model during the assimilation cycle. Moreover, increasing the temporal frequency of VAS temperature insertions during the assimilation cycle was shown to enhance the impact on the model forecast through successively longer forecast periods. The incorporation of a nudging technique, whereby the model temperature field is constrained toward the VAS 'updated' values during the assimilation cycle, further enhances the impact of the VAS temperature data.
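The nudging step can be illustrated with a minimal relaxation update toward an observed value; the nudging coefficient, time step, and temperatures below are assumed for illustration and are not the study's settings.

```python
# Minimal sketch of Newtonian "nudging": the model temperature is relaxed
# toward the observation-updated value during the assimilation cycle,
# dT/dt = f(T) + G * (T_obs - T), with G an assumed nudging coefficient.
def nudge_step(T_model, T_obs, dt, tendency, G=1.0e-4):
    """Advance one time step with the physical tendency plus the nudging term."""
    return T_model + dt * (tendency + G * (T_obs - T_model))

T = 285.0               # model temperature (K), illustrative
T_vas = 287.5           # VAS 'updated' temperature (K), illustrative
for _ in range(360):    # one hour of 10 s steps, physical tendency set to zero here
    T = nudge_step(T, T_vas, dt=10.0, tendency=0.0)
print(round(T, 2))      # drawn partway toward the observed value
```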
Software Safety Analysis of a Flight Guidance System
NASA Technical Reports Server (NTRS)
Butler, Ricky W. (Technical Monitor); Tribble, Alan C.; Miller, Steven P.; Lempia, David L.
2004-01-01
This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique model checking that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
Efficient Global Aerodynamic Modeling from Flight Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2012-01-01
A method for identifying global aerodynamic models from flight data in an efficient manner is explained and demonstrated. A novel experiment design technique was used to obtain dynamic flight data over a range of flight conditions with a single flight maneuver. Multivariate polynomials and polynomial splines were used with orthogonalization techniques and statistical modeling metrics to synthesize global nonlinear aerodynamic models directly and completely from flight data alone. Simulation data and flight data from a subscale twin-engine jet transport aircraft were used to demonstrate the techniques. Results showed that global multivariate nonlinear aerodynamic dependencies could be accurately identified using flight data from a single maneuver. Flight-derived global aerodynamic model structures, model parameter estimates, and associated uncertainties were provided for all six nondimensional force and moment coefficients for the test aircraft. These models were combined with a propulsion model identified from engine ground test data to produce a high-fidelity nonlinear flight simulation very efficiently. Prediction testing using a multi-axis maneuver showed that the identified global model accurately predicted aircraft responses.
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
DSN system performance test Doppler noise models; noncoherent configuration
NASA Technical Reports Server (NTRS)
Bunce, R.
1977-01-01
The newer model for variance, the Allan technique, now adopted for testing, is analyzed in the subject mode. A model is generated (including a considerable contribution from the station secondary frequency standard) and rationalized with existing data. The variance model is definitely sound; the Allan technique mates theory and measure. The mean-frequency model is an estimate; this problem is yet to be rigorously resolved. The unaltered defining expressions are nonconvergent, and the observed mean is quite erratic.
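For reference, the two-sample (Allan) variance underlying the adopted technique can be computed with a short routine; the synthetic white-frequency-noise data and block sizes below are illustrative, not DSN test data.

```python
import numpy as np

def allan_variance(y, m=1):
    """Two-sample (Allan) variance of fractional frequency data y,
    averaged over blocks of m samples (a standard textbook form)."""
    y = np.asarray(y, dtype=float)
    n_blocks = len(y) // m
    yb = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)  # block averages
    d = np.diff(yb)
    return 0.5 * np.mean(d ** 2)

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1e-12, 10_000)      # synthetic white frequency noise
for m in (1, 10, 100):
    print(m, allan_variance(y, m))       # decreases roughly as 1/m for white FM noise
```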
Kim, Sung-Jae; Chun, Yong-Min; Kim, Sung-Hwan; Moon, Hong-Kyo; Jang, Jae-Won
2013-07-01
The purpose of this study was to compare four graft-tunnel angles (GTA), the femoral GTA formed by three different femoral tunneling techniques (the outside-in, a modified inside-out technique in the posterior sag position with knee hyperflexion, and the conventional inside-out technique) and the tibia GTA in 3-dimensional (3D) knee flexion models, as well as to examine the influence of femoral tunneling techniques on the contact pressure between the intra-articular aperture of the femoral tunnel and the graft. Twelve cadaveric knees were tested. Computed tomography scans were performed at different knee flexion angles (0°, 45°, 90°, and 120°). Femoral and tibial GTAs were measured at different knee flexion angles on the 3D knee models. Using pressure sensitive films, stress on the graft of the angulation of the femoral tunnel aperture was measured in posterior cruciate ligament reconstructed cadaveric knees. Between 45° and 120° of knee flexion, there were no significant differences between the outside-in and modified inside-out techniques. However, the femoral GTA for the conventional inside-out technique was significantly less than that for the other two techniques (p<0.001). In cadaveric experiments using pressure-sensitive film, the maximum contact pressure for the modified inside-out and outside-in technique was significantly lower than that for the conventional inside-out technique (p=0.024 and p=0.017). The conventional inside-out technique results in a significantly lesser GTA and higher stress at the intra-articular aperture of the femoral tunnel than the outside-in technique. However, the results for the modified inside-out technique are similar to those for the outside-in technique.
NASA Astrophysics Data System (ADS)
Kumar, Shashi; Khati, Unmesh G.; Chandola, Shreya; Agrawal, Shefali; Kushwaha, Satya P. S.
2017-08-01
The regulation of the carbon cycle is a critical ecosystem service provided by forests globally. It is, therefore, necessary to have robust techniques for speedy assessment of forest biophysical parameters at the landscape level. It is arduous and time-consuming to monitor the status of vast forest landscapes using traditional field methods. Remote sensing and GIS techniques are efficient tools that can monitor the health of forests regularly. Biomass estimation is a key parameter in the assessment of forest health. Polarimetric SAR (PolSAR) remote sensing has already shown its potential for forest biophysical parameter retrieval. The current research work focuses on the retrieval of forest biophysical parameters of tropical deciduous forest, using fully polarimetric spaceborne C-band data with Polarimetric SAR Interferometry (PolInSAR) techniques. A PolSAR-based Interferometric Water Cloud Model (IWCM) has been used to estimate aboveground biomass (AGB). Input parameters to the IWCM have been extracted from the decomposition modeling of SAR data as well as PolInSAR coherence estimation. The technique of forest tree height retrieval utilized a PolInSAR coherence-based modeling approach. Two techniques - Coherence Amplitude Inversion (CAI) and Three Stage Inversion (TSI) - for forest height estimation are discussed, compared and validated. These techniques allow estimation of forest stand height and true ground topography. The accuracy of the forest height estimated is assessed using ground-based measurements. PolInSAR-based forest height models showed weaknesses in identifying forest vegetation, and as a result height values were obtained in river channels and plain areas. Overestimation in forest height was also noticed at several patches of the forest. To overcome this problem, a coherence- and backscatter-based threshold technique is introduced for forest area identification and accurate height estimation in non-forested regions. IWCM-based modeling for forest AGB retrieval showed an R2 value of 0.5, RMSE of 62.73 (t ha-1) and a percent accuracy of 51%. TSI-based PolInSAR inversion modeling showed the most accurate result for forest height estimation. The correlation between the field-measured forest height and the estimated tree height using the TSI technique is 62%, with an average accuracy of 91.56% and RMSE of 2.28 m. The study suggested that the PolInSAR coherence-based modeling approach has significant potential for retrieval of forest biophysical parameters.
Impact of multicollinearity on small sample hydrologic regression models
NASA Astrophysics Data System (ADS)
Kroll, Charles N.; Song, Peter
2013-06-01
Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely on model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
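Three of the techniques compared above (OLS, PCR, PLS) can be sketched with scikit-learn on a small synthetic dataset with highly correlated explanatory variables; the data-generating model and sample size are assumptions for illustration, not the study's Monte Carlo design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 40                                        # small sample, as in regional regression
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)    # highly correlated explanatory variable
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + 0.5 * rng.normal(size=n)

ols = LinearRegression().fit(X, y)
pcr = make_pipeline(PCA(n_components=1), LinearRegression()).fit(X, y)   # principal component regression
pls = PLSRegression(n_components=1).fit(X, y)                            # partial least squares

print("OLS coefficients:", ols.coef_)         # often inflated/unstable under multicollinearity
print("PCR R^2:", pcr.score(X, y))
print("PLS R^2:", pls.score(X, y))
```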
Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter J E; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong
2017-08-03
The feature selection (FS) process is essential in the medical area as it reduces the effort and time needed for physicians to measure unnecessary features. Choosing useful variables is a difficult task in the presence of censoring, which is the unique characteristic of survival analysis. Most survival FS methods depend on Cox's proportional hazard model; machine learning techniques (MLT) are preferred but not commonly used due to censoring. Techniques that have been proposed to adopt MLT to perform FS with survival data cannot be used with a high level of censoring. The researchers' previous publications proposed a technique to deal with the high level of censoring and used existing FS techniques to reduce dataset dimension. In this paper, however, a new FS technique is proposed and combined with feature transformation and the proposed uncensoring approaches to select a reduced set of features and produce a stable predictive model. Specifically, an FS technique based on artificial neural network (ANN) MLT is proposed to deal with highly censored Endovascular Aortic Repair (EVAR) survival data. EVAR datasets were collected from 2004 to 2010 from two vascular centers in order to produce a final stable model; they contain almost 91% censored patients. The proposed approach used a wrapper FS method with ANN to select a reduced subset of features that predict the risk of EVAR re-intervention after 5 years for patients from two different centers located in the United Kingdom, to allow it to be potentially applied to cross-center predictions. The proposed model is compared with two popular FS techniques, the Akaike and Bayesian information criteria (AIC, BIC), that are used with Cox's model. The final model outperforms the other methods in distinguishing the high- and low-risk groups, as they both have a concordance index and estimated AUC better than Cox's model based on the AIC, BIC, Lasso, and SCAD approaches. These models have p-values lower than 0.05, meaning that patients in different risk groups can be separated significantly and those who would need re-intervention can be correctly predicted. The proposed approach will save the time and effort spent by physicians collecting unnecessary variables. The final reduced model was able to predict the long-term risk of aortic complications after EVAR. This predictive model can help clinicians decide patients' future observation plans.
Review of the systems biology of the immune system using agent-based models.
Shinde, Snehal B; Kurhekar, Manish P
2018-06-01
The immune system is an inherent protection system in vertebrate animals including human beings that exhibit properties such as self-organisation, self-adaptation, learning, and recognition. It interacts with the other allied systems such as the gut and lymph nodes. There is a need for immune system modelling to know about its complex internal mechanism, to understand how it maintains the homoeostasis, and how it interacts with the other systems. There are two types of modelling techniques used for the simulation of features of the immune system: equation-based modelling (EBM) and agent-based modelling. Owing to certain shortcomings of the EBM, agent-based modelling techniques are being widely used. This technique provides various predictions for disease causes and treatments; it also helps in hypothesis verification. This study presents a review of agent-based modelling of the immune system and its interactions with the gut and lymph nodes. The authors also review the modelling of immune system interactions during tuberculosis and cancer. In addition, they also outline the future research directions for the immune system simulation through agent-based techniques such as the effects of stress on the immune system, evolution of the immune system, and identification of the parameters for a healthy immune system.
NASA Astrophysics Data System (ADS)
Mathai, Pramod P.
This thesis focuses on applying and augmenting 'Reduced Order Modeling' (ROM) techniques to large scale problems. ROM refers to the set of mathematical techniques that are used to reduce the computational expense of conventional modeling techniques, like finite element and finite difference methods, while minimizing the loss of accuracy that typically accompanies such a reduction. The first problem that we address pertains to the prediction of the level of heat dissipation in electronic and MEMS devices. With the ever decreasing feature sizes in electronic devices, and the accompanied rise in Joule heating, the electronics industry has, since the 1990s, identified a clear need for computationally cheap heat transfer modeling techniques that can be incorporated along with the electronic design process. We demonstrate how one can create reduced order models for simulating heat conduction in individual components that constitute an idealized electronic device. The reduced order models are created using Krylov Subspace Techniques (KST). We introduce a novel 'plug and play' approach, based on the small gain theorem in control theory, to interconnect these component reduced order models (according to the device architecture) to reliably and cheaply replicate whole device behavior. The final aim is to have this technique available commercially as a computationally cheap and reliable option that enables a designer to optimize for heat dissipation among competing VLSI architectures. Another place where model reduction is crucial to better design is Isoelectric Focusing (IEF) - the second problem in this thesis - which is a popular technique that is used to separate minute amounts of proteins from the other constituents that are present in a typical biological tissue sample. Fundamental questions about how to design IEF experiments still remain because of the high dimensional and highly nonlinear nature of the differential equations that describe the IEF process as well as the uncertainty in the parameters of the differential equations. There is a clear need to design better experiments for IEF without the current overhead of expensive chemicals and labor. We show how with a simpler modeling of the underlying chemistry, we can still achieve the accuracy that has been achieved in existing literature for modeling small ranges of pH (hydrogen ion concentration) in IEF, but with far less computational time. We investigate a further reduction of time by modeling the IEF problem using the Proper Orthogonal Decomposition (POD) technique and show why POD may not be sufficient due to the underlying constraints. The final problem that we address in this thesis addresses a certain class of dynamics with high stiffness - in particular, differential algebraic equations. With the help of simple examples, we show how the traditional POD procedure will fail to model certain high stiffness problems due to a particular behavior of the vector field which we will denote as twist. We further show how a novel augmentation to the traditional POD algorithm can model-reduce problems with twist in a computationally cheap manner without any additional data requirements.
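A minimal sketch of the POD step mentioned above, computing a reduced basis from a snapshot matrix via the SVD; the synthetic snapshots stand in for the thesis's heat-conduction or IEF fields.

```python
import numpy as np

# Proper Orthogonal Decomposition (POD) via the SVD of a snapshot matrix.
# The "snapshots" here are synthetic: a few hidden modes plus small noise.
rng = np.random.default_rng(0)
n_dof, n_snap = 500, 60
modes_true = rng.normal(size=(n_dof, 3))
coeffs = rng.normal(size=(3, n_snap))
snapshots = modes_true @ coeffs + 1e-3 * rng.normal(size=(n_dof, n_snap))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes needed to capture 99.9% of the energy
basis = U[:, :r]                               # reduced-order basis
reduced = basis.T @ snapshots                  # snapshots projected onto the POD modes
print("retained modes:", r)
```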
ERIC Educational Resources Information Center
Komis, Vassilis; Ergazaki, Marida; Zogza, Vassiliki
2007-01-01
This study aims at highlighting the collaborative activity of two high school students (age 14) in the cases of modeling the complex biological process of plant growth with two different tools: the "paper & pencil" concept mapping technique and the computer-supported educational environment "ModelsCreator". Students' shared activity in both cases…
ERIC Educational Resources Information Center
Muth, Chelsea; Bales, Karen L.; Hinde, Katie; Maninger, Nicole; Mendoza, Sally P.; Ferrer, Emilio
2016-01-01
Unavoidable sample size issues beset psychological research that involves scarce populations or costly laboratory procedures. When incorporating longitudinal designs these samples are further reduced by traditional modeling techniques, which perform listwise deletion for any instance of missing data. Moreover, these techniques are limited in their…
USDA-ARS?s Scientific Manuscript database
The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...
Predicting School Enrollments Using the Modified Regression Technique.
ERIC Educational Resources Information Center
Grip, Richard S.; Young, John W.
This report is based on a study in which a regression model was constructed to increase accuracy in enrollment predictions. A model, known as the Modified Regression Technique (MRT), was used to examine K-12 enrollment over the past 20 years in 2 New Jersey school districts of similar size and ethnicity. To test the model's accuracy, MRT was…
Techniques for Down-Sampling a Measured Surface Height Map for Model Validation
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.
Abstraction Techniques for Parameterized Verification
2006-11-01
...approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques. ...Applying model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite...
Wathen, Brent; Kuiper, Michael; Walker, Virginia; Jia, Zongchao
2003-01-22
A novel computational technique for modeling crystal formation has been developed that combines three-dimensional (3-D) molecular representation and detailed energetics calculations of molecular mechanics techniques with the less-sophisticated probabilistic approach used by statistical techniques to study systems containing millions of molecules undergoing billions of interactions. Because our model incorporates both the structure of and the interaction energies between participating molecules, it enables the 3-D shape and surface properties of these molecules to directly affect crystal formation. This increase in model complexity has been achieved while simultaneously increasing the number of molecules in simulations by several orders of magnitude over previous statistical models. We have applied this technique to study the inhibitory effects of antifreeze proteins (AFPs) on ice-crystal formation. Modeling involving both fish and insect AFPs has produced results consistent with experimental observations, including the replication of ice-etching patterns, ice-growth inhibition, and specific AFP-induced ice morphologies. Our work suggests that the degree of AFP activity results more from AFP ice-binding orientation than from AFP ice-binding strength. This technique could readily be adapted to study other crystal and crystal inhibitor systems, or to study other noncrystal systems that exhibit regularity in the structuring of their component molecules, such as those associated with the new nanotechnologies.
Quantitative model validation of manipulative robot systems
NASA Astrophysics Data System (ADS)
Kartowisastro, Iman Herwidiana
This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulative system with revolute joints. Using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, and this approach is relatively more objective than the common visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, explaining the required distortion of the constant parameters within the model and the assessment of model adequacy. Due to the complexity of a robot model, only the first three degrees of freedom are considered, where all links are assumed rigid. The modelling involves the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. The conventional feedback control system is used in developing the model. The sensitivity of the system behavior to parameter changes is investigated, as some parameters are redundant. This investigation is important for selecting the most significant parameters to be distorted, which leads to a new term, the fundamental parameters. The transfer function approach has been chosen to validate an industrial robot quantitatively against the measured data due to its practicality. Initially, the assessment of the model fidelity criterion indicated that the model was not capable of explaining the transient record in terms of the model parameter uncertainties. Further investigations led to significant improvements of the model and better understanding of the model properties. After several improvements in the model, the fidelity criterion obtained was almost satisfied. Although the fidelity criterion is slightly less than unity, it has been shown that the distortion technique can be applied in a robot manipulative system. Using the validated model, the importance of friction terms in the model was highlighted with the aid of the partition control technique. It was also shown that the conventional feedback control scheme was insufficient for a robot manipulative system due to the high nonlinearity inherent in the robot manipulator.
Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.
2006-01-01
The use of multi-dimensional finite volume heat conduction techniques for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the standard one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the NASA Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-orientated heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients caused by striation patterns multi-dimensional heat transfer techniques were necessary to obtain more accurate heating rates. The use of the one-dimensional technique resulted in differences of 20% in the calculated heating rates compared to 2-D analysis because it did not account for lateral heat conduction in the model.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that could directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for the desired performance function. Nonetheless, the industry still uses the traditional technique to obtain those values. Lack of knowledge on optimization techniques is the main reason for this issue to be prolonged. Therefore, the simple yet easy to implement Optimal Cutting Parameters Selection System is introduced to help the manufacturer easily understand and determine the best optimal parameters for their turning operation. This new system consists of two stages: modelling and optimization. In modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, the Particle Swarm Optimization is again used to get the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and the industry by introducing a simple yet easy to implement optimization technique. This novel optimization technique can give accurate results besides being the fastest technique.
Research on an augmented Lagrangian penalty function algorithm for nonlinear programming
NASA Technical Reports Server (NTRS)
Frair, L.
1978-01-01
The augmented Lagrangian (ALAG) Penalty Function Algorithm for optimizing nonlinear mathematical models is discussed. The mathematical models of interest are deterministic in nature and finite dimensional optimization is assumed. A detailed review of penalty function techniques in general and the ALAG technique in particular is presented. Numerical experiments are conducted utilizing a number of nonlinear optimization problems to identify an efficient ALAG Penalty Function Technique for computer implementation.
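A minimal sketch of an ALAG iteration on a toy equality-constrained problem; the objective, constraint, penalty parameter, and crude inner gradient-descent solver are illustrative choices, not the report's algorithm.

```python
import numpy as np

# Toy problem:  minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2  subject to  h(x) = x1 + x2 - 2 = 0.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
h = lambda x: x[0] + x[1] - 2.0

def inner_min(lmbda, rho, x0, steps=500, lr=0.05):
    """Crude gradient descent on L(x) = f(x) + lmbda*h(x) + 0.5*rho*h(x)^2."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g_f = np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
        g_h = np.array([1.0, 1.0])
        x -= lr * (g_f + (lmbda + rho * h(x)) * g_h)
    return x

x, lmbda, rho = np.zeros(2), 0.0, 10.0
for _ in range(15):                 # outer ALAG iterations
    x = inner_min(lmbda, rho, x)
    lmbda += rho * h(x)             # first-order multiplier update
print(x, f(x), h(x))                # x -> (1.5, 0.5), with h(x) -> 0
```

The multiplier update lets the iteration converge with a fixed, moderate penalty parameter, which is the usual advantage of the augmented Lagrangian over a pure penalty method.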
Correcting for deformation in skin-based marker systems.
Alexander, E J; Andriacchi, T P
2001-03-01
A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living are modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in limb segment pose estimate by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.
New analytical technique for carbon dioxide absorption solvents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pouryousefi, F.; Idem, R.O.
2008-02-15
The densities and refractive indices of two binary systems (water + MEA and water + MDEA) and three ternary systems (water + MEA + CO2, water + MDEA + CO2, and water + MEA + MDEA) used for carbon dioxide (CO2) capture were measured over the range of compositions of the aqueous alkanolamine(s) used for CO2 absorption at temperatures from 295 to 338 K. Experimental densities were modeled empirically, while the experimental refractive indices were modeled using well-established models from the known values of their pure-component densities and refractive indices. The density and Gladstone-Dale refractive index models were then used to obtain the compositions of unknown samples of the binary and ternary systems by simultaneous solution of the density and refractive index equations. The results from this technique have been compared with HPLC (high-performance liquid chromatography) results, while a third independent technique (acid-base titration) was used to verify the results. The results show that the systems' compositions obtained from the simple and easy-to-use refractive index/density technique were very comparable to the expensive and laborious HPLC/titration techniques, suggesting that the refractive index/density technique can be used to replace existing methods for analysis of fresh or nondegraded, CO2-loaded, single and mixed alkanolamine solutions.
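The composition-from-two-properties idea can be sketched by solving the density and refractive index equations simultaneously; the empirical property functions, coefficients, and measured values below are placeholders, not the fitted models reported in the paper.

```python
from scipy.optimize import fsolve

# Sketch: solve the two property equations simultaneously for two unknowns
# (amine mass fraction and CO2 loading). All functional forms and numbers
# are illustrative placeholders.
def density(w_amine, loading):           # assumed linear empirical fit (g/cm^3)
    return 0.997 + 0.012 * w_amine + 0.090 * loading

def refractive_index(w_amine, loading):  # assumed Gladstone-Dale-style linear mix
    return 1.333 + 0.020 * w_amine + 0.015 * loading

rho_meas, n_meas = 1.037, 1.345          # hypothetical measured values

def residuals(x):
    w, a = x
    return [density(w, a) - rho_meas, refractive_index(w, a) - n_meas]

w_amine, loading = fsolve(residuals, x0=[0.3, 0.2])
print(f"amine mass fraction ~ {w_amine:.3f}, CO2 loading ~ {loading:.3f}")
```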
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
Innovative application of virtual display technique in virtual museum
NASA Astrophysics Data System (ADS)
Zhang, Jiankang
2017-09-01
A virtual museum displays and simulates the functions of a real museum on the Internet in the form of three-dimensional (3D) virtual reality by means of interactive programs. Based on the Virtual Reality Modeling Language, building a virtual museum and achieving effective interaction with the offline museum depend on making full use of the 3D panorama technique, the virtual reality technique and the augmented reality technique, and on innovatively applying dynamic environment modeling, real-time 3D graphics generation, system integration and other key virtual reality techniques to support the overall design of the virtual museum. The 3D panorama technique, also known as panoramic photography or virtual reality, is based on static images of reality. The virtual reality technique is a computer simulation system which can create, and let users experience, an interactive 3D dynamic visual world. Augmented reality, also known as mixed reality, simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience directly in reality. These technologies make the virtual museum possible. It will not only bring better experiences and convenience to the public, but will also help improve the influence and cultural functions of the real museum.
A Biomechanical Modeling Guided CBCT Estimation Technique
Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing
2017-01-01
Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks. PMID:27831866
A simple white noise analysis of neuronal light responses.
Chichilnisky, E J
2001-05-01
A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
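A concrete and widely used instance of this family of estimators is the spike-triggered average of a white-noise stimulus; the sketch below is a minimal illustration under an assumed linear-nonlinear-Poisson response, not a reproduction of the paper's recordings or its exact estimator.

```python
# Hedged sketch: estimate a linear receptive field by spike-triggered averaging
# of a white-noise stimulus, the simplest estimator of this class.
import numpy as np

rng = np.random.default_rng(1)
T, L = 50_000, 20
stimulus = rng.normal(size=T)                          # white-noise stimulus
true_filter = np.exp(-np.arange(L) / 5.0) * np.sin(np.arange(L) / 2.0)

# simulate spikes with a rectified-linear nonlinearity (LNP-style toy neuron)
drive = np.convolve(stimulus, true_filter, mode="full")[:T]
rate = np.maximum(drive, 0.0) * 0.1
spikes = rng.poisson(rate)

# spike-triggered average: mean stimulus segment preceding each spike
sta = np.zeros(L)
for t in np.nonzero(spikes)[0]:
    if t >= L:
        sta += spikes[t] * stimulus[t - L + 1:t + 1][::-1]
sta /= spikes[L:].sum()
print("correlation with true filter:", np.corrcoef(sta, true_filter)[0, 1].round(3))
```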
A reduced order, test verified component mode synthesis approach for system modeling applications
NASA Astrophysics Data System (ADS)
Butland, Adam; Avitabile, Peter
2010-05-01
Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique with both numerical and simulated experimental components to describe the system and validate the proposed approach. Actual test data is then used in the proposed approach. Because typical measurement contaminants are always included in any test, the measured data is further processed to remove contaminants and is then used in the proposed approach. The final case, using improved data with the reduced order, test verified components, is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. Use of the technique, along with its strengths and weaknesses, is discussed.
APPLICATION OF STABLE ISOTOPE TECHNIQUES TO AIR POLLUTION RESEARCH
Stable isotope techniques provide a robust, yet under-utilized tool for examining pollutant effects on plant growth and ecosystem function. Here, we survey a range of mixing model, physiological and system level applications for documenting pollutant effects. Mixing model examp...
ELECTRICAL RESISTIVITY TECHNIQUE TO ASSESS THE INTEGRITY OF GEOMEMBRANE LINERS
Two-dimensional electrical modeling of a liner system was performed using computer techniques. The modeling effort examined the voltage distributions in cross sections of lined facilities with different leak locations. Results confirmed that leaks in the liner influenced voltage ...
Novel Plasmonic and Hyperbolic Optical Materials for Control of Quantum Nanoemitters
2016-12-08
properties, metal ion implantation techniques, and multi-physics modeling to produce hyperbolic quantum nanoemitters. During the course of this project we studied plasmonic
An operator calculus for surface and volume modeling
NASA Technical Reports Server (NTRS)
Gordon, W. J.
1984-01-01
The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
Large Terrain Modeling and Visualization for Planets
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher
2011-01-01
Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulations is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail technique that delivers multiple-frames-per-second performance even for planetary-scale terrain models.
Collisional-radiative switching - A powerful technique for converging non-LTE calculations
NASA Technical Reports Server (NTRS)
Hummer, D. G.; Voels, S. A.
1988-01-01
A very simple technique has been developed to converge statistical equilibrium and model atmospheric calculations in extreme non-LTE conditions when the usual iterative methods fail to converge from an LTE starting model. The proposed technique is based on a smooth transition from a collision-dominated LTE situation to the desired non-LTE conditions in which radiation dominates, at least in the most important transitions. The proposed approach was used to successfully compute stellar models with He abundances of 0.20, 0.30, and 0.50; Teff = 30,000 K, and log g = 2.9.
Prediction of drug synergy in cancer using ensemble-based machine learning techniques
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder
2018-04-01
Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic successes. Examination of different drug-drug interactions can be done using the drug synergy score. It needs efficient regression-based machine learning approaches to minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to realize the requirement mentioned above. However, these techniques individually do not provide significant accuracy in the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented by considering the drug synergy data. Based on the accuracy of each model, four techniques with high accuracy are selected to develop an ensemble-based machine learning model. These models are Random Forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning method (GFS.GCCL), Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Dynamic Evolving Neural-Fuzzy Inference System method (DENFIS). Ensembling is achieved by evaluating the biased weighted aggregation (i.e. adding more weight to the models with higher prediction scores) of the data predicted by the selected models. The proposed and existing machine learning techniques have been evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error and coefficient of correlation.
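The biased weighted aggregation step can be sketched generically: each component regressor is weighted by its validation score, so better models contribute more to the ensemble prediction. Generic scikit-learn regressors and a synthetic data set stand in for the fuzzy/neuro-fuzzy models and the drug synergy data used in the paper.

```python
# Sketch of biased weighted aggregation: each regressor's prediction is weighted
# by its validation R^2, so more accurate models count more in the ensemble.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [RandomForestRegressor(random_state=0), Ridge(), KNeighborsRegressor()]
scores = np.array([m.fit(X_tr, y_tr).score(X_te, y_te) for m in models])   # R^2 per model
weights = np.clip(scores, 0, None)
weights /= weights.sum()                                                    # biased weighting

ensemble_pred = sum(w * m.predict(X_te) for w, m in zip(weights, models))
rmse = np.sqrt(np.mean((ensemble_pred - y_te) ** 2))
print("weights:", weights.round(2), "ensemble RMSE:", round(rmse, 2))
```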
Sabouhi, Mahmoud; Bajoghli, Farshad; Abolhasani, Majid
2015-01-01
The success of an implant-supported prosthesis is dependent on the passive fit of its framework fabricated on a precise cast. The aim of this in vitro study was to digitally compare the three-dimensional accuracy of implant impression techniques in partially and completely edentulous conditions. The master model simulated two clinical conditions. The first condition was a partially edentulous mandibular arch with an anterior edentulous space (D condition). Two implant analogs were inserted in bilateral canine sites. After elimination of the teeth, the model was converted to a completely edentulous condition (E condition). Three different impression techniques were performed (open splinted [OS], open unsplinted [OU], closed [C]) for each condition. Six groups of casts (DOS, DOU, DC, EOS, EOU, EC) (n = 8), totaling 48 casts, were made. Two scan bodies were secured onto the master edentulous model and onto each test cast and digitized by an optical scanning system. The related scans were superimposed, and the mean discrepancy for each cast was determined. The statistical analysis showed no significant difference in the accuracy of casts as a function of model status (P = .78, analysis of variance [ANOVA] test), impression technique (P = .57, ANOVA test), or as the combination of both (P = .29, ANOVA test). The distribution of data was normal (Kolmogorov-Smirnov test). Model status (dentate or edentulous) and impression technique did not influence the precision of the casts. There is no difference among any of the impression techniques in either simulated clinical condition.
Fujibayashi, Nobuaki; Otsuka, Mitsuo; Yoshioka, Shinsuke; Isaka, Tadao
2017-10-24
The present study aims to cross-sectionally clarify the characteristics of the motions of an inverted pendulum model, a stance leg, a swing leg and the arms in different triple-jumping techniques, to understand whether the hop displacement is made relatively longer than the step and jump displacements. Eighteen male athletes performed the triple jump with a full run-up. Based on their technique, the jumpers were classified as hop-dominated (n = 10) or balance (n = 8) jumpers. The kinematic data were calculated using motion capture and compared between the two techniques using the inverted pendulum model. The hop-dominated jumpers had a significantly longer hop displacement and faster vertical centre-of-mass (COM) velocity of their whole body at hop take-off, which was generated by faster rotation behaviours of the inverted pendulum model and faster swinging behaviours of the arms. Conversely, balance jumpers had a significantly longer jump displacement and faster horizontal COM velocity of their whole body at take-off, which was generated by a stiffer inverted pendulum model and stance leg. The results demonstrate that hop-dominated and balance jumpers enhanced their respective dominant jump displacements using different swing- and stance-leg motions. This information may help to enhance the actual displacement of triple jumpers using different jumping techniques.
Modeling software systems by domains
NASA Technical Reports Server (NTRS)
Dippolito, Richard; Lee, Kenneth
1992-01-01
The Software Architectures Engineering (SAE) Project at the Software Engineering Institute (SEI) has developed engineering modeling techniques that both reduce the complexity of software for domain-specific computer systems and result in systems that are easier to build and maintain. These techniques allow maximum freedom for system developers to apply their domain expertise to software. We have applied these techniques to several types of applications, including training simulators operating in real time, engineering simulators operating in non-real time, and real-time embedded computer systems. Our modeling techniques result in software that mirrors both the complexity of the application and the domain knowledge requirements. We submit that the proper measure of software complexity reflects neither the number of software component units nor the code count, but the locus of and amount of domain knowledge. As a result of using these techniques, domain knowledge is isolated by fields of engineering expertise and removed from the concern of the software engineer. In this paper, we will describe kinds of domain expertise, describe engineering by domains, and provide relevant examples of software developed for simulator applications using the techniques.
Model Reduction for Control System Design
NASA Technical Reports Server (NTRS)
Enns, D. F.
1985-01-01
An approach and a technique for effectively obtaining reduced order mathematical models of a given large order model for the purposes of synthesis, analysis and implementation of control systems is developed. This approach involves the use of an error criterion which is the H-infinity norm of a frequency weighted error between the full and reduced order models. The weightings are chosen to take into account the purpose for which the reduced order model is intended. A previously unknown error bound in the H-infinity norm for reduced order models obtained from internally balanced realizations was obtained. This motivated further development of the balancing technique to include the frequency dependent weightings. This resulted in the frequency weighted balanced realization and a new model reduction technique. Two approaches to designing reduced order controllers were developed. The first involves reducing the order of a high order controller with an appropriate weighting. The second involves linear quadratic Gaussian synthesis based on a reduced order model obtained with an appropriate weighting.
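As background for the frequency-weighted extension described above, the sketch below carries out plain internally balanced truncation on a random stable state-space model, including the "twice the sum of discarded Hankel singular values" H-infinity error bound; the weighting step itself is omitted and the example system is arbitrary.

```python
# Minimal sketch of unweighted internally balanced truncation, the baseline the
# frequency-weighted extension builds on. A random stable state-space model is
# reduced by discarding states with small Hankel singular values.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(0)
n, r = 8, 3                                           # full and reduced orders
A = rng.normal(size=(n, n))
A -= (np.max(np.real(np.linalg.eigvals(A))) + 1.0) * np.eye(n)   # shift eigenvalues to make A stable
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))

Wc = solve_continuous_lyapunov(A, -B @ B.T)           # controllability Gramian
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)         # observability Gramian

Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, s, Vt = svd(Lo.T @ Lc)                             # s holds the Hankel singular values
T = Lc @ Vt.T @ np.diag(s ** -0.5)                    # balancing transformation
Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T

Ar, Br, Cr = (Tinv @ A @ T)[:r, :r], (Tinv @ B)[:r], (C @ T)[:, :r]
print("Hankel singular values:", np.round(s, 4))
print("a priori H-infinity error bound:", round(2 * s[r:].sum(), 4))
```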
User Selection Criteria of Airspace Designs in Flexible Airspace Management
NASA Technical Reports Server (NTRS)
Lee, Hwasoo E.; Lee, Paul U.; Jung, Jaewoo; Lai, Chok Fung
2011-01-01
A method for identifying global aerodynamic models from flight data in an efficient manner is explained and demonstrated. A novel experiment design technique was used to obtain dynamic flight data over a range of flight conditions with a single flight maneuver. Multivariate polynomials and polynomial splines were used with orthogonalization techniques and statistical modeling metrics to synthesize global nonlinear aerodynamic models directly and completely from flight data alone. Simulation data and flight data from a subscale twin-engine jet transport aircraft were used to demonstrate the techniques. Results showed that global multivariate nonlinear aerodynamic dependencies could be accurately identified using flight data from a single maneuver. Flight-derived global aerodynamic model structures, model parameter estimates, and associated uncertainties were provided for all six nondimensional force and moment coefficients for the test aircraft. These models were combined with a propulsion model identified from engine ground test data to produce a high-fidelity nonlinear flight simulation very efficiently. Prediction testing using a multi-axis maneuver showed that the identified global model accurately predicted aircraft responses.
NASA Technical Reports Server (NTRS)
Bell, James H.; Burner, Alpheus W.
2004-01-01
As the benefit-to-cost ratio of advanced optical techniques for wind tunnel measurements such as Video Model Deformation (VMD), Pressure-Sensitive Paint (PSP), and others increases, these techniques are being used more and more often in large-scale production type facilities. Further benefits might be achieved if multiple optical techniques could be deployed in a wind tunnel test simultaneously. The present study discusses the problems and benefits of combining VMD and PSP systems. The desirable attributes of useful optical techniques for wind tunnels, including the ability to accommodate the myriad optical techniques available today, are discussed. The VMD and PSP techniques are briefly reviewed. Commonalities and differences between the two techniques are discussed. Recent wind tunnel experiences and problems when combining PSP and VMD are presented, as are suggestions for future developments in combined PSP and deformation measurements.
Kahramangil, Bora; Mohsin, Khuzema; Alzahrani, Hassan; Bu Ali, Daniah; Tausif, Syed; Kang, Sang-Wook; Kandil, Emad; Berber, Eren
2017-12-01
Numerous new approaches have been described over the years to improve the cosmetic outcomes of thyroid surgery. Transoral approach is a new technique that aims to achieve superior cosmetic outcomes by concealing the incision in the oral cavity. Transoral thyroidectomy through vestibular approach was performed in two institutions on cadaveric models. Procedure was performed endoscopically in one institution, while the robotic technique was utilized at the other. Transoral thyroidectomy was successfully performed at both institutions with robotic and endoscopic techniques. All vital structures were identified and preserved. Transoral thyroidectomy has been performed in animal and cadaveric models, as well as in some clinical studies. Our initial experience indicates the feasibility of this approach. More clinical studies are required to elucidate its full utility.
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
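One of the four compared techniques, the Gaussian process metamodel, can be sketched for the single input-single output case: fit a GP to sparse data and check the empirical reliability of its prediction interval against fresh draws from an assumed data-generating mechanism. The kernel choice and test function below are illustrative, not those used in the paper.

```python
# Sketch of one compared metamodel: a Gaussian process fit to sparse 1D data,
# with +/- 2 sigma prediction intervals whose coverage is checked empirically.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.3 * x                  # hypothetical data-generating mechanism
X_train = rng.uniform(0, 3, size=(12, 1))
y_train = f(X_train.ravel()) + rng.normal(scale=0.1, size=12)

gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True).fit(X_train, y_train)

X_new = np.linspace(0, 3, 200).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
y_new = f(X_new.ravel()) + rng.normal(scale=0.1, size=200)
coverage = np.mean(np.abs(y_new - mean) <= 2 * std)    # empirical reliability of the interval
print("empirical coverage of +/- 2 sigma interval:", round(coverage, 3))
```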
Extended charge banking model of dual path shocks for implantable cardioverter defibrillators
Dosdall, Derek J; Sweeney, James D
2008-01-01
Background Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations without a practical model to aid in improving dual path defibrillation techniques. Methods The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical mass criteria to predict the relative efficacy of single and dual path defibrillation shocks. Results The new model reproduced the results from several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Discussion Through this approach the Extended Charge Banking Model predictions may be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool to help in further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters. PMID:18673561
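The RC building block of the Charge Banking Model, and the extension's idea of weighting many small myocardial sections by the local field gradient, can be sketched as follows; the time constant, weights, and threshold are illustrative values, not the calibrated parameters of the model.

```python
# Sketch of the RC "charge banking" building block plus the extension's idea of
# many weighted sections: each section charges toward the local field with time
# constant tau; sections reaching a threshold are counted as captured.
import numpy as np

def rc_response(v_applied, dt, tau):
    """Membrane response of one RC section to a piecewise-constant shock waveform."""
    v = np.zeros(len(v_applied))
    for k in range(1, len(v_applied)):
        v[k] = v[k - 1] + dt * (v_applied[k - 1] - v[k - 1]) / tau
    return v

dt, tau = 0.01e-3, 3e-3                        # 10 us step, 3 ms membrane time constant (illustrative)
t = np.arange(0, 10e-3, dt)
shock = np.where(t < 5e-3, 1.0, 0.0)           # 5 ms monophasic pulse, normalized amplitude

weights = np.array([1.0, 0.6, 0.3, 0.1])       # illustrative field-gradient weighting per section
responses = np.array([rc_response(w * shock, dt, tau) for w in weights])
critical = 0.25                                 # illustrative activation threshold
fraction_captured = np.mean(responses.max(axis=1) >= critical)
print("fraction of sections reaching threshold:", fraction_captured)
```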
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mian, Muhammad Umer, E-mail: umermian@gmail.com; Khir, M. H. Md.; Tang, T. B.
Pre-fabrication behavioural and performance analysis with computer-aided design (CAD) tools is a common practice that reduces fabrication costs. In light of this, we present a simulation methodology for a dual-mass-oscillator-based 3 Degree of Freedom (3-DoF) MEMS gyroscope. The 3-DoF gyroscope is modeled through lumped-parameter models using equivalent circuit elements. These equivalent circuits consist of elementary components which are counterparts of the respective mechanical components used to design and fabricate the 3-DoF MEMS gyroscope. The complete design of the equivalent circuit model, the mathematical modeling and the simulation are presented in this paper. Behaviors of the equivalent lumped models derived for the proposed device design are simulated in MEMSPRO T-SPICE software. Simulations are carried out with design specifications following the design rules of the MetalMUMPS fabrication process. Drive-mass resonant frequencies simulated by this technique are 1.59 kHz and 2.05 kHz respectively, which are close to the resonant frequencies found by the analytical formulation of the gyroscope. The lumped equivalent circuit modeling technique proved to be a time-efficient modeling technique for the analysis of complex MEMS devices like 3-DoF gyroscopes. The technique is an alternative to the complex and time-consuming coupled-field Finite Element Analysis (FEA) used previously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaBelle, S.J.; Smith, A.E.; Seymour, D.A.
1977-02-01
The technique applies equally well to new or existing airports. The importance of accurate accounting of emissions cannot be overstated. The regional oxidant modelling technique used in conjunction with a balance sheet review must be a proportional reduction technique. This type of emission balancing presumes equality of all sources in the analysis region. The technique can be applied successfully in the highway context, either in planning at the system level or when looking at projects individually. The project-by-project reviews could be used to examine each project in the same way as the airport projects are examined for their impact on desired regional emission levels. The primary limitation of this technique is that it should not be used when simulation models have been used for regional oxidant air quality. In the case of highway projects, the balance sheet technique might appear to be limited; the real limitations are in the transportation planning process. That planning process is not well-suited to the needs of air quality forecasting. If the transportation forecasting techniques are insensitive to change in the variables that affect HC emissions, then no internal emission trade-offs can be identified, and the initial highway emission forecasts are themselves suspect. In general, the balance sheet technique is limited by the quality of the data used in the review. Additionally, the technique does not point out effective trade-off strategies, nor does it indicate when it might be worthwhile to ignore small amounts of excess emissions. Used in the context of regional air quality plans based on proportional reduction models, the balance sheet analysis technique shows promise as a useful method for state or regional reviewing agencies.
NASA Astrophysics Data System (ADS)
Wang, Haibo; Swee Poo, Gee
2004-08-01
We study the provisioning of virtual private network (VPN) service over WDM optical networks. For this purpose, we investigate the blocking performance of the hose model versus the pipe model for the provisioning. Two techniques are presented: an analytical queuing model and a discrete event simulation. The queuing model is developed from the multirate reduced-load approximation technique. The simulation is done with the OPNET simulator. Several experimental situations were used. The blocking probabilities calculated from the two approaches show a close match, indicating that the multirate reduced-load approximation technique is capable of predicting the blocking performance for the pipe model and the hose model in WDM networks. A comparison of the blocking behavior of the two models shows that the hose model has superior blocking performance as compared with pipe model. By and large, the blocking probability of the hose model is better than that of the pipe model by a few orders of magnitude, particularly at low load regions. The flexibility of the hose model allowing for the sharing of resources on a link among all connections accounts for its superior performance.
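The reduced-load idea behind the analytical model can be sketched in a much-simplified single-rate form: blocking on each link follows the Erlang B formula with offered traffic thinned by blocking on the other links of each route, iterated to a fixed point. The topology, capacities, and loads below are illustrative, and the multirate and WDM-specific details of the paper's model are omitted.

```python
# Simplified reduced-load (Erlang fixed-point) illustration: per-link blocking
# from Erlang B, with route traffic thinned by blocking on the other links.
import numpy as np

def erlang_b(load, capacity):
    b = 1.0
    for m in range(1, capacity + 1):
        b = load * b / (m + load * b)
    return b

links = {"L1": 16, "L2": 16, "L3": 16}                            # wavelengths per link (illustrative)
routes = {"r1": (["L1", "L2"], 6.0), "r2": (["L2", "L3"], 5.0)}   # (path, offered Erlangs)

B = {l: 0.0 for l in links}
for _ in range(50):                                               # fixed-point iteration
    for l, cap in links.items():
        load = 0.0
        for path, a in routes.values():
            if l in path:
                thin = np.prod([1 - B[k] for k in path if k != l])
                load += a * thin
        B[l] = erlang_b(load, cap)

for r, (path, a) in routes.items():
    blocking = 1 - np.prod([1 - B[l] for l in path])
    print(r, "end-to-end blocking ~", round(blocking, 4))
```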
A dependency-based modelling mechanism for problem solving
NASA Technical Reports Server (NTRS)
London, P.
1978-01-01
The paper develops a technique of dependency net modeling which relies on an explicit representation of justifications for beliefs held by the problem solver. Using these justifications, the modeling mechanism is able to determine the relevant lines of inference to pursue during problem solving. Three particular problem-solving difficulties which may be handled by the dependency-based technique are discussed: (1) subgoal violation detection, (2) description binding, and (3) maintaining a consistent world model.
Agent-Based Modeling in Systems Pharmacology.
Cosgrove, J; Butler, J; Alden, K; Read, M; Kumar, V; Cucurull-Sanchez, L; Timmis, J; Coles, M
2015-11-01
Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent-based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogenous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM-specific strengths have yielded success in the area of preclinical mechanistic modeling.
Orthognathic model surgery with LEGO key-spacer.
Tsang, Alfred Chee-Ching; Lee, Alfred Siu Hong; Li, Wai Keung
2013-12-01
A new technique of model surgery using LEGO plates as key-spacers is described. This technique requires less time to set up compared with the conventional plaster model method. It also retains the preoperative setup with the same set of models. Movement of the segments can be measured and examined in detail with LEGO key-spacers. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Aircraft model prototypes which have specified handling-quality time histories
NASA Technical Reports Server (NTRS)
Johnson, S. H.
1976-01-01
Several techniques for obtaining linear constant-coefficient airplane models from specified handling-quality time histories are discussed. One technique, the pseudodata method, solves the basic problem, yields specified eigenvalues, and accommodates state-variable transfer-function zero suppression. The method is fully illustrated for a fourth-order stability-axis small-motion model with three lateral handling-quality time histories specified. The FORTRAN program which obtains and verifies the model is included and fully documented.
Sivaramakrishnan, Shyam; Rajamani, Rajesh; Johnson, Bruce D
2009-01-01
Respiratory CO(2) measurement (capnography) is an important diagnostic tool that lacks inexpensive and wearable sensors. This paper develops techniques to enable use of inexpensive but slow CO(2) sensors for breath-by-breath tracking of CO(2) concentration. This is achieved by mathematically modeling the dynamic response and using model-inversion techniques to predict the input CO(2) concentration from the slow-varying output. Experiments are designed to identify the model dynamics and extract relevant model parameters for a solid-state room-monitoring CO(2) sensor. A second-order model that accounts for flow through the sensor's filter and casing is found to be accurate in describing the sensor's slow response. The resulting estimate is compared with a standard-of-care respiratory CO(2) analyzer and shown to effectively track variation in breath-by-breath CO(2) concentration. This methodology is potentially useful for measuring fast-varying inputs to any slow sensor.
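The model-inversion idea can be sketched with an assumed second-order low-pass sensor model: simulate the slow output, then recover the fast input by applying the inverse dynamics through smoothed numerical derivatives. The time constants and waveform below are illustrative, not the parameters identified in the paper.

```python
# Sketch of model inversion for a slow sensor: forward-simulate an assumed
# second-order low-pass response, then reconstruct the fast input from the
# inverse dynamics using smoothed finite-difference derivatives.
import numpy as np
from scipy.signal import lti, lsim, savgol_filter

tau1, tau2 = 4.0, 1.5                              # illustrative sensor time constants (s)
dt = 0.05
t = np.arange(0, 60, dt)
u_true = 0.04 + 0.01 * (np.sin(2 * np.pi * t / 4.0) > 0)   # breath-like CO2 square wave (fraction)

# forward model: G(s) = 1 / ((tau1 s + 1)(tau2 s + 1)), unit DC gain
sensor = lti([1.0], np.polymul([tau1, 1.0], [tau2, 1.0]))
_, y, _ = lsim(sensor, U=u_true, T=t)

# inverse model: u = tau1*tau2*y'' + (tau1+tau2)*y' + y
y_s = savgol_filter(y, 51, 3)
dy = np.gradient(y_s, dt)
d2y = np.gradient(dy, dt)
u_est = tau1 * tau2 * d2y + (tau1 + tau2) * dy + y_s
print("RMS reconstruction error:", round(float(np.sqrt(np.mean((u_est - u_true) ** 2))), 5))
```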
Model-based Clustering of High-Dimensional Data in Astrophysics
NASA Astrophysics Data System (ADS)
Bouveyron, C.
2016-05-01
The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in bulk or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. The recent developments in model-based classification overcome these drawbacks and allow high-dimensional data to be classified efficiently, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
Terminology model discovery using natural language processing and visualization techniques.
Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol
2006-12-01
Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.
Correlation techniques to determine model form in robust nonlinear system realization/identification
NASA Technical Reports Server (NTRS)
Stry, Greselda I.; Mook, D. Joseph
1991-01-01
The fundamental challenge in identification of nonlinear dynamic systems is determining the appropriate form of the model. A robust technique is presented which essentially eliminates this problem for many applications. The technique is based on the Minimum Model Error (MME) optimal estimation approach. A detailed literature review is included in which fundamental differences between the current approach and previous work are described. The most significant feature is the ability to identify nonlinear dynamic systems without prior assumption regarding the form of the nonlinearities, in contrast to existing nonlinear identification approaches which usually require detailed assumptions about the nonlinearities. Model form is determined via statistical correlation of the MME optimal state estimates with the MME optimal model error estimates. The example illustrations indicate that the method is robust with respect to prior ignorance of the model, and with respect to measurement noise, measurement frequency, and measurement record length.
Systems modeling and simulation applications for critical care medicine
2012-01-01
Critical care delivery is a complex, expensive, error prone, medical specialty and remains the focal point of major improvement efforts in healthcare delivery. Various modeling and simulation techniques offer unique opportunities to better understand the interactions between clinical physiology and care delivery. The novel insights gained from the systems perspective can then be used to develop and test new treatment strategies and make critical care delivery more efficient and effective. However, modeling and simulation applications in critical care remain underutilized. This article provides an overview of major computer-based simulation techniques as applied to critical care medicine. We provide three application examples of different simulation techniques, including a) pathophysiological model of acute lung injury, b) process modeling of critical care delivery, and c) an agent-based model to study interaction between pathophysiology and healthcare delivery. Finally, we identify certain challenges to, and opportunities for, future research in the area. PMID:22703718
NASA Technical Reports Server (NTRS)
Smith, Suzanne Weaver; Beattie, Christopher A.
1991-01-01
On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches have an identification technique to determine structural characteristics from the measurements of the structure response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.
Knowledge discovery in cardiology: A systematic literature review.
Kadi, I; Idri, A; Fernandez-Aleman, J L
2017-01-01
Data mining (DM) provides the methodology and technology needed to transform huge amounts of data into useful information for decision making. It is a powerful process employed to extract knowledge and discover new patterns embedded in large data sets. Data mining has been increasingly used in medicine, particularly in cardiology. In fact, DM applications can greatly benefit all those involved in cardiology, such as patients, cardiologists and nurses. The purpose of this paper is to review papers concerning the application of DM techniques in cardiology so as to summarize and analyze evidence regarding: (1) the DM techniques most frequently used in cardiology; (2) the performance of DM models in cardiology; (3) comparisons of the performance of different DM models in cardiology. We performed a systematic literature review of empirical studies on the application of DM techniques in cardiology published in the period between 1 January 2000 and 31 December 2015. A total of 149 articles published between 2000 and 2015 were selected, studied and analyzed according to the following criteria: DM techniques and performance of the approaches developed. The results obtained showed that a significant number of the studies selected used classification and prediction techniques when developing DM models. Neural networks, decision trees and support vector machines were identified as being the techniques most frequently employed when developing DM models in cardiology. Moreover, neural networks and support vector machines achieved the highest accuracy rates and were proved to be more efficient than other techniques. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Interface projection techniques for fluid-structure interaction modeling with moving-mesh methods
NASA Astrophysics Data System (ADS)
Tezduyar, Tayfun E.; Sathe, Sunil; Pausewang, Jason; Schwaab, Matthew; Christopher, Jason; Crabtree, Jason
2008-12-01
The stabilized space-time fluid-structure interaction (SSTFSI) technique developed by the Team for Advanced Flow Simulation and Modeling (T★AFSM) was applied to a number of 3D examples, including arterial fluid mechanics and parachute aerodynamics. Here we focus on the interface projection techniques that were developed as supplementary methods targeting the computational challenges associated with the geometric complexities of the fluid-structure interface. Although these supplementary techniques were developed in conjunction with the SSTFSI method and in the context of air-fabric interactions, they can also be used in conjunction with other moving-mesh methods, such as the Arbitrary Lagrangian-Eulerian (ALE) method, and in the context of other classes of FSI applications. The supplementary techniques currently consist of using split nodal values for pressure at the edges of the fabric and incompatible meshes at the air-fabric interfaces, the FSI Geometric Smoothing Technique (FSI-GST), and the Homogenized Modeling of Geometric Porosity (HMGP). Using split nodal values for pressure at the edges and incompatible meshes at the interfaces stabilizes the structural response at the edges of the membrane used in modeling the fabric. With the FSI-GST, the fluid mechanics mesh is sheltered from the consequences of the geometric complexity of the structure. With the HMGP, we bypass the intractable complexities of the geometric porosity by approximating it with an “equivalent”, locally-varying fabric porosity. As test cases demonstrating how the interface projection techniques work, we compute the air-fabric interactions of windsocks, sails and ringsail parachutes.
Practical Formal Verification of Diagnosability of Large Models via Symbolic Model Checking
NASA Technical Reports Server (NTRS)
Cavada, Roberto; Pecheur, Charles
2003-01-01
This document reports on the activities carried out during a four-week visit of Roberto Cavada at the NASA Ames Research Center. The main goal was to test the practical applicability of the framework proposed, where a diagnosability problem is reduced to a Symbolic Model Checking problem. Section 2 contains a brief explanation of major techniques currently used in Symbolic Model Checking, and how these techniques can be tuned in order to obtain good performances when using Model Checking tools. Diagnosability is performed on large and structured models of real plants. Section 3 describes how these plants are modeled, and how models can be simplified to improve the performance of Symbolic Model Checkers. Section 4 reports scalability results. Three test cases are briefly presented, and several parameters and techniques have been applied on those test cases in order to produce comparison tables. Furthermore, comparison between several Model Checkers is reported. Section 5 summarizes the application of diagnosability verification to a real application. Several properties have been tested, and results have been highlighted. Finally, section 6 draws some conclusions, and outlines future lines of research.
System equivalent model mixing
NASA Astrophysics Data System (ADS)
Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis
2018-05-01
This paper introduces SEMM: a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM) frequency based models, either of numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques; namely DoF expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure; it will emphasize the practicality of the method.
MacLean, Adam L; Harrington, Heather A; Stumpf, Michael P H; Byrne, Helen M
2016-01-01
The last decade has seen an explosion in models that describe phenomena in systems medicine. Such models are especially useful for studying signaling pathways, such as the Wnt pathway. In this chapter we use the Wnt pathway to showcase current mathematical and statistical techniques that enable modelers to gain insight into (models of) gene regulation and generate testable predictions. We introduce a range of modeling frameworks, but focus on ordinary differential equation (ODE) models since they remain the most widely used approach in systems biology and medicine and continue to offer great potential. We present methods for the analysis of a single model, comprising applications of standard dynamical systems approaches such as nondimensionalization, steady state, asymptotic and sensitivity analysis, and more recent statistical and algebraic approaches to compare models with data. We present parameter estimation and model comparison techniques, focusing on Bayesian analysis and coplanarity via algebraic geometry. Our intention is that this (non-exhaustive) review may serve as a useful starting point for the analysis of models in systems medicine.
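The ODE-based workflow described in the chapter can be sketched with a toy two-species model: integrate it numerically and estimate its parameters from noisy observations by least squares. The equations and parameter values below are placeholders, not the Wnt pathway models, and the Bayesian and algebraic-geometry analyses discussed in the chapter are not reproduced.

```python
# Sketch of the ODE modeling workflow: integrate a toy activation/degradation
# model with solve_ivp and fit its parameters to noisy data by least squares.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, y, k_prod, k_act, k_deg):
    a, b = y                                   # a: upstream signal, b: downstream target
    return [k_prod - k_act * a, k_act * a - k_deg * b]

def simulate(params, t_eval):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [0.0, 0.0], t_eval=t_eval, args=tuple(params))
    return sol.y[1]                            # observe the downstream species only

rng = np.random.default_rng(0)
t_obs = np.linspace(0, 20, 30)
true_params = [1.0, 0.5, 0.3]
data = simulate(true_params, t_obs) + rng.normal(scale=0.05, size=t_obs.size)

fit = least_squares(lambda p: simulate(p, t_obs) - data, x0=[0.5, 0.5, 0.5], bounds=(0, 5))
print("estimated parameters:", np.round(fit.x, 3))
```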
Variable Complexity Optimization of Composite Structures
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
2002-01-01
The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.
Saxton, Michael J
2007-01-01
Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
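A minimal version of the obstructed random walk simulation described above might look like the following: obstacles occupy a fixed fraction of lattice sites, moves onto obstacles are rejected, and the mean-square displacement averaged over tracers shows the reduced effective diffusion coefficient. Lattice size, obstacle fraction, and run length are arbitrary choices for illustration.

```python
# Minimal Monte Carlo sketch of an obstructed random walk on a square lattice
# with periodic boundaries; blocked moves are rejected (the walker waits).
import numpy as np

rng = np.random.default_rng(0)
L, obstacle_fraction, n_walkers, n_steps = 200, 0.3, 500, 2000
obstacles = rng.random((L, L)) < obstacle_fraction
steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

free = np.argwhere(~obstacles)                        # start tracers on unobstructed sites
pos0 = free[rng.integers(len(free), size=n_walkers)]
pos_wrapped = pos0.copy()                             # periodic coordinates, for obstacle lookup
pos_unwrapped = pos0.astype(float).copy()             # true coordinates, for displacement

msd = np.zeros(n_steps)
for t in range(n_steps):
    move = steps[rng.integers(4, size=n_walkers)]
    trial = (pos_wrapped + move) % L
    accept = ~obstacles[trial[:, 0], trial[:, 1]]
    pos_wrapped = np.where(accept[:, None], trial, pos_wrapped)
    pos_unwrapped = np.where(accept[:, None], pos_unwrapped + move, pos_unwrapped)
    msd[t] = np.mean(np.sum((pos_unwrapped - pos0) ** 2, axis=1))

print("effective D (msd / 4t); free-lattice value is 0.25:", round(msd[-1] / (4 * n_steps), 3))
```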
Teaching Tip: Using Activity Diagrams to Model Systems Analysis Techniques: Teaching What We Preach
ERIC Educational Resources Information Center
Lending, Diane; May, Jeffrey
2013-01-01
Activity diagrams are used in Systems Analysis and Design classes as a visual tool to model the business processes of "as-is" and "to-be" systems. This paper presents the idea of using these same activity diagrams in the classroom to model the actual processes (practices and techniques) of Systems Analysis and Design. This tip…
1991-03-14
analyses of labeled model compounds, such as the protein RuBisCO (Ribulose Bisphosphate Carboxylase/Oxygenase). (3) Examine adsorption of... coverages were determined radiometrically using tritiated RuBisCO. Although natural thin films were detectable using Raman scattering techniques... optical, electrochemical, and radiometric techniques and the protein RuBisCO as a model adsorbate on titanium, copper, and iron, we have been able to
Khajouei, Hamid; Khajouei, Reza
2017-12-01
Appropriate knowledge, correct information, and relevant data are vital in medical diagnosis and treatment systems. Knowledge Management (KM) through its tools/techniques provides a pertinent framework for decision-making in healthcare systems. The objective of this study was to identify and prioritize the KM tools/techniques that apply to the hospital setting. This is a descriptive-survey study. Data were collected using a researcher-made questionnaire that was developed based on experts' opinions to select the appropriate tools/techniques from the 26 tools/techniques of the Asian Productivity Organization (APO) model. Questions were categorized into five steps of KM (identifying, creating, storing, sharing, and applying the knowledge) according to this model. The study population consisted of middle and senior managers of hospitals and managing directors of the Vice-Chancellor for Curative Affairs in Kerman University of Medical Sciences in Kerman, Iran. The data were analyzed in SPSS v.19 using the one-sample t-test. Twelve out of 26 tools/techniques of the APO model were identified as the tools applicable in hospitals. "Knowledge café" and "APO knowledge management assessment tool" with respective means of 4.23 and 3.7 were the most and the least applicable tools in the knowledge identification step. "Mentor-mentee scheme", as well as "voice and Voice over Internet Protocol (VOIP)" with respective means of 4.20 and 3.52 were the most and the least applicable tools/techniques in the knowledge creation step. "Knowledge café" and "voice and VOIP" with respective means of 3.85 and 3.42 were the most and the least applicable tools/techniques in the knowledge storage step. "Peer assist" and "voice and VOIP" with respective means of 4.14 and 3.38 were the most and the least applicable tools/techniques in the knowledge sharing step. Finally, "knowledge worker competency plan" and "knowledge portal" with respective means of 4.38 and 3.85 were the most and the least applicable tools/techniques in the knowledge application step. The results showed that 12 out of 26 tools in the APO model are appropriate for hospitals, of which 11 are significantly applicable, and "storytelling" is marginally applicable. In this study, the preferred tools/techniques for implementation of each of the five KM steps in hospitals are introduced. Copyright © 2017 Elsevier B.V. All rights reserved.
Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.
2010-01-01
The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single capillary, BTEX model. A facilitated transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange and plain MacCormack method are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
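Of the schemes compared above, the plain MacCormack predictor-corrector is the easiest to sketch; the example below applies it to 1D linear advection of a concentration pulse, the kind of convection-dominated transport that motivates the comparison, rather than to the BTEX equations themselves.

```python
# Textbook MacCormack predictor-corrector step for 1D linear advection on a
# periodic domain; this illustrates the scheme, not the BTEX model itself.
import numpy as np

nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
x = np.linspace(0, 1, nx, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)               # initial concentration pulse

c = a * dt / dx
n_steps = int(0.4 / dt)                          # advect the pulse for roughly t = 0.4
for _ in range(n_steps):
    u_star = u - c * (np.roll(u, -1) - u)                    # predictor: forward difference
    u = 0.5 * (u + u_star - c * (u_star - np.roll(u_star, 1)))  # corrector: backward difference

t_final = n_steps * dt
exact = np.exp(-200 * (((x - 0.3 - a * t_final + 0.5) % 1.0) - 0.5) ** 2)
print("max |error| after advection:", round(float(np.max(np.abs(u - exact))), 4))
```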
Development of an Intelligent Videogrammetric Wind Tunnel Measurement System
NASA Technical Reports Server (NTRS)
Graves, Sharon S.; Burner, Alpheus W.
2004-01-01
A videogrammetric technique developed at NASA Langley Research Center has been used at five NASA facilities at the Langley and Ames Research Centers for deformation measurements on a number of sting mounted and semispan models. These include high-speed research and transport models tested over a wide range of aerodynamic conditions including subsonic, transonic, and supersonic regimes. The technique, based on digital photogrammetry, has been used to measure model attitude, deformation, and sting bending. In addition, the technique has been used to study model injection rate effects and to calibrate and validate methods for predicting static aeroelastic deformations of wind tunnel models. An effort is currently underway to develop an intelligent videogrammetric measurement system that will be both useful and usable in large production wind tunnels while providing accurate data in a robust and timely manner. Designed to encode a higher degree of knowledge through computer vision, the system features advanced pattern recognition techniques to improve automated location and identification of targets placed on the wind tunnel model to be used for aerodynamic measurements such as attitude and deformation. This paper will describe the development and strategy of the new intelligent system that was used in a recent test at a large transonic wind tunnel.
A technique for evaluating the application of the pin-level stuck-at fault model to VLSI circuits
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Finelli, George B.
1987-01-01
Accurate fault models are required to conduct the experiments defined in validation methodologies for highly reliable fault-tolerant computers (e.g., computers with a probability of failure of 10 to the -9 for a 10-hour mission). Described is a technique by which a researcher can evaluate the capability of the pin-level stuck-at fault model to simulate true error behavior symptoms in very large scale integrated (VLSI) digital circuits. The technique is based on a statistical comparison of the error behavior resulting from faults applied at the pin-level of and internal to a VLSI circuit. As an example of an application of the technique, the error behavior of a microprocessor simulation subjected to internal stuck-at faults is compared with the error behavior which results from pin-level stuck-at faults. The error behavior is characterized by the time between errors and the duration of errors. Based on this example data, the pin-level stuck-at fault model is found to deliver less than ideal performance. However, with respect to the class of faults which cause a system crash, the pin-level, stuck-at fault model is found to provide a good modeling capability.
An Approach to the Evaluation of Hypermedia.
ERIC Educational Resources Information Center
Knussen, Christina; And Others
1991-01-01
Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…
40 CFR 68.28 - Alternative release scenario analysis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... overfilling and spill, or overpressurization and venting through relief valves or rupture disks; and (v... Consequence Analysis Guidance or any commercially or publicly available air dispersion modeling techniques, provided the techniques account for the specified modeling conditions and are recognized by industry as...
New Results in Software Model Checking and Analysis
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.
2010-01-01
This introductory article surveys new techniques, supported by automated tools, for the analysis of software to ensure reliability and safety. Special focus is on model checking techniques. The article also introduces the five papers that are enclosed in this special journal volume.
Confidence Intervals from Realizations of Simulated Nuclear Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, W.; Ratkiewicz, A.; Ressler, J. J.
2017-09-28
Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the 235U(n, f) and 239Pu (n, f) cross sections.
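Techniques (1) and (2) can be sketched generically: draw correlated realizations of the uncertain inputs from a multivariate normal, propagate them through the model, and read a confidence interval off the output percentiles. The two-parameter "model" and covariance values below are illustrative placeholders, not evaluated cross sections or an actual keff calculation.

```python
# Sketch of Monte-Carlo input realizations and an output confidence interval:
# correlated samples of uncertain inputs are propagated through a stand-in model.
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([1.20, 1.75])                          # nominal input parameters (illustrative)
cov = np.array([[0.02**2, 0.6 * 0.02 * 0.03],
                [0.6 * 0.02 * 0.03, 0.03**2]])         # uncertainties with correlation 0.6

def model(p):                                          # stand-in for the keff calculation
    return 0.55 * p[..., 0] + 0.18 * p[..., 1] ** 2

samples = rng.multivariate_normal(mean, cov, size=100_000)
outputs = model(samples)
lo, hi = np.percentile(outputs, [2.5, 97.5])
print(f"nominal = {model(mean):.4f}, 95% interval = [{lo:.4f}, {hi:.4f}]")
```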
Chronology of DIC technique based on the fundamental mathematical modeling and dehydration impact.
Alias, Norma; Saipol, Hafizah Farhah Saipan; Ghani, Asnida Che Abd
2014-12-01
A chronology of mathematical models for the heat and mass transfer equations is proposed for the prediction of moisture and temperature behavior during drying using the DIC (Détente Instantanée Contrôlée), or instant controlled pressure drop, technique. The DIC technique has the potential to become a widely used dehydration method for high-value foods, preserving nutrition and the best possible quality for food storage. The modeling begins with a regression model, followed by 2D Fick's and Fourier's parabolic equations and a 2D elliptic-parabolic equation in a rectangular slice. The models neglect shrinkage and radiation effects. Simulations of the parabolic and elliptic-parabolic heat and mass transfer equations using numerical methods based on the finite difference method (FDM) are illustrated. Intel® Core™ 2 Duo processors with the Linux operating system and the C programming language were used as the computational platform for the simulation. Qualitative and quantitative differences between the DIC technique and conventional drying methods are presented as a comparison.
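As a rough illustration of the FDM simulations mentioned above, the sketch below advances a 2D Fick's-law (parabolic) moisture-diffusion equation on a rectangular slice with an explicit scheme; the grid, diffusivity, and boundary moisture are assumed values, not those of the DIC study.

```python
# Hedged sketch: explicit finite-difference solution of a 2D parabolic
# diffusion equation (Fick's second law) on a rectangular slice.
import numpy as np

nx, ny = 50, 30
dx = dy = 1e-3                       # grid spacing, m (assumed)
D = 1e-8                             # effective moisture diffusivity, m^2/s (assumed)
dt = 0.2 * dx**2 / D                 # time step well inside the explicit stability limit

M = np.full((ny, nx), 0.8)           # initial moisture content, dry basis (assumed)
M_surface = 0.1                      # surface moisture after the pressure drop (assumed)
M[0, :] = M[-1, :] = M[:, 0] = M[:, -1] = M_surface

for _ in range(2000):
    lap = ((np.roll(M, 1, 0) + np.roll(M, -1, 0) - 2 * M) / dy**2
           + (np.roll(M, 1, 1) + np.roll(M, -1, 1) - 2 * M) / dx**2)
    M = M + D * dt * lap
    M[0, :] = M[-1, :] = M[:, 0] = M[:, -1] = M_surface   # Dirichlet boundary

print(f"mean moisture after the simulated hold: {M.mean():.3f}")
```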
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
NASA Technical Reports Server (NTRS)
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and Noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method, as well as the associated approaches, is illustrated through application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls-Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial and regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified.
The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
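As a hedged illustration of the response-surface surrogates used in this approach, the sketch below fits a second-order polynomial to responses sampled on a small face-centered composite design; the two normalized factors and the response values are invented, not taken from the turbofan case study.

```python
# Hedged sketch: fit a quadratic response surface
#   y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
# to designed-experiment data and use it as a cheap surrogate.
import numpy as np

X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
y = np.array([12.1, 14.0, 11.5, 15.2, 12.8, 14.9, 12.6, 13.4, 13.1])  # assumed responses

x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(p1, p2):
    return coef @ np.array([1.0, p1, p2, p1 * p2, p1**2, p2**2])

print("fitted coefficients:", np.round(coef, 3))
print("surrogate prediction at (0.5, -0.5):", round(float(surrogate(0.5, -0.5)), 3))
```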
The development of an audit technique to assess the quality of safety barrier management.
Guldenmund, Frank; Hale, Andrew; Goossens, Louis; Betten, Jeroen; Duijm, Nijs Jan
2006-03-31
This paper describes the development of a management model to control barriers devised to prevent major hazard scenarios. Additionally, an audit technique is explained that assesses the quality of such a management system. The final purpose of the audit technique is to quantify those aspects of the management system that have a direct impact on the reliability and effectiveness of the barriers and, hence, the probability of the scenarios involved. First, an outline of the management model is given and its elements are explained. Then, the development of the audit technique is described. Because the audit technique uses actual major hazard scenarios and barriers within these as its focus, the technique achieves a concreteness and clarity that many other techniques often lack. However, this strength is also its limitation, since the full safety management system is not covered with the technique. Finally, some preliminary experiences obtained from several test sites are compiled and discussed.
Determining Kinetic Parameters for Isothermal Crystallization of Glasses
NASA Technical Reports Server (NTRS)
Ray, C. S.; Zhang, T.; Reis, S. T.; Brow, R. K.
2006-01-01
Non-isothermal crystallization techniques are frequently used to determine the kinetic parameters for crystallization in glasses. These techniques are experimentally simple and quick compared to the isothermal techniques. However, the analytical models used for non-isothermal data analysis, originally developed for describing isothermal transformation kinetics, are fundamentally flawed. The present paper describes a technique for determining the kinetic parameters for isothermal crystallization in glasses, which eliminates most of the common problems that generally make the studies of isothermal crystallization laborious and time consuming. In this technique, the volume fraction of glass that is crystallized as a function of time during an isothermal hold was determined using differential thermal analysis (DTA). The crystallization parameters for the lithium-disilicate (Li2O.2SiO2) model glass were first determined and compared to the same parameters determined by other techniques to establish the accuracy and usefulness of the present technique. This technique was then used to describe the crystallization kinetics of a complex Ca-Sr-Zn-silicate glass developed for sealing solid oxide fuel cells.
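The abstract does not state the kinetic law used to reduce the DTA data, so the sketch below assumes JMAK (Avrami) kinetics, x(t) = 1 - exp(-(kt)^n), and extracts n and k from an assumed crystallized-fraction curve by a linearized least-squares fit; the data points are illustrative only.

```python
# Hedged sketch: Avrami-type analysis of an isothermal crystallized fraction x(t).
# Linearization: ln(-ln(1 - x)) = n*ln(t) + n*ln(k)
import numpy as np

t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0])        # minutes (assumed)
x = np.array([0.05, 0.15, 0.42, 0.80, 0.93, 0.985])      # crystallized fraction (assumed)

Y = np.log(-np.log(1.0 - x))
A = np.column_stack([np.log(t), np.ones_like(t)])
(n, intercept), *_ = np.linalg.lstsq(A, Y, rcond=None)
k = np.exp(intercept / n)

print(f"Avrami exponent n = {n:.2f}, rate constant k = {k:.4f} 1/min")
```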
EXPERIMENTAL MODELLING OF AORTIC ANEURYSMS
Doyle, Barry J; Corbett, Timothy J; Cloonan, Aidan J; O’Donnell, Michael R; Walsh, Michael T; Vorp, David A; McGloughlin, Timothy M
2009-01-01
A range of silicone rubbers were created based on existing commercially available materials. These silicones were designed to be visually different from one another and have distinct material properties, in particular, ultimate tensile strengths and tear strengths. In total, eleven silicone rubbers were manufactured, with the materials designed to have a range of increasing tensile strengths from approximately 2-4 MPa and increasing tear strengths from approximately 0.45-0.7 N/mm. The variations in silicones were detected using a standard colour analysis technique. Calibration curves were then created relating colour intensity to individual material properties. All eleven materials were characterised and a first-order Ogden strain energy function was applied. Material coefficients were determined and examined for effectiveness. Six idealised abdominal aortic aneurysm models were also created using the two base materials of the study, with a further model created using a new mixing technique to create a rubber model with randomly assigned material properties. These models were then examined using videoextensometry and compared to numerical results. Colour analysis revealed a statistically significant linear relationship (p<0.0009) with both tensile strength and tear strength, allowing material strength to be determined using a non-destructive experimental technique. The effectiveness of this technique was assessed by comparing predicted material properties with experimentally measured values, with good agreement in the results. Videoextensometry and numerical modelling revealed minor percentage differences, with all results achieving significance (p<0.0009). This study has successfully designed and developed a range of silicone rubbers that have unique colour intensities and material strengths. Strengths can be readily determined using a non-destructive analysis technique with proven effectiveness. These silicones may further aid an improved understanding of the biomechanical behaviour of aneurysms using experimental techniques. PMID:19595622
Ullattuthodi, Sujana; Cherian, Kandathil Phillip; Anandkumar, R; Nambiar, M Sreedevi
2017-01-01
This in vitro study seeks to evaluate and compare the marginal and internal fit of cobalt-chromium copings fabricated using the conventional and direct metal laser sintering (DMLS) techniques. A master model of a prepared molar tooth was made using cobalt-chromium alloy. A silicone impression of the master model was made, and thirty standardized working models were then produced: twenty working models for the conventional lost-wax technique and ten working models for the DMLS technique. A total of twenty metal copings were fabricated using the two production techniques, conventional lost-wax and DMLS, with ten samples in each group. The conventional and DMLS copings were cemented to the working models using glass ionomer cement. The marginal gap of the copings was measured at four predetermined points. The dies with the cemented copings were sectioned in a standardized manner with a heavy-duty lathe. Each sectioned sample was then analyzed for the internal gap between the die and the metal coping using a metallurgical microscope. Digital photographs were taken at ×50 magnification and analyzed using measurement software. Statistical analysis was done by unpaired t-test and analysis of variance (ANOVA). The results of this study reveal that no significant difference was present in the marginal gap of conventional and DMLS copings (P > 0.05) by ANOVA. The mean internal gap of the DMLS copings was significantly greater than that of the conventional copings (P < 0.05). Within the limitations of this in vitro study, it was concluded that the internal fit of the conventional copings was superior to that of the DMLS copings. The marginal fit of the copings fabricated by the two techniques showed no significant difference.
Sakr, Sherif; Elshawi, Radwa; Ahmed, Amjad M; Qureshi, Waqas T; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J; Al-Mallah, Mouaz H
2017-12-19
Prior studies have demonstrated that cardiorespiratory fitness (CRF) is a strong marker of cardiovascular health. Machine learning (ML) can enhance the prediction of outcomes through classification techniques that classify the data into predetermined categories. The aim of this study is to present an evaluation and comparison of how machine learning techniques can be applied to medical records of cardiorespiratory fitness and how the various techniques differ in terms of their capability to predict medical outcomes (e.g. mortality). We use data of 34,212 patients free of known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Seven machine learning classification techniques were evaluated: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). To handle the imbalanced dataset, the Synthetic Minority Over-Sampling Technique (SMOTE) was used. Two sets of experiments were conducted, with and without the SMOTE sampling technique. On average over the different evaluation metrics, the SVM classifier showed the lowest performance, while other models such as BN, BC and DT performed better. The RF classifier showed the best performance (AUC = 0.97) among all models trained using SMOTE sampling. The results show that ML techniques can vary significantly in performance across the different evaluation metrics. A more complex ML model also does not necessarily achieve higher prediction accuracy. The prediction performance of all models trained with SMOTE is much better than the performance of models trained without SMOTE. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness data.
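A minimal sketch of the SMOTE-plus-Random-Forest workflow described above is given below, with synthetic data standing in for the (non-public) treadmill records; feature counts, class imbalance, and hyperparameters are assumptions.

```python
# Hedged sketch: oversample the minority class with SMOTE, train a Random Forest,
# and report the test AUC. Requires scikit-learn and imbalanced-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)      # imbalanced outcome, e.g. mortality
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample training set only
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.3f}")
```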
Material model validation for laser shock peening process simulation
NASA Astrophysics Data System (ADS)
Amarchinta, H. K.; Grandhi, R. V.; Langer, K.; Stargel, D. S.
2009-01-01
Advanced mechanical surface enhancement techniques have been used successfully to increase the fatigue life of metallic components. These techniques impart deep compressive residual stresses into the component to counter potentially damage-inducing tensile stresses generated under service loading. Laser shock peening (LSP) is an advanced mechanical surface enhancement technique used predominantly in the aircraft industry. To reduce costs and make the technique available on a large-scale basis for industrial applications, simulation of the LSP process is required. Accurate simulation of the LSP process is a challenging task, because the process has many parameters such as laser spot size, pressure profile and material model that must be precisely determined. This work focuses on investigating the appropriate material model that could be used in simulation and design. In the LSP process the material is subjected to strain rates of 10^6 s^-1, which is very high compared with conventional strain rates. The importance of an accurate material model increases because the material behaves significantly differently at such high strain rates. This work investigates the effect of multiple nonlinear material models for representing the elastic-plastic behavior of materials. Elastic-perfectly-plastic, Johnson-Cook and Zerilli-Armstrong models are used, and the performance of each model is compared with available experimental results.
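For reference, the Johnson-Cook model named above gives the flow stress as sigma = (A + B*eps^n)(1 + C*ln(epsdot/epsdot0))(1 - T*^m). The sketch below evaluates it at an LSP-like strain rate; the coefficients are generic aluminium-alloy-like values chosen only for illustration, not those used in the study.

```python
# Hedged sketch: Johnson-Cook flow stress at quasi-static vs. LSP strain rates.
import numpy as np

A, B, n_exp, C, m = 350e6, 420e6, 0.34, 0.015, 1.0   # assumed coefficients (Pa, Pa, -, -, -)
epsdot0 = 1.0                                        # reference strain rate, 1/s
T_room, T_melt = 293.0, 900.0                        # K (assumed)

def jc_stress(eps, epsdot, T):
    T_star = (T - T_room) / (T_melt - T_room)
    return (A + B * eps**n_exp) * (1 + C * np.log(epsdot / epsdot0)) * (1 - T_star**m)

for rate in (1e-3, 1.0, 1e6):
    sigma = jc_stress(0.05, rate, 293.0)
    print(f"strain rate {rate:8.0e} 1/s -> flow stress {sigma / 1e6:6.0f} MPa")
```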
Cockpit System Situational Awareness Modeling Tool
NASA Technical Reports Server (NTRS)
Keller, John; Lebiere, Christian; Shay, Rick; Latorella, Kara
2004-01-01
This project explored the possibility of predicting pilot situational awareness (SA) using human performance modeling techniques for the purpose of evaluating developing cockpit systems. The Improved Performance Research Integration Tool (IMPRINT) was combined with the Adaptive Control of Thought-Rational (ACT-R) cognitive modeling architecture to produce a tool that can model both the discrete tasks of pilots and the cognitive processes associated with SA. The techniques for using this tool to predict SA were demonstrated using the newly developed Aviation Weather Information (AWIN) system. By providing an SA prediction tool to cockpit system designers, cockpit concepts can be assessed early in the design process while providing a cost-effective complement to the traditional pilot-in-the-loop experiments and data collection techniques.
Automatic welding detection by an intelligent tool pipe inspection
NASA Astrophysics Data System (ADS)
Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.
2015-07-01
This work provides a model based on machine learning techniques for weld recognition, using signals obtained through an in-line inspection tool called a “smart pig” in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets and the performance was measured with cross-validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.
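A hedged sketch of the classification stage follows: feature scaling, attribute selection, an SVM classifier, and cross-validated ROC-AUC scoring, with synthetic features standing in for the pre-processed “smart pig” signals.

```python
# Hedged sketch: attribute selection + SVM with cross-validated ROC-AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1200, n_features=40, n_informative=10,
                           random_state=1)       # 1 = weld present, 0 = no weld (assumed)

pipe = make_pipeline(StandardScaler(),
                     SelectKBest(f_classif, k=10),   # attribute selection
                     SVC(kernel="rbf"))

scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```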
Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A
2010-10-11
We develop a theoretical model that can be used to predict the stimulated Brillouin scattering (SBS) threshold in optical fibers that arises through the Brillouin pump recycling technique. Simulation results obtained from our model are in close agreement with our experimental results. The developed model utilizes single-mode optical fibers of different lengths as the Brillouin gain media. For a 5-km-long single-mode fiber, the calculated threshold power for SBS is about 16 mW for the conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The reduction of the SBS threshold is due to longer interaction lengths between the Brillouin pump and the Stokes wave.
Fechter, Dominik; Storch, Ilse
2014-01-01
Due to legislative protection, many species, including large carnivores, are currently recolonizing Europe. To address the impending human-wildlife conflicts in advance, predictive habitat models can be used to determine potentially suitable habitat and areas likely to be recolonized. As field data are often limited, quantitative rule based models or the extrapolation of results from other studies are often the techniques of choice. Using the wolf (Canis lupus) in Germany as a model for habitat generalists, we developed a habitat model based on the location and extent of twelve existing wolf home ranges in Eastern Germany, current knowledge on wolf biology, different habitat modeling techniques and various input data to analyze ten different input parameter sets and address the following questions: (1) How do a priori assumptions and different input data or habitat modeling techniques affect the abundance and distribution of potentially suitable wolf habitat and the number of wolf packs in Germany? (2) In a synthesis across input parameter sets, what areas are predicted to be most suitable? (3) Are existing wolf pack home ranges in Eastern Germany consistent with current knowledge on wolf biology and habitat relationships? Our results indicate that depending on which assumptions on habitat relationships are applied in the model and which modeling techniques are chosen, the amount of potentially suitable habitat estimated varies greatly. Depending on a priori assumptions, Germany could accommodate between 154 and 1769 wolf packs. The locations of the existing wolf pack home ranges in Eastern Germany indicate that wolves are able to adapt to areas densely populated by humans, but are limited to areas with low road densities. Our analysis suggests that predictive habitat maps in general, should be interpreted with caution and illustrates the risk for habitat modelers to concentrate on only one selection of habitat factors or modeling technique. PMID:25029506
GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique
NASA Astrophysics Data System (ADS)
Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.
2015-12-01
Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models, by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil, covering ~6000 km² of undulating relief with heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example for the study area show the local geoid model computed by the GRAVTool package using 1377 terrestrial gravity observations, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was determined by the geometric leveling technique supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
Machine Learning Techniques for Global Sensitivity Analysis in Climate Models
NASA Astrophysics Data System (ADS)
Safta, C.; Sargsyan, K.; Ricciuto, D. M.
2017-12-01
Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014), we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g., model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g., neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
The GOSSIP on the MCV V347 Pavonis
NASA Astrophysics Data System (ADS)
Potter, S. B.; Cropper, Mark; Hakala, P. J.
Modelling of the polarized cyclotron emission from magnetic cataclysmic variables (MCVs) has been a powerful technique for determining the structure of the accretion zones on the white dwarf. Until now, this has been achieved by constructing emission regions (for example arcs and spots) put in by hand, in order to recover the polarized emission. These models were all inferred indirectly from arguments based on polarization and X-ray light curves. Potter, Hakala & Cropper (1998) presented a technique (Stokes imaging) which objectively and analytically models the polarized emission to recover the structure of the cyclotron emission region(s) in MCVs. We demonstrate this technique with the aid of a test case, then we apply the technique to polarimetric observations of the AM Her system V347 Pav. As the system parameters of V347 Pav (for example its inclination) have not been well determined, we describe an extension to the Stokes imaging technique which also searches the system parameter space (GOSSIP).
NASA Astrophysics Data System (ADS)
Drapeau, L.; Mangiarotti, S.; Le Jean, F.; Gascoin, S.; Jarlan, L.
2014-12-01
The global modeling technique provides a way to obtain ordinary differential equations from a single time series [1]. This technique, initiated in the 1990s, has been applied successfully to numerous theoretical and experimental systems, and more recently to environmental systems [2,3]. Here the technique is applied to seasonal snow cover area in the Pyrenees mountains (Europe) and Mount Lebanon (Mediterranean region). The snowpack evolution is complex because it results from a combination of processes driven by physiography (elevation, slope, land cover...) and meteorological variables (precipitation, temperature, wind speed...), which are highly heterogeneous in such regions. Satellite observations in visible bands offer a powerful tool to monitor snow cover areas at global scale, over a large range of resolutions. Although this observable does not directly inform about snow water equivalent, its dynamical behavior strongly relies on it. Therefore, snow cover area is likely to be a good proxy of the global dynamics, and the global modeling technique a well-adapted approach. The MOD10A2 product (500 m) generated from MODIS by NASA is used after a pretreatment is applied to minimize cloud effects. The global modeling technique is then applied using two packages [4,5]. The analysis is performed with two time series for the whole period (2000-2012) and year by year. Low-dimensional chaotic models are obtained in many cases. Such models provide a strong argument for chaos since they involve the two necessary conditions in a synthetic way: determinism and strong sensitivity to initial conditions. The comparison of models suggests important non-stationarities at the interannual scale which prevent the detection of long-term changes. [1] Letellier et al. 2009. Frequently asked questions about global modeling. Chaos, 19, 023103. [2] Maquet et al. 2007. Global models from the Canadian lynx cycles as a direct evidence for chaos in real ecosystems. Journal of Mathematical Biology, 55(1), 21-39. [3] Mangiarotti et al. 2014. Two chaotic global models for cereal crops cycles observed from satellite in Northern Morocco. Chaos, 24, 023130. [4] Mangiarotti et al. 2012. Polynomial search and global modelling: two algorithms for modeling chaos. Physical Review E, 86(4), 046205. [5] http://cran.r-project.org/web/packages/PoMoS/index.html.
Yaguchi, A; Nagase, K; Ishikawa, M; Iwasaka, T; Odagaki, M; Hosaka, H
2006-01-01
Computer simulation and myocardial cell models were used to evaluate a low-energy defibrillation technique. A generated spiral wave, considered to be a mechanism of fibrillation, and fibrillation itself were investigated using two myocardial sheet models: a two-dimensional computer simulation model and a two-dimensional experimental model. A new defibrillation technique is desired that has few of the side effects on cardiac muscle induced by the current passing through the patient's body. The purpose of the present study is to conduct a basic investigation into an efficient defibrillation method. To evaluate the defibrillation method, the propagation of excitation in the myocardial sheet is measured during the normal state and during fibrillation. The advantages of the low-energy defibrillation technique are then discussed based on the stimulation timing.
NASA Technical Reports Server (NTRS)
Yam, Yeung; Johnson, Timothy L.; Lang, Jeffrey H.
1987-01-01
A model reduction technique based on aggregation with respect to sensor and actuator influence functions rather than modes is presented for large systems of coupled second-order differential equations. Perturbation expressions which can predict the effects of spillover on both the reduced-order plant model and the neglected plant model are derived. For the special case of collocated actuators and sensors, these expressions lead to the derivation of constraints on the controller gains that are, given the validity of the perturbation technique, sufficient to guarantee the stability of the closed-loop system. A case study demonstrates the derivation of stabilizing controllers based on the present technique. The use of control and observation synthesis in modifying the dimension of the reduced-order plant model is also discussed. A numerical example is provided for illustration.
Advanced Atmospheric Ensemble Modeling Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, R.; Chiswell, S.; Kurzeja, R.
Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension to work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied, a coastal release (SF6) and an inland release (Freon) which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce required computing resources for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release where the spatial and temporal differences due to interior valley heating led to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data is assimilated into the simulation, and enhances SRNL’s capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
NASA Technical Reports Server (NTRS)
Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.
1992-01-01
A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
Streamflow characterization using functional data analysis of the Potomac River
NASA Astrophysics Data System (ADS)
Zelmanow, A.; Maslova, I.; Ticlavilca, A. M.; McKee, M.
2013-12-01
Flooding and droughts are extreme hydrological events that affect the United States economically and socially. The severity and unpredictability of flooding has caused billions of dollars in damage and the loss of lives in the eastern United States. In this context, there is an urgent need to build a firm scientific basis for adaptation by developing and applying new modeling techniques for accurate streamflow characterization and reliable hydrological forecasting. The goal of this analysis is to use numerical streamflow characteristics in order to classify, model, and estimate the likelihood of extreme events in the eastern United States, mainly the Potomac River. Functional data analysis techniques are used to study yearly streamflow patterns, with the extreme streamflow events characterized via functional principal component analysis. These methods are merged with more classical techniques such as cluster analysis, classification analysis, and time series modeling. The developed functional data analysis approach is used to model continuous streamflow hydrographs. The forecasting potential of this technique is explored by incorporating climate factors to produce a yearly streamflow outlook.
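A minimal sketch of the functional-PCA step is given below, using an SVD of a year-by-day flow matrix as a discrete stand-in for functional principal component analysis; the synthetic flows are not Potomac data, and the curve-smoothing step used in a full functional analysis is omitted.

```python
# Hedged sketch: extract leading principal component functions from yearly
# streamflow curves and score each year on them.
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(365)
n_years = 30
seasonal = 200 + 150 * np.sin(2 * np.pi * (days - 80) / 365)       # assumed mean pattern
flows = seasonal + rng.normal(0, 40, size=(n_years, 365))          # synthetic daily flows

centered = flows - flows.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

explained = s**2 / np.sum(s**2)
pc_scores = U[:, :2] * s[:2]            # yearly scores on the first two components
print("variance explained by PC1, PC2:", np.round(explained[:2], 3))
print("years with the most extreme PC1 scores:", np.argsort(np.abs(pc_scores[:, 0]))[-3:])
```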
Aeroelastic Deformation Measurements of Flap, Gap, and Overhang on a Semispan Model
NASA Technical Reports Server (NTRS)
Burner, A. W.; Liu, Tianshu; Garg, Sanjay; Ghee, Terence A.; Taylor, Nigel J.
2000-01-01
Single-camera, single-view videogrammetry has been used to determine static aeroelastic deformation of a slotted flap configuration on a semispan model at the National Transonic Facility (NTF). Deformation was determined by comparing wind-off to wind-on spatial data from targets placed on the main element, shroud, and flap of the model. Digitized video images from a camera were recorded and processed to automatically determine target image plane locations that were then corrected for sensor, lens, and frame grabber spatial errors. The videogrammetric technique has been established at NASA facilities as the technique of choice when high-volume static aeroelastic data with minimum impact on data taking is required. The primary measurement at the NTF with this technique in the past has been the measurement of static aeroelastic wing twist on full span models. The first results using the videogrammetric technique for the measurement of component deformation during semispan testing at the NTF are presented.
Link-prediction to tackle the boundary specification problem in social network surveys
De Wilde, Philippe; Buarque de Lima-Neto, Fernando
2017-01-01
Diffusion processes in social networks often cause the emergence of global phenomena from individual behavior within a society. The study of those global phenomena and the simulation of those diffusion processes frequently require a good model of the global network. However, survey data and data from online sources are often restricted to single social groups or features, such as age groups, single schools, companies, or interest groups. Hence, a modeling approach is required that extrapolates the locally restricted data to a global network model. We tackle this Missing Data Problem using Link-Prediction techniques from social network research, network generation techniques from the area of Social Simulation, as well as a combination of both. We found that techniques employing less information may be more adequate to solve this problem, especially when data granularity is an issue. We validated the network models created with our techniques on a number of real-world networks, investigating degree distributions as well as the likelihood of links given the geographical distance between two nodes. PMID:28426826
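As a hedged illustration of the link-prediction ingredient, the sketch below scores a few unobserved node pairs with two classical indices (Jaccard and Adamic-Adar) on a toy graph that stands in for a single observed social group; the paper's actual predictors and network-generation techniques may differ.

```python
# Hedged sketch: classical link-prediction scores on candidate (non-)edges.
import networkx as nx

G = nx.karate_club_graph()                 # stand-in for one observed social group
candidates = list(nx.non_edges(G))[:3]     # a few unobserved node pairs

for u, v, score in nx.jaccard_coefficient(G, candidates):
    print(f"Jaccard({u},{v}) = {score:.3f}")
for u, v, score in nx.adamic_adar_index(G, candidates):
    print(f"Adamic-Adar({u},{v}) = {score:.3f}")
```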
Advanced 3D Characterization and Reconstruction of Reactor Materials FY16 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fromm, Bradley; Hauch, Benjamin; Sridharan, Kumar
2016-12-01
A coordinated effort to link advanced materials characterization methods and computational modeling approaches is critical to future success for understanding and predicting the behavior of reactor materials that operate at extreme conditions. The difficulty and expense of working with nuclear materials have inhibited the use of modern characterization techniques on this class of materials. Likewise, mesoscale simulation efforts have been impeded due to insufficient experimental data necessary for initialization and validation of the computer models. The objective of this research is to develop methods to integrate advanced materials characterization techniques developed for reactor materials with state-of-the-art mesoscale modeling and simulation tools. Research to develop broad-ion-beam sample preparation, high-resolution electron backscatter diffraction, and digital microstructure reconstruction techniques, as well as methods for integrating these techniques into mesoscale modeling tools, is detailed. Results for both irradiated and un-irradiated reactor materials are presented for FY14-FY16 and final remarks are provided.
Foster, Katherine T; Beltz, Adriene M
2018-08-01
Ambulatory assessment (AA) methodologies have the potential to increase understanding and treatment of addictive behavior in seemingly unprecedented ways, due in part, to their emphasis on intensive repeated assessments of an individual's addictive behavior in context. But, many analytic techniques traditionally applied to AA data - techniques that average across people and time - do not fully leverage this potential. In an effort to take advantage of the individualized, temporal nature of AA data on addictive behavior, the current paper considers three underutilized person-oriented analytic techniques: multilevel modeling, p-technique, and group iterative multiple model estimation. After reviewing prevailing analytic techniques, each person-oriented technique is presented, AA data specifications are mentioned, an example analysis using generated data is provided, and advantages and limitations are discussed; the paper closes with a brief comparison across techniques. Increasing use of person-oriented techniques will substantially enhance inferences that can be drawn from AA data on addictive behavior and has implications for the development of individualized interventions. Copyright © 2017. Published by Elsevier Ltd.
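A minimal sketch of the first technique (multilevel modeling) applied to AA-style data follows; the outcome ("craving"), the predictor ("stress"), and the generated observations are hypothetical and only illustrate repeated measures nested within persons with a random intercept.

```python
# Hedged sketch: random-intercept multilevel model for intensive repeated measures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_persons, n_obs = 40, 30
person = np.repeat(np.arange(n_persons), n_obs)
stress = rng.normal(size=n_persons * n_obs)
person_intercept = rng.normal(0, 1.0, n_persons)[person]
craving = 2.0 + 0.5 * stress + person_intercept + rng.normal(0, 0.8, n_persons * n_obs)

df = pd.DataFrame({"person": person, "stress": stress, "craving": craving})
fit = smf.mixedlm("craving ~ stress", df, groups=df["person"]).fit()
print(fit.summary())
```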
NASA Astrophysics Data System (ADS)
Lucifredi, A.; Mazzieri, C.; Rossi, M.
2000-05-01
Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by changes in operating conditions and those due to the onset and progression of failures and misoperations. The paper aims to identify the best technique to be adopted for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, linear multiple regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified form of multiple linear regression that represents the monitored variable as a linear combination of the process variables in such a way as to minimize the variance of the estimation error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique is not affected by some problems common to the other two models, e.g. the requirement of a large amount of data for tuning (both for training the neural network and for defining the optimum plane for the multiple regression), not only in the system start-up phase but also after a routine maintenance operation involving the substitution of machinery components that directly affect the observed variable, or the need for different models to describe satisfactorily the different operating ranges of the plant. The monitoring system based on the kriging statistical technique overcomes these difficulties: it does not require a large amount of data to be tuned and is immediately operational (given two points, a third can be immediately estimated), and the model follows the system without adapting itself to it. The results of the experimentation performed indicate that a model based on a neural network or on linear multiple regression is not optimal, and that a different approach is necessary to reduce the amount of work during the learning phase, using, when available, all the information stored during the initial phase of the plant to build the reference baseline and elaborating the raw information where appropriate. A mixed approach using the kriging statistical technique and neural network techniques could optimise the result.
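Since kriging is essentially Gaussian-process regression, the sketch below uses a GP as a stand-in for the dynamic kriging estimator: the monitored variable is predicted from two process parameters, and a new measurement is flagged if it falls outside the prediction band. The variable names, data, and thresholds are invented for illustration.

```python
# Hedged sketch: GP-regression ("kriging-like") baseline for condition monitoring.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
X = rng.uniform([10.0, 0.2], [100.0, 1.0], size=(60, 2))            # [load MW, guide-vane opening]
y = 2.0 + 0.03 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.1, 60)   # vibration, mm/s (synthetic)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[20.0, 0.3]) + WhiteKernel(0.01),
                              normalize_y=True).fit(X, y)

x_new = np.array([[55.0, 0.6]])
pred, std = gp.predict(x_new, return_std=True)
measured = 4.9
print(f"expected {pred[0]:.2f} +/- {2 * std[0]:.2f} mm/s, measured {measured:.2f}")
print("alarm" if abs(measured - pred[0]) > 2 * std[0] else "deviation within normal band")
```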
Optimisation of phase ratio in the triple jump using computer simulation.
Allen, Sam J; King, Mark A; Yeadon, M R Fred
2016-04-01
The triple jump is an athletic event comprising three phases in which the optimal proportion of each phase to the total distance jumped, termed the phase ratio, is unknown. This study used a whole-body torque-driven computer simulation model of all three phases of the triple jump to investigate optimal technique. The technique of the simulation model was optimised by varying torque generator activation parameters using a Genetic Algorithm in order to maximise total jump distance, resulting in a hop-dominated technique (35.7%:30.8%:33.6%) and a distance of 14.05 m. Optimisations were then run with penalties forcing the model to adopt hop and jump phases of 33%, 34%, 35%, 36%, and 37% of the optimised distance, resulting in total distances of 13.79 m, 13.87 m, 13.95 m, 14.05 m, and 14.02 m; and 14.01 m, 14.02 m, 13.97 m, 13.84 m, and 13.67 m, respectively. These results indicate that in this subject-specific case there is a plateau in optimum technique encompassing balanced and hop-dominated techniques, but that a jump-dominated technique is associated with a decrease in performance. Hop-dominated techniques are associated with higher forces than jump-dominated techniques; therefore optimal phase ratio may be related to a combination of strength and approach velocity. Copyright © 2016 Elsevier B.V. All rights reserved.
Agent-based modeling: case study in cleavage furrow models
Mogilner, Alex; Manhart, Angelika
2016-01-01
The number of studies in cell biology in which quantitative models accompany experiments has been growing steadily. Roughly, mathematical and computational techniques of these models can be classified as “differential equation based” (DE) or “agent based” (AB). Recently AB models have started to outnumber DE models, but understanding of AB philosophy and methodology is much less widespread than familiarity with DE techniques. Here we use the history of modeling a fundamental biological problem—positioning of the cleavage furrow in dividing cells—to explain how and why DE and AB models are used. We discuss differences, advantages, and shortcomings of these two approaches. PMID:27811328
Accurate lithography simulation model based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki
2017-07-01
Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in a realistic time, a compact resist model is commonly used; the model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to determine a complicated non-linear model function. However, it is difficult to decide on an appropriate function manually because there are many options. This paper proposes a new compact resist model using convolutional neural networks (CNNs), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
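A hedged sketch of the kind of CNN that could serve as a compact resist model is shown below: a small network mapping an image clip to a single predicted CD. The architecture, clip size, and data are assumptions for illustration and are not taken from the paper.

```python
# Hedged sketch: minimal CNN regression from an image clip to a resist CD value.
import torch
import torch.nn as nn

class ResistCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 8 * 8, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):                 # x: (batch, 1, 32, 32) image clips
        return self.head(self.features(x))

model = ResistCNN()
clips = torch.randn(4, 1, 32, 32)          # placeholder aerial-image clips
print(model(clips).shape)                  # torch.Size([4, 1]): one predicted CD per clip
```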
Change Detection Analysis of Water Pollution in Coimbatore Region using Different Color Models
NASA Astrophysics Data System (ADS)
Jiji, G. Wiselin; Devi, R. Naveena
2017-12-01
The data acquired through remote sensing satellites furnish facts about land and water at varying resolutions and have been widely used for many change detection studies. Although many change detection methodologies and techniques already exist, new ones continue to emerge. Existing change detection techniques exploit images that are either in grayscale or in the RGB color model. In this paper, we introduce additional color models for performing change detection of water pollution. The polluted lakes are classified, post-classification change detection techniques are applied to the RGB images, and the results are analysed to determine whether changes exist. Furthermore, the classified RGB images, when converted to either of the two color models YCbCr and YIQ, are found to produce the same results as the RGB model images. Thus it can be concluded that other color models such as YCbCr and YIQ can be used as substitutes for the RGB color model for analysing change detection with regard to water pollution.
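For reference, the sketch below converts RGB values to the YCbCr and YIQ color models with the standard BT.601-style transform matrices, the kind of conversion compared in the study; the sample pixel is arbitrary.

```python
# Hedged sketch: RGB -> YCbCr and RGB -> YIQ conversions (BT.601-style matrices).
import numpy as np

def rgb_to_ycbcr(rgb):                     # rgb channels in [0, 255]
    m = np.array([[ 0.299,   0.587,   0.114 ],
                  [-0.1687, -0.3313,  0.5   ],
                  [ 0.5,    -0.4187, -0.0813]])
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 128.0                # offset the chroma channels
    return ycbcr

def rgb_to_yiq(rgb):
    m = np.array([[0.299,  0.587,  0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523,  0.312]])
    return rgb @ m.T

pixel = np.array([[120.0, 200.0, 80.0]])   # one classified-map pixel (arbitrary)
print(rgb_to_ycbcr(pixel))
print(rgb_to_yiq(pixel))
```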
A real time Pegasus propulsion system model for VSTOL piloted simulation evaluation
NASA Technical Reports Server (NTRS)
Mihaloew, J. R.; Roth, S. P.; Creekmore, R.
1981-01-01
A real time propulsion system modeling technique suitable for use in man-in-the-loop simulator studies was developed. This technique provides the system accuracy, stability, and transient response required for integrated aircraft and propulsion control system studies. A Pegasus-Harrier propulsion system was selected as a baseline for developing mathematical modeling and simulation techniques for VSTOL. Initially, static and dynamic propulsion system characteristics were modeled in detail to form a nonlinear aerothermodynamic digital computer simulation of a Pegasus engine. From this high fidelity simulation, a real time propulsion model was formulated by applying a piece-wise linear state variable methodology. A hydromechanical and water injection control system was also simulated. The real time dynamic model includes the detail and flexibility required for the evaluation of critical control parameters and propulsion component limits over a limited flight envelope. The model was programmed for interfacing with a Harrier aircraft simulation. Typical propulsion system simulation results are presented.
Navas, Juan Moreno; Telfer, Trevor C; Ross, Lindsay G
2011-08-01
Combining GIS with neuro-fuzzy modeling has the advantage that expert scientific knowledge in coastal aquaculture activities can be incorporated into a geospatial model to classify areas particularly vulnerable to pollutants. Data on the physical environment and its suitability for aquaculture in an Irish fjard, which is host to a number of different aquaculture activities, were derived from a three-dimensional hydrodynamic and GIS models. Subsequent incorporation into environmental vulnerability models, based on neuro-fuzzy techniques, highlighted localities particularly vulnerable to aquaculture development. The models produced an overall classification accuracy of 85.71%, with a Kappa coefficient of agreement of 81%, and were sensitive to different input parameters. A statistical comparison between vulnerability scores and nitrogen concentrations in sediment associated with salmon cages showed good correlation. Neuro-fuzzy techniques within GIS modeling classify vulnerability of coastal regions appropriately and have a role in policy decisions for aquaculture site selection. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Okawa, Tsutomu; Kaminishi, Tsukasa; Kojima, Yoshiyuki; Hirabayashi, Syuichi; Koizumi, Hisao
Business process modeling (BPM) is gaining attention as a means of analyzing and improving business processes. BPM analyses the current business process as an AS-IS model and solves problems to improve the current business; moreover, it aims to create a value-producing business process as a TO-BE model. However, research on techniques that seamlessly connect the business process improvements obtained by BPM to the implementation of the information system is rarely reported. If the business model obtained by BPM is converted into UML, and the implementation can be carried out with UML techniques, an improvement in the efficiency of information system implementation can be expected. In this paper, we describe a system development method that converts the process model obtained by BPM into UML; the method is evaluated by modeling a prototype of a parts procurement system. In the evaluation, a comparison is performed with the case where the system is implemented by the conventional UML technique without going via BPM.
One technique for refining the global Earth gravity models
NASA Astrophysics Data System (ADS)
Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.
2017-01-01
The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in continental regions are presented. The technique is based on high-resolution satellite data for the Earth's surface topography, which allows the fine structure of the Earth's gravitational field to be accounted for without additional gravimetry data. The experimental studies are conducted using the example of the new GGMplus global gravity model of the Earth, with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.
Verification of component mode techniques for flexible multibody systems
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1990-01-01
Investigations were conducted into the modeling aspects of flexible multibodies undergoing large angular displacements. Models were to be generated and analyzed through the application of computer simulation packages employing 'component mode synthesis' techniques. The Multibody Modeling, Verification and Control Laboratory (MMVC) plan was implemented, which includes running experimental tests on flexible multibody test articles. From these tests, data were to be collected for later correlation and verification of the theoretical results predicted by the modeling and simulation process.
Detection of Erroneous Payments Utilizing Supervised And Unsupervised Data Mining Techniques
2004-09-01
will look at which statistical analysis technique will work best in developing and enhancing existing erroneous payment models. Chapter I and II... payment models that are used for selection of records to be audited. The models are set up such that if two or more records have the same payment... Identification Number, Invoice Number and Delivery Order Number are not compared. The DM0102 Duplicate Payment Model will be analyzed in this thesis
SAINT: A combined simulation language for modeling man-machine systems
NASA Technical Reports Server (NTRS)
Seifert, D. J.
1979-01-01
SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for design and analysis of complex man-machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and applications of SAINT are discussed.
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1978-01-01
The development of system models that can provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer are described. Specific topics covered include: system models; performability evaluation; capability and functional dependence; computation of trajectory set probabilities; and hierarchical modeling of an air transport mission.
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Williams, Reuben A.; Smith, Laura J.; Salud, Maria Theresa P.
2004-01-01
This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique model checking that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
Zhao, B.; Wang, S. X.; Xing, J.; ...
2015-01-30
An innovative extended response surface modeling technique (ERSM v1.0) is developed to characterize the nonlinear response of fine particles (PM2.5) to large and simultaneous changes of multiple precursor emissions from multiple regions and sectors. The ERSM technique is developed based on the conventional response surface modeling (RSM) technique; it first quantifies the relationship between PM2.5 concentrations and the emissions of gaseous precursors from each single region using the conventional RSM technique, and then assesses the effects of inter-regional transport of PM2.5 and its gaseous precursors on PM2.5 concentrations in the target region. We apply this novel technique with a widely used regional chemical transport model (CTM) over the Yangtze River delta (YRD) region of China, and evaluate the response of PM2.5 and its inorganic components to the emissions of 36 pollutant–region–sector combinations. The predicted PM2.5 concentrations agree well with independent CTM simulations; the correlation coefficients are larger than 0.98 and 0.99, and the mean normalized errors (MNEs) are less than 1 and 2% for January and August, respectively. It is also demonstrated that the ERSM technique could reproduce fairly well the response of PM2.5 to continuous changes of precursor emission levels between zero and 150%. Employing this new technique, we identify the major sources contributing to PM2.5 and its inorganic components in the YRD region. The nonlinearity in the response of PM2.5 to emission changes is characterized and the underlying chemical processes are illustrated.
Lee, Jung Keun; Oh, Jong Jin; Lee, Sangchul; Lee, Seung Bae; Byun, Seok-Soo; Lee, Sang Eun; Jeong, Chang Wook
2016-04-01
We developed a sliding-loop technique that narrowed both sides of the parenchyma in a porcine model and compared it with the conventional sliding-clip technique. Three pigs (30-40 kg) were reused following another experiment conducted by the same researchers. Bilateral kidneys were harvested within 30 minutes after euthanasia. Two partial nephrectomies per kidney were performed on opposite surfaces. All kidney defects were of the same size (diameter of 2.5-3 cm with a depth of 1.0-1.5 cm). The sliding-clip technique and sliding-loop technique were performed separately. In the sliding-loop technique, we created a 1-cm loop at the end of a Vicryl and placed a tetrafluoroethylene polymer pledget in front of the knots passing through the needle. The needle then crossed the loop after passing through the renal parenchyma. A Weck clip was placed and slid on one side to tighten the suture. Tightening was controlled with an equivalent force using a digital push-pull gauge. Three stitches were placed at each renorrhaphy site. The distance between repaired renal surfaces was measured at 5 different points (3 suture sites and 2 middle sites between sutures). The results of the 2 techniques were compared by using the independent t test. The mean distance between renal surfaces was significantly narrower in the sliding-loop technique than in the conventional technique (1.80 ± 1.08 mm vs 5.28 ± 2.46 mm, P < .001). In the porcine model, the sliding-loop technique more effectively closed the partial nephrectomy defects compared with the conventional sliding-clip technique. © The Author(s) 2015.
Degeling, Koen; Schivo, Stefano; Mehra, Niven; Koffijberg, Hendrik; Langerak, Rom; de Bono, Johann S; IJzerman, Maarten J
2017-12-01
With the advent of personalized medicine, the field of health economic modeling is being challenged and the use of patient-level dynamic modeling techniques might be required. To illustrate the usability of two such techniques, timed automata (TA) and discrete event simulation (DES), for modeling personalized treatment decisions. An early health technology assessment on the use of circulating tumor cells, compared with prostate-specific antigen and bone scintigraphy, to inform treatment decisions in metastatic castration-resistant prostate cancer was performed. Both modeling techniques were assessed quantitatively, in terms of intermediate outcomes (e.g., overtreatment) and health economic outcomes (e.g., net monetary benefit). Qualitatively, among others, model structure, agent interactions, data management (i.e., importing and exporting data), and model transparency were assessed. Both models yielded realistic and similar intermediate and health economic outcomes. Overtreatment was reduced by 6.99 and 7.02 weeks by applying circulating tumor cell as a response marker at a net monetary benefit of -€1033 and -€1104 for the TA model and the DES model, respectively. Software-specific differences were observed regarding data management features and the support for statistical distributions, which were considered better for the DES software. Regarding method-specific differences, interactions were modeled more straightforward using TA, benefiting from its compositional model structure. Both techniques prove suitable for modeling personalized treatment decisions, although DES would be preferred given the current software-specific limitations of TA. When these limitations are resolved, TA would be an interesting modeling alternative if interactions are key or its compositional structure is useful to manage multi-agent complex problems. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
An Evaluation of Understandability of Patient Journey Models in Mental Health.
Percival, Jennifer; McGregor, Carolyn
2016-07-28
There is a significant trend toward implementing health information technology to reduce administrative costs and improve patient care. Unfortunately, little awareness exists of the challenges of integrating information systems with existing clinical practice. The systematic integration of clinical processes with information system and health information technology can benefit the patients, staff, and the delivery of care. This paper presents a comparison of the degree of understandability of patient journey models. In particular, the authors demonstrate the value of a relatively new patient journey modeling technique called the Patient Journey Modeling Architecture (PaJMa) when compared with traditional manufacturing based process modeling tools. The paper also presents results from a small pilot case study that compared the usability of 5 modeling approaches in a mental health care environment. Five business process modeling techniques were used to represent a selected patient journey. A mix of both qualitative and quantitative methods was used to evaluate these models. Techniques included a focus group and survey to measure usability of the various models. The preliminary evaluation of the usability of the 5 modeling techniques has shown increased staff understanding of the representation of their processes and activities when presented with the models. Improved individual role identification throughout the models was also observed. The extended version of the PaJMa methodology provided the most clarity of information flows for clinicians. The extended version of PaJMa provided a significant improvement in the ease of interpretation for clinicians and increased the engagement with the modeling process. The use of color and its effectiveness in distinguishing the representation of roles was a key feature of the framework not present in other modeling approaches. Future research should focus on extending the pilot case study to a more diversified group of clinicians and health care support workers.
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2018-04-01
In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting the equations into terms that can be solved analytically, with a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing it in this way, the operator splitting, which makes the structural analysis of the model more complex, could be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even if it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
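The Nash cascade that replaces the unit hydrographs is a chain of identical linear reservoirs, and because the chain is lower triangular an implicit (backward Euler) step can be solved reservoir by reservoir in closed form. The sketch below illustrates that idea only; the reservoir count, time constant, and single-pulse routing are illustrative assumptions, not the GR4J state-space model.

import numpy as np

def nash_cascade_implicit(inflow, n_res=3, k=2.0, dt=1.0):
    """Route an inflow series through a Nash cascade of n_res linear
    reservoirs (outflow = S/k) using an implicit (backward Euler) step.

    The cascade is lower triangular, so each backward-Euler step can be
    solved reservoir by reservoir in closed form. A sketch of the idea,
    not the GR4J state-space formulation itself.
    """
    S = np.zeros(n_res)                # reservoir storages (state vector)
    outflow = np.zeros_like(inflow)
    for t, q_in in enumerate(inflow):
        q = q_in                       # inflow to the first reservoir
        for i in range(n_res):
            # S_new = S_old + dt*(q - S_new/k)  =>  closed-form solve
            S[i] = (S[i] + dt * q) / (1.0 + dt / k)
            q = S[i] / k               # outflow feeds the next reservoir
        outflow[t] = q
    return outflow

# Example: route a single rainfall pulse; the cascade spreads and delays it,
# playing the role of the unit hydrograph it replaces.
pulse = np.zeros(30)
pulse[2] = 10.0
print(np.round(nash_cascade_implicit(pulse), 3))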
Kumar, M Praveen; Patil, Suneel G; Dheeraj, Bhandari; Reddy, Keshav; Goel, Dinker; Krishna, Gopi
2015-06-01
The difficulty in obtaining an acceptable impression increases exponentially as the number of abutments increases. Accuracy of the impression material and the use of a suitable impression technique are of utmost importance in the fabrication of a fixed partial denture. This study compared the accuracy of the matrix impression system with the conventional putty reline and multiple mix techniques for individual dies by comparing the inter-abutment distance in the casts obtained from the impressions. Three groups of 10 impressions each were made of a master die using the three impression techniques (matrix impression system, putty reline technique, and multiple mix technique). Typodont teeth were embedded in a maxillary Frasaco model base. The left first premolar was removed to create a three-unit fixed partial denture situation and the left canine and second premolar were prepared conservatively, and hatch marks were made on the abutment teeth. The final casts obtained from the impressions were examined under a profile projector and the inter-abutment distance was calculated for all the casts and compared. The results from this study showed that in the mesiodistal dimensions the percentage deviation from the master model in Group I was 0.1 and 0.2, in Group II was 0.9 and 0.3, and in Group III was 1.6 and 1.5, respectively. In the labio-palatal dimensions the percentage deviation from the master model in Group I was 0.01 and 0.4, Group II was 1.9 and 1.3, and Group III was 2.2 and 2.0, respectively. In the cervico-incisal dimensions the percentage deviation from the master model in Group I was 1.1 and 0.2, Group II was 3.9 and 1.7, and Group III was 1.9 and 3.0, respectively. In the inter-abutment dimension of dies, the percentage deviation from the master model in Group I was 0.1, Group II was 0.6, and Group III was 1.0. The matrix impression system showed more accurate reproduction of individual dies than the putty reline technique and the multiple mix technique in all three directions, as well as in the inter-abutment distance.
Experimental Validation Techniques for the Heleeos Off-Axis Laser Propagation Model
2010-03-01
Experimental Validation Techniques for the HELEEOS Off-Axis Laser Propagation Model. Thesis by John Haiducek, 1st Lt, USAF (AFIT/GAP/ENP/10-M07), March 2010. Approved for public release; distribution unlimited. Abstract: The High Energy Laser End-to-End
2011-02-01
... seakeeping was the transient wave technique, developed analytically by Davis and Zarnick (1964). At the David Taylor Model Basin, Davis and Zarnick, and ... Gersten and Johnson (1969) applied the transient wave technique to regular wave model experiments for heave and pitch, at zero forward speed. These ... tests demonstrated a potential reduction by an order of magnitude of the total necessary testing time. The transient wave technique was also applied to
Kahramangil, Bora; Mohsin, Khuzema; Alzahrani, Hassan; Bu Ali, Daniah; Tausif, Syed; Kang, Sang-Wook; Kandil, Emad
2017-01-01
Background: Numerous new approaches have been described over the years to improve the cosmetic outcomes of thyroid surgery. The transoral approach is a new technique that aims to achieve superior cosmetic outcomes by concealing the incision in the oral cavity. Methods: Transoral thyroidectomy through the vestibular approach was performed in two institutions on cadaveric models. The procedure was performed endoscopically in one institution, while the robotic technique was utilized at the other. Results: Transoral thyroidectomy was successfully performed at both institutions with robotic and endoscopic techniques. All vital structures were identified and preserved. Conclusions: Transoral thyroidectomy has been performed in animal and cadaveric models, as well as in some clinical studies. Our initial experience indicates the feasibility of this approach. More clinical studies are required to elucidate its full utility. PMID:29302476
A numerical projection technique for large-scale eigenvalue problems
NASA Astrophysics Data System (ADS)
Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang
2011-10-01
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique, used in strongly correlated quantum many-body systems, where first an effective approximate model of smaller complexity is constructed by projecting out high energy degrees of freedom and in turn solving the resulting model by some standard eigenvalue solver. Here we introduce a generalization of this idea, where both steps are performed numerically and which in contrast to the standard projection technique converges in principle to the exact eigenvalues. This approach is not just applicable to eigenvalue problems encountered in many-body systems but also in other areas of research that result in large-scale eigenvalue problems for matrices which have, roughly speaking, mostly a pronounced dominant diagonal part. We will present detailed studies of the approach guided by two many-body models.
Improvements in approaches to forecasting and evaluation techniques
NASA Astrophysics Data System (ADS)
Weatherhead, Elizabeth
2014-05-01
The US is embarking on an experiment to make significant and sustained improvements in weather forecasting. The effort stems from a series of community conversations that recognized the rapid advancements in observations, modeling and computing techniques in the academic, governmental and private sectors. The new directions and initial efforts will be summarized, including information on possibilities for international collaboration. Most new projects are scheduled to start in the last half of 2014. Several advancements include ensemble forecasting with global models, and new sharing of computing resources. Newly developed techniques for evaluating weather forecast models will be presented in detail. The approaches use statistical techniques that incorporate pair-wise comparisons of forecasts with observations and account for daily auto-correlation to assess appropriate uncertainty in forecast changes. Some of the new projects allow for international collaboration, particularly on the research components of the projects.
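A minimal sketch of the kind of pair-wise evaluation described above: daily errors of two forecast systems are differenced, and the uncertainty of the mean difference is widened using an AR(1) effective sample size to account for day-to-day autocorrelation. The synthetic data and the specific AR(1) adjustment are assumptions for illustration, not the authors' exact statistics.

import numpy as np

def paired_forecast_comparison(err_a, err_b):
    """Pair-wise comparison of two forecast systems' daily errors, with the
    uncertainty of the mean difference inflated for lag-1 autocorrelation.
    A sketch of the general idea described in the abstract.
    """
    d = np.asarray(err_a) - np.asarray(err_b)      # paired daily differences
    n = d.size
    d_mean = d.mean()
    d0 = d - d_mean
    r1 = np.sum(d0[:-1] * d0[1:]) / np.sum(d0 * d0)   # lag-1 autocorrelation
    n_eff = n * (1.0 - r1) / (1.0 + r1)               # AR(1) effective sample size
    se = d.std(ddof=1) / np.sqrt(max(n_eff, 2.0))
    return d_mean, se, r1

rng = np.random.default_rng(1)
base = np.abs(rng.normal(2.0, 0.5, 365))              # hypothetical daily |errors|
improved = base - 0.1 + rng.normal(0, 0.2, 365)       # slightly better system
print(paired_forecast_comparison(base, improved))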
A model-based 3D template matching technique for pose acquisition of an uncooperative space object.
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele
2015-03-16
This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Distinctive features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant aimed at further accelerating the pose acquisition time and reducing the computational cost is also introduced. Technique performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios, relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable to initialize the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the proposed techniques is described.
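To make the template-matching step concrete, the sketch below scores candidate attitudes of a hypothetical target model against a simulated LIDAR point cloud using mean nearest-neighbour distance, and keeps the best-scoring candidate as the acquisition seed for tracking. The box-shaped model, the yaw-only sweep, and the scoring metric are simplifying assumptions, not the algorithm published in the paper.

import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def score_template(scan, template_points, rotation, translation):
    """Mean nearest-neighbour distance between the acquired point cloud and a
    template of the target model placed at a candidate pose. Lower is a better
    match. A simplified stand-in for the paper's 3D template matching."""
    posed = rotation.apply(template_points) + translation
    tree = cKDTree(posed)
    d, _ = tree.query(scan)
    return d.mean()

# Hypothetical target model: points on a box-like satellite body.
rng = np.random.default_rng(0)
model = rng.uniform(-1, 1, size=(500, 3)) * np.array([1.0, 0.5, 0.3])

# Simulated LIDAR scan: the model at an unknown attitude, with noise.
true_rot = Rotation.from_euler("z", 40, degrees=True)
scan = true_rot.apply(model) + rng.normal(0, 0.01, model.shape)

# Coarse pose acquisition: sweep candidate attitudes (the "templates"),
# keep the one with the lowest score; this seed would then start tracking.
candidates = [Rotation.from_euler("z", a, degrees=True) for a in range(0, 360, 10)]
scores = [score_template(scan, model, r, np.zeros(3)) for r in candidates]
print("best yaw candidate [deg]:", 10 * int(np.argmin(scores)))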
ERIC Educational Resources Information Center
Wholeben, Brent Edward
This report describing the use of operations research techniques to determine which courseware packages or what microcomputer systems best address varied instructional objectives focuses on the MICROPIK model, a highly structured evaluation technique for making such complex instructional decisions. MICROPIK is a multiple alternatives model (MAA)…
Sridhar, Upasana Manimegalai; Govindarajan, Anand; Rhinehart, R Russell
2016-01-01
This work reveals the applicability of a relatively new optimization technique, Leapfrogging, for both nonlinear regression modeling and a methodology for nonlinear model-predictive control. Both are relatively simple, yet effective. The application on a nonlinear, pilot-scale, shell-and-tube heat exchanger reveals practicability of the techniques. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
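Leapfrogging is a population search in which the worst "player" repeatedly leaps over the best into the reflected window between them. The sketch below shows one common formulation applied to a small nonlinear regression fitted by sum of squared errors; the leap rule, bounds, and example model are assumptions for illustration rather than the authors' implementation.

import numpy as np

def leapfrog_minimize(loss, bounds, n_players=20, iters=2000, seed=0):
    """Minimal Leapfrogging-style optimizer: the worst player repeatedly
    leaps over the best into the reflected window between them."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    players = rng.uniform(lo, hi, size=(n_players, lo.size))
    scores = np.array([loss(p) for p in players])
    for _ in range(iters):
        b, w = np.argmin(scores), np.argmax(scores)
        # leap the worst player over the best, component-wise
        new = players[b] + rng.uniform(0, 1, lo.size) * (players[b] - players[w])
        new = np.clip(new, lo, hi)
        players[w], scores[w] = new, loss(new)
    b = np.argmin(scores)
    return players[b], scores[b]

# Example: nonlinear regression y = a*(1 - exp(-b*t)) fitted by SSE.
t = np.linspace(0, 10, 50)
y = 3.0 * (1 - np.exp(-0.7 * t)) + np.random.default_rng(1).normal(0, 0.05, 50)
sse = lambda p: np.sum((y - p[0] * (1 - np.exp(-p[1] * t))) ** 2)
print(leapfrog_minimize(sse, bounds=[(0, 10), (0, 5)]))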
Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games
ERIC Educational Resources Information Center
Christopher, Timothy Van
2011-01-01
This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…
Control system design for flexible structures using data models
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis; Frazier, W. Garth; Mitchell, Jerrel R.; Medina, Enrique A.; Bukley, Angelia P.
1993-01-01
The dynamics and control of flexible aerospace structures exercise many of the engineering disciplines. In recent years there has been considerable research in developing and tailoring control system design techniques for these structures. This problem involves designing a control system for a multi-input, multi-output (MIMO) system that satisfies various performance criteria, such as vibration suppression, disturbance and noise rejection, attitude control and slewing control. Considerable progress has been made and demonstrated in control system design techniques for these structures. The key to designing control systems for these structures that meet stringent performance requirements is an accurate model. It has become apparent that theoretical and finite-element-generated models do not provide the needed accuracy; almost all successful demonstrations of control system design techniques have involved using test results for fine-tuning a model or for extracting a model using system ID techniques. This paper describes past and ongoing efforts at Ohio University and NASA MSFC to design controllers using 'data models.' The basic philosophy of this approach is to start with a stabilizing controller and frequency response data that describes the plant; then, iteratively vary the free parameters of the controller so that performance measures become closer to satisfying design specifications. The frequency response data can be either experimentally derived or analytically derived. One 'design-with-data' algorithm presented in this paper is called the Compensator Improvement Program (CIP). The current CIP designs controllers for MIMO systems so that classical gain, phase, and attenuation margins are achieved. The centerpiece of the CIP algorithm is the constraint improvement technique, which is used to calculate a parameter change vector that guarantees an improvement in all unsatisfied, feasible performance metrics from iteration to iteration. The paper also presents a recently demonstrated CIP-type algorithm, called the Model and Data Oriented Computer-Aided Design System (MADCADS), developed for achieving H(sub infinity)-type design specifications using data models. Control system designs for the NASA/MSFC Single Structure Control Facility are demonstrated for both CIP and MADCADS. Advantages of design-with-data algorithms over techniques that require analytical plant models are also presented.
Nonlinear dynamic macromodeling techniques for audio systems
NASA Astrophysics Data System (ADS)
Ogrodzki, Jan; Bieńkowski, Piotr
2015-09-01
This paper develops a modelling method and a model identification technique for nonlinear dynamic audio systems. Identification is performed by means of a behavioral approach based on a polynomial approximation. This approach makes use of the Discrete Fourier Transform and the Harmonic Balance Method. A model of an audio system is first created and identified and then it is simulated in real time using an algorithm of low computational complexity. The algorithm consists of real-time emulation of the system response rather than simulation of the system itself. The proposed software is written in the Python language using object-oriented programming techniques. The code is optimized for a multithreaded environment.
Dierker, Lisa; Rose, Jennifer; Tan, Xianming; Li, Runze
2010-12-01
This paper describes and compares a selection of available modeling techniques for identifying homogeneous population subgroups in the interest of informing targeted substance use intervention. We present a nontechnical review of the common and unique features of three methods: (a) trajectory analysis, (b) functional hierarchical linear modeling (FHLM), and (c) decision tree methods. Differences among the techniques are described, including required data features, strengths and limitations in terms of the flexibility with which outcomes and predictors can be modeled, and the potential of each technique for helping to inform the selection of targets and timing of substance intervention programs.
Use of the Box and Jenkins time series technique in traffic forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nihan, N.L.; Holmesland, K.O.
The use of recently developed time series techniques for short-term traffic volume forecasting is examined. A data set containing monthly volumes on a freeway segment for 1968-76 is used to fit a time series model. The resultant model is used to forecast volumes for 1977. The forecast volumes are then compared with actual volumes in 1977. Time series techniques can be used to develop highly accurate and inexpensive short-term forecasts. The feasibility of using these models to evaluate the effects of policy changes or other outside impacts is considered. (1 diagram, 1 map, 14 references, 2 tables)
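For readers unfamiliar with the Box-Jenkins workflow, the sketch below fits a seasonal ARIMA model to a synthetic stand-in for the 1968-1976 monthly volumes and forecasts the twelve months of 1977 for comparison with observations. The synthetic series and the (1,1,1)x(0,1,1)12 order are illustrative assumptions, not the model identified in the study.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for monthly freeway volumes, 1968-1976 (trend + seasonality).
idx = pd.date_range("1968-01", periods=108, freq="MS")
rng = np.random.default_rng(0)
volumes = pd.Series(
    50_000 + 150*np.arange(108) + 4_000*np.sin(2*np.pi*np.arange(108)/12)
    + rng.normal(0, 800, 108),
    index=idx,
)

# Fit a seasonal ARIMA (Box-Jenkins) model on 1968-1976 ...
model = ARIMA(volumes, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit()

# ... and forecast the 12 months of 1977 for comparison with observed volumes.
forecast_1977 = model.forecast(steps=12)
print(forecast_1977.round(0))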
A comparison of force sensing techniques for planetary manipulation
NASA Technical Reports Server (NTRS)
Helmick, Daniel; Okon, Avi; DiCicco, Matt
2006-01-01
Five techniques for sensing forces with a manipulator are compared analytically and experimentally. The techniques compared are: a six-axis wrist force/torque sensor, joint torque sensors, link strain gauges, motor current sensors, and flexibility modeling. The accuracy and repeatability of each technique are quantified and compared.
Artificial Intelligence Techniques for Predicting and Mapping Daily Pan Evaporation
NASA Astrophysics Data System (ADS)
Arunkumar, R.; Jothiprakash, V.; Sharma, Kirty
2017-09-01
In this study, Artificial Intelligence techniques such as Artificial Neural Network (ANN), Model Tree (MT) and Genetic Programming (GP) are used to develop daily pan evaporation time-series (TS) prediction and cause-effect (CE) mapping models. Ten years of observed daily meteorological data such as maximum temperature, minimum temperature, relative humidity, sunshine hours, dew point temperature and pan evaporation are used for developing the models. For each technique, several models are developed by changing the number of inputs and other model parameters. The performance of each model is evaluated using standard statistical measures such as Mean Square Error, Mean Absolute Error, Normalized Mean Square Error and correlation coefficient (R). The results showed that the daily TS-GP(4) model performed better than the other TS models, with a correlation coefficient of 0.959. Among the various CE models, CE-ANN (6-10-1) performed better than the MT and GP models, with a correlation coefficient of 0.881. Because of the complex non-linear inter-relationships among the various meteorological variables, the CE mapping models could not achieve the performance of the TS models. From this study, it was found that GP performs better for recognizing a single pattern (time series modelling), whereas ANN is better for modelling multiple patterns (cause-effect modelling) in the data.
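As an illustration of the cause-effect mapping approach, the sketch below trains a small feed-forward ANN (scikit-learn's MLPRegressor) on synthetic daily meteorological inputs to predict pan evaporation and reports RMSE and the correlation coefficient. The synthetic data, the five inputs shown, and the single 10-neuron hidden layer are assumptions echoing, not reproducing, the study's CE-ANN models.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical daily records: meteorological inputs and pan evaporation target.
rng = np.random.default_rng(0)
n = 3650
tmax = rng.uniform(25, 40, n)
tmin = tmax - rng.uniform(5, 12, n)
rh = rng.uniform(40, 95, n)
sun = rng.uniform(2, 11, n)
dew = tmin - rng.uniform(0, 4, n)
X = np.column_stack([tmax, tmin, rh, sun, dew])
evap = 0.2*tmax - 0.03*rh + 0.3*sun + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, evap, test_size=0.3, random_state=0)

# A small feed-forward ANN with one 10-neuron hidden layer.
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                 random_state=0))
ann.fit(X_tr, y_tr)
pred = ann.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5,
      "R:", np.corrcoef(y_te, pred)[0, 1])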
DOT National Transportation Integrated Search
2011-01-01
Travel demand modeling plays a key role in the transportation system planning and evaluation process. The four-step sequential travel demand model is the most widely used technique in practice. Traffic assignment is the key step in the conventional f...
Active control of large space structures: An introduction and overview
NASA Technical Reports Server (NTRS)
Doane, G. B., III; Tollison, D. K.; Waites, H. B.
1985-01-01
An overview of the large space structure (LSS) control system design problem is presented. The LSS is defined as a class of system, and LSS modeling techniques are discussed. Model truncation, control system objectives, current control law design techniques, and particular problem areas are discussed.
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models
2008-08-01
... reduction, are called variance reduction techniques (VRT) [Law, 2006]. The implementation of some type of VRT can prove to be a very valuable tool
NASA Technical Reports Server (NTRS)
Lee, J.
1994-01-01
A generalized flow solver using an implicit lower-upper (LU) diagonal decomposition-based numerical technique has been coupled with three low-Reynolds number kappa-epsilon models for analysis of problems with engineering applications. The feasibility of using the LU technique to obtain efficient solutions to supersonic problems using the kappa-epsilon model has been demonstrated. The flow solver is then used to explore limitations and convergence characteristics of several popular two-equation turbulence models. Several changes to the LU solver have been made to improve the efficiency of turbulent flow predictions. In general, the low-Reynolds number kappa-epsilon models are easier to implement than the models with wall-functions, but require a much finer near-wall grid to accurately resolve the physics. The three kappa-epsilon models use different approaches to characterize the near-wall regions of the flow. Therefore, the limitations imposed by the near-wall characteristics have been carefully resolved. The convergence characteristics of a particular model using a given numerical technique are also an important, but most often overlooked, aspect of turbulence model predictions. It is found that some convergence characteristics could be sacrificed for more accurate near-wall prediction. However, even this gain in accuracy is not sufficient to model the effects of an external pressure gradient imposed by a shock-wave/boundary-layer interaction. Additional work on turbulence models, especially for compressibility, is required since the solutions obtained with the baseline turbulence models are in only reasonable agreement with the experimental data for the viscous interaction problems.
Maloney, Kelly O.; Schmid, Matthias; Weller, Donald E.
2012-01-01
Issues with ecological data (e.g. non-normality of errors, nonlinear relationships and autocorrelation of variables) and modelling (e.g. overfitting, variable selection and prediction) complicate regression analyses in ecology. Flexible models, such as generalized additive models (GAMs), can address data issues, and machine learning techniques (e.g. gradient boosting) can help resolve modelling issues. Gradient boosted GAMs do both. Here, we illustrate the advantages of this technique using data on benthic macroinvertebrates and fish from 1573 small streams in Maryland, USA.
NASA Astrophysics Data System (ADS)
Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick
2007-04-01
High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing where culture and vegetation features have been extracted. The Harris LiteSite™ Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equations (PDEs) and exemplar-based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void filled does not have disproportionately high frequency data in the neighborhood of the boundary of the void. Conversely, the exemplar-based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.
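A minimal sketch of the PDE-style branch: void cells are relaxed toward the discrete Laplace equation so elevations diffuse smoothly inward from the void boundary, which recovers a planar surface exactly. The synthetic surface, the square void, and the fixed iteration count are assumptions for illustration; the exemplar-based branch and the toolkit's selection logic are not shown.

import numpy as np

def fill_voids_laplace(dsm, void_mask, iterations=2000):
    """Fill DSM voids by iteratively relaxing toward the discrete Laplace
    equation (each void cell becomes the mean of its 4 neighbours), so the
    fill diffuses smoothly inward from the void boundary."""
    z = dsm.copy()
    z[void_mask] = np.nanmean(dsm[~void_mask])      # neutral initial guess
    for _ in range(iterations):
        # 4-neighbour mean; np.roll wraps at the array border, which is
        # acceptable here because the void is interior to the grid
        avg = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                      np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z[void_mask] = avg[void_mask]                # update only the void cells
    return z

# Small synthetic surface with a square void punched into it.
y, x = np.mgrid[0:64, 0:64]
surface = 100 + 0.2*x + 0.1*y
mask = np.zeros_like(surface, bool)
mask[20:30, 25:40] = True
dsm = surface.copy()
dsm[mask] = np.nan

filled = fill_voids_laplace(dsm, mask)
print("max abs error in void:", np.abs(filled[mask] - surface[mask]).max())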
A progress report on seismic model studies
Healy, J.H.; Mangan, G.B.
1963-01-01
The value of seismic-model studies as an aid to understanding wave propagation in the Earth's crust was recognized by early investigators (Tatel and Tuve, 1955). Preliminary model results were very promising, but progress in model seismology has been restricted by two problems: (1) difficulties in the development of models with continuously variable velocity-depth functions, and (2) difficulties in the construction of models of adequate size to provide a meaningful wave-length to layer-thickness ratio. The problem of a continuously variable velocity-depth function has been partly solved by a technique using two-dimensional plate models constructed by laminating plastic to aluminum, so that the ratio of plastic to aluminum controls the velocity-depth function (Healy and Press, 1960). These techniques provide a continuously variable velocity-depth function, but it is not possible to construct such models large enough to study short-period wave propagation in the crust. This report describes improvements in our ability to machine large models. Two types of models are being used: one is a cylindrical aluminum tube machined on a lathe, and the other is a large plate machined on a precision planer. Both of these modeling techniques give promising results and are a significant improvement over earlier efforts.
Accuracy of impression scanning compared with stone casts of implant impressions.
Matta, Ragai Edward; Adler, Werner; Wichmann, Manfred; Heckmann, Siegfried Martin
2017-04-01
Accurate virtual implant models are a necessity for the fabrication of precisely fitting superstructures. The purpose of this in vitro study was to evaluate different methods with which to build an accurate virtual model of a 3-dimensional implant in the oral cavity; this model would then be used for iterative computer-aided design and computer-aided manufacturing (CAD-CAM) procedures. A titanium master model with 3 rigidly connected implants was manufactured and digitized with a noncontact industrial scanner to obtain a virtual master model. Impressions of the master model with the implant position locators (IPL) were made using vinyl siloxanether material. The impressions were scanned (Impression scanning technique group). For the transfer technique and pick-up technique groups (each group n=20), implant analogs were inserted into the impression copings, impressions were made using polyether, and casts were poured in Type 4 gypsum. The IPLs were screwed into the analogs and scanned. To compare the virtual master model with each virtual test model, a CAD interactive software, ATOS professional, was applied. The Kruskal-Wallis test was subsequently used to determine the overall difference between groups, with the Mann-Whitney U test used for pairwise comparisons. Through Bonferroni correction, the α-level was set to .017. The outcome revealed a significant difference among the 3 groups (P<.01) in terms of accuracy. With regard to total deviation, for all axes, the transfer technique generated the greatest divergence, 0.078 mm (±0.022), compared with the master model. Deviation with the pick-up technique was 0.041 mm (±0.009), with impression scanning generating the most accurate models with a deviation of 0.022 mm (±0.007). The impression scanning method improved the precision of CAD-CAM-fabricated superstructures. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Elik, Aysel; Yanık, Derya Koçak; Maskan, Medeni; Göğüş, Fahrettin
2016-05-01
The present study was undertaken to assess the effects of three different concentration processes (open-pan, rotary vacuum evaporator, and microwave heating) on the evaporation rate, color, and phenolics content of blueberry juice. A kinetic model study of changes in soluble solids content (°Brix), color parameters, and phenolics content during evaporation was also performed. The final juice concentration of 65 °Brix was achieved in 12, 15, 45 and 77 min for the microwave at 250 and 200 W, rotary vacuum, and open-pan evaporation processes, respectively. Color changes associated with heat treatment were monitored using a Hunter colorimeter (L*, a* and b*). All Hunter color parameters decreased with time, and all of the studied concentration techniques caused color degradation. It was observed that the severity of color loss was higher with the open-pan technique than with the others. Evaporation also affected the total phenolics content of the blueberry juice. Total phenolics loss during concentration was highest with the open-pan technique (36.54%) and lowest with microwave heating at 200 W (34.20%). Thus, the use of the microwave technique could be advantageous in the food industry because it produces blueberry juice concentrate of better quality in a shorter operation time. A first-order kinetics model was applied to model changes in soluble solids content. A zero-order kinetics model was used to model changes in color parameters and phenolics content.
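The kinetics models mentioned above reduce to two simple fits: an exponential (first-order) law for soluble solids and a linear (zero-order) law for a colour parameter. The sketch below fits both with scipy's curve_fit on hypothetical open-pan data; the numbers are invented for illustration and are not the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

# Illustrative (hypothetical) open-pan data: time in minutes, measured
# soluble solids (°Brix), and Hunter L* during concentration.
t = np.array([0, 10, 20, 30, 45, 60, 77], float)
brix = np.array([12, 16, 21, 27, 37, 50, 65], float)
hunter_L = np.array([30.0, 28.6, 27.3, 26.1, 24.2, 22.5, 20.3])

# First-order model for soluble solids:  B(t) = B0 * exp(k*t)
first_order = lambda t, B0, k: B0 * np.exp(k * t)
(B0, k1), _ = curve_fit(first_order, t, brix, p0=(12, 0.02))

# Zero-order model for a colour parameter:  L(t) = L0 - k*t
zero_order = lambda t, L0, k: L0 - k * t
(L0, k0), _ = curve_fit(zero_order, t, hunter_L, p0=(30, 0.1))

print(f"soluble solids: B0={B0:.1f} °Brix, k={k1:.4f} 1/min")
print(f"Hunter L*:      L0={L0:.1f}, k={k0:.3f} units/min")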
Yu, Quan; Gong, Xin; Wang, Guo-Min; Yu, Zhe-Yuan; Qian, Yu-Fen; Shen, Gang
2011-01-01
To establish a new method of presurgical nasoalveolar molding (NAM) using computer-aided reverse engineering and rapid prototyping technique in infants with unilateral cleft lip and palate (UCLP). Five infants (2 males and 3 females with mean age of 1.2 w) with complete UCLP were recruited. All patients were subjected to NAM before the cleft lip repair. The upper denture casts were recorded using a three-dimensional laser scanner within 2 weeks after birth in UCLP infants. A digital model was constructed and analyzed to simulate the NAM procedure with reverse engineering software. The digital geometrical data were exported to print the solid model with rapid prototyping system. The whole set of appliances was fabricated based on these solid models. Laser scanning and digital model construction simplified the NAM procedure and estimated the treatment objective. The appliances were fabricated based on the rapid prototyping technique, and for each patient, the complete set of appliances could be obtained at one time. By the end of presurgical NAM treatment, the cleft was narrowed, and the malformation of nasoalveolar segments was aligned normally. We have developed a novel technique of presurgical NAM based on a computer-aided design. The accurate digital denture model of UCLP infants could be obtained with laser scanning. The treatment design and appliance fabrication could be simplified with a computer-aided reverse engineering and rapid prototyping technique.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations with the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
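A minimal sketch of the solver family described above: conjugate gradient with a Jacobi (diagonal) preconditioner applied to a small symmetric positive-definite system standing in for the mixed model equations. In a real evaluation the coefficient matrix would typically not be formed explicitly; the matrix-vector products would instead be accumulated by iterating on the data. The dense test matrix here is purely illustrative.

import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient with a Jacobi (diagonal)
    preconditioner for a symmetric positive-definite system A x = b."""
    M_inv = 1.0 / np.diag(A)                 # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small symmetric positive-definite test system standing in for the MME.
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 50))
A = G @ G.T + 50 * np.eye(50)
b = rng.normal(size=50)
x = pcg_jacobi(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))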
A demonstrative model of a lunar base simulation on a personal computer
NASA Technical Reports Server (NTRS)
1985-01-01
The initial demonstration model of a lunar base simulation is described. This initial model was developed at the personal computer level to demonstrate feasibility and technique before proceeding to a larger computer-based model. Lotus Symphony Version 1.1 software was used to base the demonstration model on a personal computer with an MS-DOS operating system. The personal computer-based model determined the applicability of lunar base modeling techniques developed at an LSPI/NASA workshop. In addition, the personal computer-based demonstration model defined a modeling structure that could be employed on a larger, more comprehensive VAX-based lunar base simulation. Refinement of this personal computer model and the development of a VAX-based model is planned in the near future.
Active cleaning technique device
NASA Technical Reports Server (NTRS)
Shannon, R. L.; Gillette, R. B.
1973-01-01
The objective of this program was to develop a laboratory demonstration model of an active cleaning technique (ACT) device. The principle of this device is based primarily on the technique for removing contaminants from optical surfaces. This active cleaning technique involves exposing contaminated surfaces to a plasma containing atomic oxygen or combinations of other reactive gases. The ACT device laboratory demonstration model incorporates, in addition to plasma cleaning, the means to operate the device as an ion source for sputtering experiments. The overall ACT device includes a plasma generation tube, an ion accelerator, a gas supply system, an RF power supply, and a high-voltage dc power supply.