An a priori study of different tabulation methods for turbulent pulverised coal combustion
NASA Astrophysics Data System (ADS)
Luo, Yujuan; Wen, Xu; Wang, Haiou; Luo, Kun; Jin, Hanhui; Fan, Jianren
2018-05-01
In many practical pulverised coal combustion systems, different oxidiser streams exist, e.g. the primary- and secondary-air streams in power plant boilers, which makes the modelling of these systems challenging. In this work, three tabulation methods for modelling pulverised coal combustion are evaluated through an a priori study. Pulverised coal flames stabilised in a three-dimensional turbulent counterflow, consisting of different oxidiser streams, are first simulated with detailed chemistry. Then, the thermo-chemical quantities calculated with the different tabulation methods are compared to those from the detailed chemistry solutions. The comparison shows that the conventional two-stream flamelet model with a fixed oxidiser temperature cannot predict the flame temperature correctly. The conventional two-stream flamelet model is then modified so that the oxidiser temperature is set equal to the fuel temperature, both of which are varied in the flamelets. By this means, variations of the oxidiser temperature can be considered. This modified tabulation method is found to perform very well in predicting the flame temperature. The third tabulation method is an extended three-stream flamelet model that was initially proposed for gaseous combustion. The results show that the reference gas-phase temperature profile can be reproduced overall by the extended three-stream flamelet model. Interestingly, it is found that the predictions of the major species mass fractions are not sensitive to the oxidiser temperature boundary conditions of the flamelet equations in the a priori analyses.
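A minimal sketch of the kind of a priori comparison described here, assuming a flamelet table is queried at the reference (detailed-chemistry) mixture fraction and progress variable and compared with the reference temperature; the table, axes, and reference fields below are synthetic stand-ins, not the authors' data or code:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical flamelet table: temperature over mixture fraction Z and
    # progress variable C (axes and entries are illustrative only).
    Z_axis = np.linspace(0.0, 1.0, 101)
    C_axis = np.linspace(0.0, 1.0, 51)
    Zg, Cg = np.meshgrid(Z_axis, C_axis, indexing="ij")
    T_table = 300.0 + 1900.0 * Cg * np.exp(-20.0 * (Zg - 0.3) ** 2)  # toy profile
    lookup_T = RegularGridInterpolator((Z_axis, C_axis), T_table)

    # "Detailed chemistry" reference fields (here synthetic stand-ins).
    rng = np.random.default_rng(0)
    Z_ref, C_ref = rng.random(10000), rng.random(10000)
    T_ref = 300.0 + 1900.0 * C_ref * np.exp(-20.0 * (Z_ref - 0.3) ** 2)

    # A priori test: evaluate the table at the reference (Z, C) and compare.
    T_tab = lookup_T(np.column_stack([Z_ref, C_ref]))
    print("RMS temperature error [K]:", np.sqrt(np.mean((T_tab - T_ref) ** 2)))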
The hybrid RANS/LES of partially premixed supersonic combustion using G/Z flamelet model
NASA Astrophysics Data System (ADS)
Wu, Jinshui; Wang, Zhenguo; Bai, Xuesong; Sun, Mingbo; Wang, Hongbo
2016-10-01
In order to describe partially premixed supersonic combustion numerically, a G/Z flamelet model is developed and compared with a finite-rate model in hybrid RANS/LES simulations of the strut-injection supersonic combustion flow field designed by the German Aerospace Center. A new temperature calculation method based on time-splitting of the total energy is introduced in the G/Z flamelet model. Simulation results show that the temperature predictions in the partially premixed zone by the G/Z flamelet model are more consistent with the experiment than those of the finite-rate model; in particular, the low-temperature reaction zone behind the strut is well reproduced. Other quantities, such as the average velocity and its fluctuations, obtained with the developed G/Z flamelet model are also in good agreement with the experiment. In addition, the G/Z flamelet results reveal the mechanism of partially premixed supersonic combustion through analyses of the interaction between the turbulent burning velocity and the flow field.
A flamelet model for supersonic non-premixed combustion with pressure variation
NASA Astrophysics Data System (ADS)
Zhao, Guo-Yan; Sun, Ming-Bo; Wu, Jin-Shui; Wang, Hong-Bo
2015-08-01
A modified flamelet model is proposed for studying supersonic combustion with pressure variation, motivated by the fact that the pressure is far from homogeneous in a supersonic combustor. In this model, the flamelet database is tabulated at a reference pressure, while quantities at other pressures are obtained using a sixth-order polynomial expansion in pressure. Because only the coefficients of this expansion need to be computed and stored, the modified model reduces the memory requirement and table lookup time, avoiding the expensive cost of the alternative approach of tabulating flamelets at many different pressure values. Two hydrogen-fuelled scramjet combustors were used to validate the modified flamelet model. It was observed that the temperature in the combustion region is sensitive to the choice of model, which in turn significantly affects the pressure. The results of the modified model were found to be in better agreement with the experimental data than those of the isobaric flamelet model, especially for the temperature, which is predicted more accurately. It is concluded that the modified flamelet model is more effective for cases with a wide range of pressure variation.
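A plausible form of the pressure expansion described above (the notation is illustrative; the authors' exact parameterisation may differ): each tabulated quantity \phi is stored through its expansion coefficients about the reference pressure p_0,

    \phi(Z, \chi_{st}, p) \approx \sum_{k=0}^{6} a_k(Z, \chi_{st}) \left( \frac{p - p_0}{p_0} \right)^{k},

so that only the seven coefficient fields a_k need to be stored instead of a full flamelet table at every pressure level.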
NASA Astrophysics Data System (ADS)
Gao, Zhenxun; Wang, Jingying; Jiang, Chongwen; Lee, Chunhian
2014-11-01
In the framework of Reynolds-averaged Navier-Stokes simulation, supersonic turbulent combustion flows in the German Aerospace Centre (DLR) combustor and the Japan Aerospace Exploration Agency (JAXA) integrated scramjet engine are numerically simulated using the flamelet model. Based on the DLR combustor case, theoretical analysis and numerical experiments show that the finite-rate model only implicitly considers the large-scale turbulent effect and, because it lacks the small-scale non-equilibrium effect, generally overshoots the peak temperature compared to the flamelet model. Furthermore, high-Mach-number compressibility affects the flamelet model mainly in two ways: through the spatial pressure variation and through the static enthalpy variation due to the kinetic energy. In the flamelet library, the mass fractions of the intermediate species, e.g. OH, are more sensitive to these two effects than the major species such as H2O. Additionally, where the pressure in the combustion flowfield is larger than the value adopted in the generation of the flamelet library, or where conversion from static enthalpy to kinetic energy occurs, the temperature obtained by the flamelet model without compressibility corrections is undershot, and vice versa. The static enthalpy variation has only a small influence on the temperature predicted by the flamelet model, while the spatial pressure variation may cause relatively large errors. From the JAXA case, it is found that the flamelet model cannot in general be used for an integrated scramjet engine: the presence of the inlet together with the transverse injection scheme causes large spatial variations of pressure, so the pressure value adopted for the generation of a flamelet library should be fine-tuned according to a pre-simulation of pure mixing.
Flamelet Model Application for Non-Premixed Turbulent Combustion
NASA Technical Reports Server (NTRS)
Secundov, A.; Bezgin, L.; Buriko, Yu.; Guskov, O.; Kopchenov, V.; Laskin, I.; Lomkov, K.; Tshepin, S.; Volkov, D.; Zaitsev, S.
1996-01-01
This Final Report contains the results of a study performed at the Scientific Research Center 'ECOLEN' (Moscow, Russia). The study concerns the development and verification of an inexpensive approach for modelling supersonic turbulent diffusion flames based on a flamelet treatment of the chemistry/turbulence interaction (FL approach). The research included: development of the approach and CFD tests of the flamelet model for supersonic jet flames; development of a simplified procedure for the solution of the flamelet equations based on a partial-equilibrium chemistry assumption; and a study of the flame ignition/extinction predictions provided by the flamelet model. The investigation demonstrated that the FL approach satisfactorily describes the main features of supersonic H2/air jet flames. The model also showed a strong capability to reduce the computational expense of CFD modelling of supersonic flames with detailed oxidation chemistry. However, some disadvantages and restrictions of the existing version of the approach were found in this study: (1) inaccuracy in the predictions of the passive scalar statistics by our turbulence model for one of the considered test cases; and (2) applicability of the available version of the flamelet model only to flames without a large ignition delay distance. Based on the results of this investigation, we formulated and submitted to the National Aeronautics and Space Administration a Project Proposal for the next-step research directed toward further improvement of the FL approach.
An equivalent dissipation rate model for capturing history effects in non-premixed flames
Kundu, Prithwish; Echekki, Tarek; Pei, Yuanjiang; ...
2016-11-11
The effects of strain rate history on turbulent flames have been studied in the past decades with 1D counterflow diffusion flame (CFDF) configurations subjected to oscillating strain rates. In this work, these unsteady effects are studied for complex hydrocarbon fuel surrogates at engine-relevant conditions, with unsteady strain rates representative of those experienced by flamelets in a typical spray flame. Tabulated combustion models are based on a steady scalar dissipation rate (SDR) assumption and hence cannot capture these unsteady strain effects, even though they can capture the unsteady chemistry. In this work, 1D CFDFs with varying strain rates are simulated using two different modeling approaches: the steady SDR assumption and an unsteady flamelet model. Comparative studies show that the history effects due to unsteady SDR are directly proportional to the temporal gradient of the SDR. A new equivalent SDR model based on the history of a flamelet is proposed. An averaging procedure is constructed such that the most recent histories are given higher weights. This equivalent SDR is then used with the steady SDR assumption in 1D flamelets. Results show good agreement between the tabulated flamelet solution and the unsteady flamelet results. The equivalent SDR concept is further implemented and compared against 3D spray flames (Engine Combustion Network Spray A). Tabulated models based on the steady SDR assumption under-predict autoignition and flame lift-off when compared with an unsteady Representative Interactive Flamelet (RIF) model. However, the equivalent SDR model coupled with the tabulated model predicted autoignition and flame lift-off very close to those reported by the RIF model. The model is further validated for a range of injection pressures for Spray A flames. As a result, the new modeling framework enables tabulated models with significantly lower computational cost to account for unsteady history effects.
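A minimal sketch of the kind of history-weighted averaging described above, assuming an exponential weighting that emphasises the most recent scalar dissipation rate values; the kernel and the memory time constant are illustrative assumptions, not the authors' exact formulation:

    import numpy as np

    def equivalent_sdr(chi_history, times, tau=0.5e-3):
        """Weight the SDR history so that recent values dominate.

        chi_history : SDR values chi(t) seen by the flamelet [1/s]
        times       : corresponding time instants [s], increasing
        tau         : assumed memory time constant [s]
        """
        t_now = times[-1]
        weights = np.exp(-(t_now - np.asarray(times)) / tau)
        return np.sum(weights * np.asarray(chi_history)) / np.sum(weights)

    # Example: a flamelet whose SDR relaxed from roughly 550 1/s to 50 1/s.
    times = np.linspace(0.0, 2.0e-3, 50)
    chi = 500.0 * np.exp(-times / 1.0e-3) + 50.0
    print(f"equivalent SDR = {equivalent_sdr(chi, times):.1f} 1/s")

The equivalent SDR computed this way can then be fed to an ordinary steady-SDR flamelet lookup, which is the essence of the speed-up claimed in the abstract.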
Flamelet Formation In Hele-Shaw Flow
NASA Technical Reports Server (NTRS)
Wichman, I. S.; Olson, S. L.
2003-01-01
A Hele-Shaw flow apparatus constructed at Michigan State University (MSU) produces conditions that reduce the influence of buoyancy-driven flows. In addition, in the MSU Hele-Shaw apparatus it is possible to adjust the heat losses from the fuel sample (0.001 in. thick cellulose) and the speed of the approaching oxidizer flow (air) so that the "flamelet regime of flame spread" is entered. In this regime, various features of the flame-to-smolder (and vice versa) transition can be studied. For the relatively wide (approx. 17.5 cm) and long (approx. 20 cm) samples used, approximately ten flamelets existed at all times. The flamelet behavior was studied mechanistically and statistically, and an analysis of the dominant heat transfer mechanisms was conducted. Results indicate that radiation and conduction processes are important, and that a simple 1-D model using the Broido-Shafizadeh model for cellulose decomposition chemistry can describe aspects of the flamelet spread process.
Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame
NASA Astrophysics Data System (ADS)
Lee, Myoungkyu; Oliver, Todd; Moser, Robert
2017-11-01
Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the model inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.
NASA Technical Reports Server (NTRS)
Quinlan, Jesse R.; Drozda, Tomasz G.; McDaniel, James C.; Lacaze, Guilhem; Oefelein, Joseph
2015-01-01
In an effort to make large eddy simulation of hydrocarbon-fueled scramjet combustors more computationally accessible using realistic chemical reaction mechanisms, a compressible flamelet/progress variable (FPV) model was proposed that extends current FPV model formulations to high-speed, compressible flows. Development of this model relied on observations garnered from an a priori analysis of the Reynolds-averaged Navier-Stokes (RANS) data obtained for the Hypersonic International Flight Research and Experimentation (HIFiRE) dual-mode scramjet combustor. The RANS data were obtained using a reduced chemical mechanism for the combustion of a JP-7 surrogate and were validated using available experimental data. These RANS data were then post-processed to obtain, in an a priori fashion, the scalar fields corresponding to an FPV-based modeling approach. In the current work, in addition to the proposed compressible flamelet model, a standard incompressible FPV model was also considered. Several candidate progress variables were investigated for their ability to recover static temperature and major and minor product species. The effects of pressure and temperature on the tabulated progress variable source term were characterized, and model coupling terms embedded in the Reynolds-averaged Navier-Stokes equations were studied. Finally, results for the novel compressible flamelet/progress variable model were presented to demonstrate the improvement attained by modeling the effects of pressure and flamelet boundary conditions on the combustion.
TABULATED EQUIVALENT SDR FLAMELET (TESF) MODEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
KUNDU, PRITHWISH; AMEEN, MUHSIN MOHAMMED; UNNIKRISHNAN, UMESH
The code consists of an implementation of a novel tabulated combustion model for non-premixed flames in CFD solvers. This technique implements an unsteady flamelet tabulation without using progress variables for non-premixed flames. It also has the capability to include history effects, which is unique among tabulated flamelet models. The flamelet table generation code can be run in parallel to generate tables with large chemistry mechanisms in relatively short wall-clock times. The combustion model/code reads these tables. This framework can be coupled with any CFD solver with RANS as well as LES turbulence models, and it enables CFD solvers to run large chemistry mechanisms on large grids at relatively low computational cost. Currently it has been coupled with the Converge CFD code and validated against available experimental data. This model can be used to simulate non-premixed combustion in a variety of applications such as reciprocating engines, gas turbines, and industrial burners operating over a wide range of fuels.
Multidimensional flamelet-generated manifolds for partially premixed combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Phuc-Danh; Vervisch, Luc; Subramanian, Vallinayagam
2010-01-15
Flamelet-generated manifolds have so far been restricted to premixed or diffusion flame archetypes, even though the resulting tables have been applied to nonpremixed and partially premixed flame simulations. By using a projection of the full set of mass conservation species balance equations into a restricted subset of the composition space, unsteady multidimensional flamelet governing equations are derived from first principles, under given hypotheses. During the projection, as in usual one-dimensional flamelets, the tangential strain rate of scalar isosurfaces is expressed in the form of the scalar dissipation rates of the control parameters of the multidimensional flamelet-generated manifold (MFM), which is tested in its five-dimensional form for partially premixed combustion, with two composition space directions and three scalar dissipation rates. It is shown that strain-rate-induced effects can hardly be fully neglected in chemistry tabulation of partially premixed combustion, because of fluxes across iso-equivalence-ratio and iso-progress-of-reaction surfaces. This is illustrated by comparing the 5D flamelet-generated manifold with one-dimensional premixed flame and unsteady strained diffusion flame composition space trajectories. The formal links between the asymptotic behavior of MFM and stratified flames, weakly varying partially premixed fronts, triple flames, and premixed and nonpremixed edge flames are also evidenced.
Laminar flamelet modeling of turbulent diffusion flames
NASA Technical Reports Server (NTRS)
Mell, W. E.; Kosaly, G.; Planche, O.; Poinsot, T.; Ferziger, J. H.
1990-01-01
In modeling turbulent combustion, decoupling the chemistry from the turbulence is of great practical significance. In cases in which the equilibrium chemistry model breaks down, laminar flamelet modeling (LFM) is a promising approach to decoupling. Here, the validity of this approach is investigated using direct numerical simulation of a simple chemical reaction in two-dimensional turbulence.
Unstrained and strained flamelets for LES of premixed combustion
NASA Astrophysics Data System (ADS)
Langella, Ivan; Swaminathan, Nedunchezhian
2016-05-01
The unstrained and strained flamelet closures for the filtered reaction rate in large eddy simulation (LES) of premixed flames are studied. The required sub-grid scale (SGS) PDF in these closures is presumed using the Beta function. The relative performances of these closures are assessed by comparing numerical results from large eddy simulations of piloted Bunsen flames of a stoichiometric methane-air mixture with experimental measurements. The strained flamelet closure is observed to underestimate the burn rate, and thus the reactive scalar mass fractions are under-predicted and the fuel mass fraction over-predicted compared with the unstrained flamelet closure. The physical reasons for this relative behaviour are discussed. The results of the unstrained flamelet closure compare well with the experimental data. The SGS variance of the progress variable required for the presumed PDF is obtained by solving its transport equation. An order-of-magnitude analysis of this equation suggests that the commonly used algebraic model obtained by balancing source and sink terms in this transport equation does not hold. This algebraic model is shown to underestimate the SGS variance substantially, and the implications of this variance model for the filtered reaction rate closures are highlighted.
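A minimal sketch of the presumed Beta-PDF closure referred to above, assuming the filtered reaction rate is obtained by integrating a flamelet rate profile omega(c) against a Beta distribution parameterised by the filtered progress variable and its SGS variance; the flamelet rate profile below is a placeholder, not the authors' tabulated rate:

    import numpy as np
    from scipy.stats import beta
    from scipy.integrate import quad

    def filtered_rate(c_mean, c_var, omega_flamelet):
        """Integrate omega(c) against a presumed Beta SGS PDF of c."""
        # Beta shape parameters from the first two moments of c.
        g = c_mean * (1.0 - c_mean) / max(c_var, 1e-12) - 1.0
        a, b = c_mean * g, (1.0 - c_mean) * g
        integrand = lambda c: omega_flamelet(c) * beta.pdf(c, a, b)
        val, _ = quad(integrand, 0.0, 1.0)
        return val

    # Placeholder flamelet reaction rate peaking near c = 0.8.
    omega = lambda c: c**2 * (1.0 - c) * np.exp(-4.0 * (c - 0.8) ** 2)
    print(filtered_rate(c_mean=0.5, c_var=0.05, omega_flamelet=omega))

The algebraic-versus-transported variance question discussed in the abstract enters through c_var: underestimating it narrows the presumed PDF and changes the filtered rate accordingly.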
Numerical modeling of NO formation in laminar Bunsen flames -- A flamelet approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, C.P.; Chen, J.Y.; Yam, C.G.
1998-08-01
Based on the flamelet concept, a numerical model has been developed for fast predictions of NOx and CO emissions from laminar flames. The model is applied to studying NO formation in the secondary nonpremixed flame zone of fuel-rich methane Bunsen flames. By solving the steady-state flamelet equations with the detailed GRI 2.1 methane-air mechanism, a flamelet library is generated containing thermochemical information for a range of scalar dissipation rates at the ambient pressure condition. NO formation is modeled by solving its conservation equation with the chemical source term evaluated from the flamelet library using the extended Zeldovich mechanism and NO reburning reactions. The optically thin radiation heat transfer model is used to explore the potential effect of heat loss on thermal NO formation. The numerical scheme solves the two-dimensional Navier-Stokes equations as well as three additional equations: the mixture fraction, the NO mass fraction, and the enthalpy deficit due to radiative heat loss. With an established flamelet library, typical computing times are about 5 hours per calculation on a DEC-3000 300LX workstation. The predicted mixing field, radial temperature profiles, and NO distributions compare favorably with recent experimental data obtained by Nguyen et al. The dependence of NOx emission on equivalence ratio is studied numerically and the predictions are found to agree reasonably well with the measurements by Muss. The computed results show a decreasing trend of NOx emission with equivalence ratio but an increasing trend in the CO emission index. By examining this trade-off between NOx and CO, an optimal equivalence ratio of 1.4 is found to yield the lowest combined emission.
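For reference, the extended Zeldovich mechanism invoked above for thermal NO consists of the three reversible steps

    \mathrm{N_2 + O \rightleftharpoons NO + N}, \qquad
    \mathrm{N + O_2 \rightleftharpoons NO + O}, \qquad
    \mathrm{N + OH \rightleftharpoons NO + H},

whose net rate is evaluated here from the temperature and radical concentrations stored in the flamelet library.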
Results from flamelet and non-flamelet models for supersonic combustion
NASA Astrophysics Data System (ADS)
Ladeinde, Foluso; Li, Wenhai
2017-11-01
Air-breathing propulsion systems (scramjets) have been identified as a viable alternative to rocket engines for improved efficiency. A scramjet engine, which operates at flight Mach numbers around 7 or above, is characterized by supersonic flow conditions in the combustor. In a dual-mode scramjet, this is possible because of the relatively low equivalence ratio and high stagnation temperature, which together inhibit thermal choking downstream of the transverse injectors. The flamelet method has been our choice for turbulence-combustion interaction modeling, and we have extended the basic approach in several directions, with a focus on the way the pressure and progress variable are modeled. Improved results have been obtained. We have also examined non-flamelet models, including laminar chemistry (QL), the eddy dissipation concept (EDC), and the partially stirred reactor (PaSR). The pressure/progress-variable-corrected simulations give better results than the original model, with reaction rates that are lower than those from EDC and PaSR. In general, QL tends to over-predict the reaction rate for the supersonic combustion problems investigated in our work.
Computational Analysis of Spray Jet Flames
NASA Astrophysics Data System (ADS)
Jain, Utsav
There is growing utilization of renewable energy sources, but for high-energy-density applications combustion will never be obsolete. Spray combustion is a type of multiphase combustion with tremendous engineering applications in different fields, ranging from energy conversion devices to rocket propulsion systems. Developing accurate computational models for turbulent spray combustion is vital for improving the design of combustors and making them energy efficient. Flamelet models have been extensively used for gas-phase combustion because of their relatively low computational cost in modeling the turbulence-chemistry interaction using a low-dimensional manifold approach. This framework is designed for gas-phase non-premixed combustion, and its implementation is not straightforward for multiphase and multi-regime combustion such as spray combustion, because of the use of a conserved scalar and various flamelet-related assumptions. The mixture fraction has been popularly employed as a conserved scalar and hence used to parameterize the characteristics of gaseous flamelets. However, for spray combustion the mixture fraction is not monotonic and does not give a unique mapping with which to parameterize the structure of spray flames. In order to develop a flamelet-type model for spray flames, a new variable called the mixing variable is introduced, which acts as an ideal conserved scalar and takes into account the convection and evaporation of fuel droplets. In addition to the conserved scalar, it has been observed that although gaseous flamelets can be characterized by the conserved scalar and its dissipation, this might not be true for spray flamelets. Droplet dynamics has a significant influence on the spray flamelet, and because of effects such as flame penetration by droplets and oscillation of droplets across the stagnation plane, it becomes important to accommodate their influence in the flamelet formulation. In order to identify the droplet parameters needed, a rigorous parametric study is conducted for five different parameters in both physical and mixing-variable space. The parametric study is conducted for a counterflow setup with n-heptane and inert nitrogen on the fuel side and oxygen with inert nitrogen on the oxidizer side. The computational setup (the temperature and velocity field) is validated against experimental data from the Yale heptane counterflow flame. The five parameters investigated are: aerodynamic strain rate, initial droplet diameter, number of fuel droplets, droplet velocity slip ratio, and pre-vaporization ratio. Such studies have been performed before, but little work exists for heavier fuels such as n-heptane (a crucial reference fuel for octane ratings in various applications), and parameters such as the droplet slip ratio and pre-vaporization ratio have not been studied carefully in the past. It is observed that although the slip ratio is not very significant in spray flamelet characterization, the pre-vaporization ratio is important and has an interesting influence on the spray flamelet structure. In the future, based on the current parametric study, a laminar spray flamelet library can be generated and eventually integrated to predict turbulent spray flames.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds-averaged simulations and large eddy simulations of non-premixed turbulent combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus of this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution, the β distribution, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
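The error-function profile referred to in the last sentence is, in the usual counterflow-based flamelet modelling (standard background from the flamelet literature, not a result of this paper),

    \chi(Z) = \chi_{st}\,
    \frac{\exp\!\left(-2\left[\operatorname{erfc}^{-1}(2Z)\right]^{2}\right)}
         {\exp\!\left(-2\left[\operatorname{erfc}^{-1}(2Z_{st})\right]^{2}\right)},

so that the whole dissipation-rate profile is parameterised by its stoichiometric value \chi_{st}; the paper's point is that heat release distorts this shape in the DNS.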
The evolution equation for the flame surface density in turbulent premixed combustion
NASA Technical Reports Server (NTRS)
Trouve, Arnaud
1993-01-01
The mean reaction rate in flamelet models for turbulent premixed combustion depends on two basic quantities: a mean chemical rate, called the flamelet speed, and the flame surface density. Our previous work was primarily focused on the problem of the structure and topology of turbulent premixed flames, and it was determined that the flamelet speed, when space-averaged, is only weakly sensitive to the turbulent flow field. Consequently, the flame surface density is the key quantity that conveys most of the effects of the turbulence on the rate of energy release. In flamelet models, this quantity is obtained via a modeled transport equation called the Sigma-equation. Past theoretical work has produced a rigorous approach that leads to an exact but unclosed formulation of the turbulent Sigma-equation. In the exact Sigma-equation, the dynamical properties of the flame surface density are determined by a single parameter, namely the turbulent flame stretch. Unfortunately, neither the turbulent flame stretch nor the flame surface density is available from experiments, and, in the absence of experimental data, little is known about the validity of the closure assumptions used in current flamelet models. Direct Numerical Simulation (DNS) is the alternative approach to obtain basic information on these fundamental quantities. In the present work, three-dimensional DNS of premixed flames in isotropic turbulent flow is used to estimate the different terms appearing in the Sigma-equation. A new methodology is proposed to provide the source and sink terms for the flame surface density, resolved both temporally and spatially throughout the turbulent flame brush. Using this methodology, our objective is to extract the turbulent flame stretch from the DNS database and then perform extensive comparisons with flamelet models. Thanks to the detailed information produced by the DNS-based analysis, it is expected that this type of comparison will not only underscore the shortcomings of current models, but also suggest ways to improve them.
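For context, one widely quoted exact but unclosed form of the Sigma-equation (following the flame surface density literature; the notation here is the standard one rather than taken from this report) reads

    \frac{\partial \Sigma}{\partial t}
    + \nabla\!\cdot\!\left(\langle \mathbf{u} \rangle_s \Sigma\right)
    + \nabla\!\cdot\!\left(\langle w\,\mathbf{n} \rangle_s \Sigma\right)
    = \left\langle (\delta_{ij} - n_i n_j)\,\frac{\partial u_i}{\partial x_j}
    + w\,\nabla\!\cdot\!\mathbf{n} \right\rangle_s \Sigma ,

where \langle\cdot\rangle_s denotes a surface average, \mathbf{n} the flame normal, and w the local displacement speed; the right-hand side is the surface-averaged flame stretch that the DNS analysis aims to extract.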
NASA Astrophysics Data System (ADS)
Hu, Yong; Olguin, Hernan; Gutheil, Eva
2017-05-01
A spray flamelet/progress variable approach is developed for use in spray combustion with partly pre-vaporised liquid fuel, where a laminar spray flamelet library accounts for evaporation within the laminar flame structures. For this purpose, the standard spray flamelet formulation for pure evaporating liquid fuel and oxidiser is extended by a chemical reaction progress variable in both the turbulent spray flame model and the laminar spray flame structures, in order to account for the effect of pre-vaporised liquid fuel, for instance through use of a pilot flame. This new approach is combined with a transported joint probability density function (PDF) method for the simulation of a turbulent piloted ethanol/air spray flame, and the extension requires the formulation of a joint three-variate PDF depending on the gas-phase mixture fraction, the chemical reaction progress variable, and the gas enthalpy. The molecular mixing is modelled with the extended interaction-by-exchange-with-the-mean (IEM) model, where source terms account for spray evaporation and heat exchange due to evaporation as well as the chemical reaction rate for the chemical reaction progress variable. This is the first formulation using a spray flamelet model that considers both evaporation and partly pre-vaporised liquid fuel within the laminar spray flamelets. Results with this new formulation show good agreement with the experimental data provided by A.R. Masri, Sydney, Australia. The analysis of the Lagrangian statistics of the gas temperature and the OH mass fraction indicates that partially premixed combustion prevails near the nozzle exit of the spray, whereas further downstream, the non-premixed flame is promoted towards the inner rich side of the spray jet since the pilot flame heats up the premixed inner spray zone. In summary, the simulation with the new formulation considering the reaction progress variable shows good performance, greatly improving on the standard formulation, and it provides new insight into the local structure of this complex spray flame.
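For orientation, the basic IEM micromixing model referred to above relaxes each stochastic particle's composition towards the local mean; in a generic form, with the spray and reaction contributions of the extended formulation indicated only schematically,

    \frac{d\phi^{*}}{dt}
    = -\frac{C_{\phi}}{2}\,\frac{\tilde{\varepsilon}}{\tilde{k}}
      \left(\phi^{*} - \tilde{\phi}\right)
      + S_{\mathrm{evap}} + S_{\mathrm{chem}},

with C_\phi \approx 2, where S_evap and S_chem stand for the evaporation and chemical source terms introduced in this work (their detailed forms are given in the paper, not here).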
A Priori Analysis of Flamelet-Based Modeling for a Dual-Mode Scramjet Combustor
NASA Technical Reports Server (NTRS)
Quinlan, Jesse R.; McDaniel, James C.; Drozda, Tomasz G.; Lacaze, Guilhem; Oefelein, Joseph
2014-01-01
An a priori investigation of the applicability of flamelet-based combustion models to dual-mode scramjet combustion was performed utilizing Reynolds-averaged simulations (RAS). For this purpose, the HIFiRE Direct Connect Rig (HDCR) flowpath, fueled with a JP-7 fuel surrogate and operating in dual- and scram-mode, was considered. The chemistry of the JP-7 fuel surrogate was modeled using a 22-species, 18-step chemical reaction mechanism. Simulation results were compared to experimentally obtained, time-averaged wall pressure measurements to validate the RAS solutions. The analysis of the dual-mode operation of this flowpath showed regions of predominantly non-premixed, high-Damkohler-number combustion. Regions of premixed combustion were also present but were associated with only a small fraction of the total heat release in the flow. This is in contrast to the scram-mode operation, where comparable amounts of heat are released by the non-premixed and premixed combustion modes. Representative flamelet boundary conditions were estimated by analyzing probability density functions of temperature and pressure for pure fuel and oxidizer conditions. The results of the present study reveal the potential for a flamelet model to accurately model the combustion processes in the HDCR and likely other high-speed flowpaths of engineering interest.
Near-Limit Flamelet Phenomena in Buoyant Low Stretch Diffusion Flames Beneath a Solid Fuel
NASA Technical Reports Server (NTRS)
Olson, S. L.; Tien, J. S.
2000-01-01
A unique near-limit low-stretch multidimensional stable flamelet phenomenon has been observed for the first time, which extends the material flammability limit beyond the one-dimensional low-stretch flammability limit to lower burning rates and higher relative heat losses than are possible with uniform flame coverage. During low-stretch experiments burning the underside of very large radius (≥ 75 cm, stretch rate ≤ 3/s) cylindrical cast PMMA samples, multidimensional flamelets were observed, in contrast with a one-dimensional flame that was found to blanket the surface for smaller-radius samples (higher stretch rates). Flamelets were observed by decreasing the stretch rate or by increasing the conductive heat loss from the flame. Flamelets are defined as flames that cover only part of the burning sample at any given time, but persist for many minutes. The flamelet phenomenon is viewed as the flame's method of enhancing oxygen flow to the flame, through oxygen transport into the edges of the flamelet. Flamelets form as heat losses (surface radiation and solid-phase conduction) become large relative to the weakened heat release of the low-stretch flame. While heat loss rates remain fairly constant, the limiting factor in the heat release of the flame is hypothesized to be the oxygen transport to the flame in this low-stretch (low-convection) environment. Flamelet extinction is frequently caused by encroachment of an adjacent flamelet. Large-scale whole-body flamelet oscillations at 1.2-1.95 Hz are noted prior to extinction of a flamelet. This oscillation is believed to be due to a repeated process of excess fuel leakage through the dark channels between the flamelets, fuel premixing with the slow incoming oxidizer, and subsequent rapid flame spread and retreat of the flamelet through the premixed layer. The oscillation frequency is driven by gas-phase diffusive time scales.
Full numerical simulation of coflowing, axisymmetric jet diffusion flames
NASA Technical Reports Server (NTRS)
Mahalingam, S.; Cantwell, B. J.; Ferziger, J. H.
1990-01-01
The near field of a non-premixed flame in a low speed, coflowing axisymmetric jet is investigated numerically using full simulation. The time-dependent governing equations are solved by a second-order, explicit finite difference scheme and a single-step, finite rate model is used to represent the chemistry. Steady laminar flame results show the correct dependence of flame height on Peclet number and reaction zone thickness on Damkoehler number. Forced simulations reveal a large difference in the instantaneous structure of scalar dissipation fields between nonbuoyant and buoyant cases. In the former, the scalar dissipation marks intense reaction zones, supporting the flamelet concept; however, results suggest that flamelet modeling assumptions need to be reexamined. In the latter, this correspondence breaks down, suggesting that modifications to the flamelet modeling approach are needed in buoyant turbulent diffusion flames.
Passive turbulent flamelet propagation
NASA Technical Reports Server (NTRS)
Ashurst, William T.; Ruetsch, G. R.; Lund, T. S.
1994-01-01
We analyze results of a premixed constant density flame propagating in three-dimensional turbulence, where a flame model developed by Kerstein, et al. (1988) has been used. Simulations with constant and evolving velocity fields are used, where peculiar results were obtained from the constant velocity field runs. Data from the evolving flow runs with various flame speeds are used to determine two-point correlations of the fluctuating scalar field and implications for flamelet modeling are discussed.
A flamelet model for transcritical LOx/GCH4 flames
NASA Astrophysics Data System (ADS)
Müller, Hagen; Pfitzner, Michael
2017-03-01
This work presents a numerical framework to efficiently simulate methane combustion at supercritical pressures. An LES flamelet approach is adapted to account for real-gas thermodynamic effects, which are a prominent feature of flames at near-critical injection conditions. The thermodynamics model is based on the Peng-Robinson equation of state (PR-EoS) in conjunction with a novel volume-translation method to correct deficiencies in the transcritical regime. The resulting formulation is more accurate than standard cubic EoSs without deteriorating their good computational performance. To consistently account for pressure and strain fluctuations in the flamelet model, an additional enthalpy equation is solved along with the transport equations for the mixture fraction and mixture fraction variance. The method is validated against available experimental data for a laboratory-scale LOx/GCH4 flame at conditions that resemble those in liquid-propellant rocket engines. The LES result is in good agreement with the measured OH* radiation.
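For reference, the Peng-Robinson equation of state underlying this thermodynamics model has the standard form

    p = \frac{R T}{v - b} - \frac{a\,\alpha(T)}{v^{2} + 2 b v - b^{2}},

where a and b are computed from the critical properties of the mixture and \alpha(T) accounts for the temperature dependence of the attractive term; the volume-translation correction mentioned above shifts the molar volume v by a translation term, and its specific construction is given in the paper rather than reproduced here.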
Study of Turbulent Premixed Flame Propagation using a Laminar Flamelet Model
NASA Technical Reports Server (NTRS)
Im, H. G.
1995-01-01
The laminar flamelet concept in turbulent reacting flows is considered applicable to many practical combustion systems (Linan & Williams 1993). For turbulent premixed combustion, the laminar flamelet regime is valid when the turbulent Karlovitz number is less than unity, which is equivalent to stating that the characteristic thickness of the flame is less than that of a Kolmogorov eddy; this is known as the Klimov-Williams criterion (Williams 1985). In such a case, the flame maintains its laminar structure, and the effect of the turbulent flow is merely to wrinkle and strain the flame front. The propagating wrinkled premixed flame can then be described as an infinitesimally thin surface dividing the unburnt fresh mixture from the burnt products.
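In the usual notation, the Klimov-Williams criterion cited here can be written as

    \mathrm{Ka} \;=\; \frac{t_{F}}{t_{\eta}} \;\approx\; \left(\frac{\delta_{L}}{\eta}\right)^{2} \;<\; 1,

where t_F and \delta_L are the chemical time scale and laminar flame thickness, and t_\eta and \eta are the Kolmogorov time and length scales, so that Ka < 1 indeed corresponds to a flame thinner than the smallest turbulent eddies.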
NASA Technical Reports Server (NTRS)
Richardson, Brian; Kenny, Jeremy
2015-01-01
Injector design is a critical part of the development of a rocket Thrust Chamber Assembly (TCA). Proper detailed injector design can maximize propulsion efficiency while minimizing the potential for failures in the combustion chamber. Traditional design and analysis methods for hydrocarbon-fuel injector elements are based heavily on empirical data and models developed from heritage hardware tests. Using this limited set of data produces challenges when trying to design a new propulsion system where the operating conditions may greatly differ from heritage applications. Time-accurate, Three-Dimensional (3-D) Computational Fluid Dynamics (CFD) modeling of combusting flows inside of injectors has long been a goal of the fluid analysis group at Marshall Space Flight Center (MSFC) and the larger CFD modeling community. CFD simulation can provide insight into the design and function of an injector that cannot be obtained easily through testing or empirical comparisons to existing hardware. However, the traditional finite-rate chemistry modeling approach utilized to simulate combusting flows for complex fuels, such as Rocket Propellant-2 (RP-2), is prohibitively expensive and time consuming even with a large amount of computational resources. MSFC has been working, in partnership with Streamline Numerics, Inc., to develop a computationally efficient, flamelet-based approach for modeling complex combusting flow applications. In this work, a flamelet modeling approach is used to simulate time-accurate, 3-D, combusting flow inside a single Gas Centered Swirl Coaxial (GCSC) injector using the flow solver, Loci-STREAM. CFD simulations were performed for several different injector geometries. Results of the CFD analysis helped guide the design of the injector from an initial concept to a tested prototype. The results of the CFD analysis are compared to data gathered from several hot-fire, single element injector tests performed in the Air Force Research Lab EC-1 test facility located at Edwards Air Force Base.
Eulerian particle flamelet modeling of a bluff-body CH4/H2 flame
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odedra, Anand; Malalasekera, W.
2007-11-15
In this paper an axisymmetric RANS simulation of a bluff-body stabilized flame has been attempted using steady and unsteady flamelet models. The unsteady effects are considered in a postprocessing manner through the Eulerian particle flamelet model (EPFM). In this model the transient history of the scalar dissipation rate, conditioned on the stoichiometric mixture fraction, is required to generate unsteady flamelets and is obtained by tracing Eulerian particles. In this approach, unsteady convective-diffusive transport equations are solved to account for the transport of Eulerian particles in the domain. Comparisons of the results of steady and unsteady calculations show that transient effects do not have much influence on the major species, including OH, and the structure of the flame can therefore be successfully predicted by either the steady or the unsteady approach. However, it appears that slow processes such as NO formation can only be captured accurately if unsteady effects are taken into account, while steady simulations tend to overpredict NO. In this work turbulence has been modeled using the Reynolds stress model. Predictions of the velocity, velocity rms, mean mixture fraction, and its rms show very good agreement with experiments. The performance of three detailed chemical mechanisms, GRI Mech 2.11, the San Diego mechanism, and GRI Mech 3.0, has also been evaluated in this study. All three mechanisms performed well with both steady and unsteady approaches and produced almost identical results for the major species and OH. However, the difference between mechanisms and flamelet models becomes clearly apparent in the NO predictions. The unsteady model incorporating GRI Mech 2.11 provided better predictions of NO than the steady calculations and showed close agreement with experiments. The other two mechanisms overpredicted NO with both the unsteady and steady models; the level of overprediction is severe with the steady approach. GRI Mech 3.0 appears to overpredict NO by a factor of 2 compared to GRI Mech 2.11, and the NO predictions of the San Diego mechanism fall between those of the two GRI mechanisms. The present study demonstrates the success of the EPFM model, which, when used with the GRI 2.11 mechanism, predicts all flame properties, major and minor species very well, and most importantly the correct NO levels.
NASA Astrophysics Data System (ADS)
Ruan, Shaohong; Swaminathan, Nedunchezhian; Darbyshire, Oliver
2014-03-01
This study focuses on the modelling of turbulent lifted jet flames using flamelets and a presumed Probability Density Function (PDF) approach, with interest in both the flame lift-off height and the flame brush structure. First, flamelet models used to capture contributions from the premixed and non-premixed modes of the partially premixed combustion in the lifted jet flame are assessed using Direct Numerical Simulation (DNS) data for a turbulent lifted hydrogen jet flame. The joint PDFs of mixture fraction Z and progress variable c, including their statistical correlation, are obtained using a copula method, which is also validated using the DNS data. The statistically independent PDFs are found to be generally inadequate to represent the joint PDFs from the DNS data. The effects of the Z-c correlation and the contribution from the non-premixed combustion mode on the flame lift-off height are studied systematically by including one effect at a time in the simulations used for a posteriori validation. A simple model including the effects of chemical kinetics and scalar dissipation rate is suggested and used for the non-premixed combustion contributions. The results clearly show that both the Z-c correlation and non-premixed combustion effects are required in the premixed flamelets approach to obtain good agreement with the measured flame lift-off heights as a function of jet velocity. The flame brush structure reported in earlier experimental studies is also captured reasonably well at various axial positions. It seems that flame stabilisation is influenced by both premixed and non-premixed combustion modes, and by their mutual influences.
NASA Astrophysics Data System (ADS)
Consalvi, Jean-Louis
2017-01-01
The time-averaged Radiative Transfer Equation (RTE) introduces two unclosed terms, known as 'absorption Turbulence-Radiation Interaction (TRI)' and 'emission TRI'. Emission TRI is related to the non-linear coupling between fluctuations of the absorption coefficient and fluctuations of the Planck function and can be described without introducing any approximation by using a transported PDF method. In this study, a hybrid flamelet/Stochastic Eulerian Field model is used to solve the transport equation of the one-point, one-time PDF. In this formulation, the steady laminar flamelet model (SLF) is coupled to a joint Probability Density Function (PDF) of mixture fraction, enthalpy defect, scalar dissipation rate, and soot quantities, and the PDF transport equation is solved by using a Stochastic Eulerian Field (SEF) method. Soot production is modeled by a semi-empirical model, and the spectral dependence of the radiatively participating species, namely the combustion products and soot, is computed by using a Narrow-Band Correlated-k (NBCK) model. The model is applied to simulate an ethylene/methane turbulent jet flame burning in an oxygen-enriched environment. Model results are compared with the experiments, and the effects of taking emission TRI into account on the flame structure, soot production, and radiative loss are discussed.
Statistics for laminar flamelet modeling
NASA Technical Reports Server (NTRS)
Cant, R. S.; Rutland, C. J.; Trouve, A.
1990-01-01
Statistical information required to support modeling of turbulent premixed combustion by laminar flamelet methods is extracted from a database of Direct Numerical Simulation results for turbulent flames. The simulations were carried out previously by Rutland (1989) using a pseudo-spectral code on a three-dimensional mesh of 128 points in each direction. One-step Arrhenius chemistry was employed together with small heat release. A framework for the interpretation of the data is provided by the Bray-Moss-Libby model for the mean turbulent reaction rate. Probability density functions are obtained over surfaces of constant reaction progress variable for the tangential strain rate and the principal curvature. New insights are gained which will greatly aid the development of modeling approaches.
Tabulated Combustion Model Development For Non-Premixed Flames
NASA Astrophysics Data System (ADS)
Kundu, Prithwish
Turbulent non-premixed flames play a very important role in engineering, with applications ranging from power generation to propulsion. The coupling of fluid mechanics and the complicated combustion chemistry of fuels poses a challenge for the numerical modeling of these types of problems. Combustion modeling in Computational Fluid Dynamics (CFD) is one of the most important tools used for predictive modeling of complex systems and for understanding the fundamentals of combustion. Traditional combustion models solve a transport equation for each species with a source term. In order to resolve the complex chemistry accurately it is important to include a large number of species; however, the computational cost is generally proportional to the cube of the number of species. The presence of a large number of species in a flame makes the use of CFD computationally expensive and beyond reach for some applications, or inaccurate when solved with simplified chemistry. For highly turbulent flows, it also becomes important to incorporate the effects of turbulence-chemistry interaction (TCI). The aim of this work is to develop high-fidelity combustion models based on the flamelet concept and to significantly advance the existing capabilities. A thorough investigation of existing models (finite-rate chemistry and the Representative Interactive Flamelet (RIF) model) and a comparative study of combustion models were first carried out on a constant-volume combustion chamber with diesel fuel injection. The CFD modeling was validated against experimental results and was also successfully applied to a single-cylinder diesel engine. The effects of the number of flamelets in the RIF model and of flamelet initialization strategies were studied. Since the RIF model with multiple flamelets is computationally expensive, a new model was proposed within the RIF framework. The new model is based on tabulated chemistry and incorporates TCI effects. A multidimensional tabulated chemistry database generation code was developed based on a 1D diffusion flame solver. The proposed model does not use progress variables like the traditional chemistry tabulation methods. The resulting model demonstrated an order-of-magnitude computational speed-up over the RIF model. The results were validated across a wide range of operating conditions for diesel injection and were in close agreement with the experimental data. The history of scalar dissipation rates plays a very important role in non-premixed flames, yet tabulated methods have not been able to incorporate this physics. A comparative approach is developed that can quantify these effects and find correlations with flow variables. A new model is proposed to include these effects in tabulated combustion models. The model is initially validated for 1D counterflow diffusion flame problems at engine conditions, and is further implemented and validated in a 3D RANS code across a range of operating conditions for spray flames.
Investigation of combustion characteristics in a scramjet combustor using a modified flamelet model
NASA Astrophysics Data System (ADS)
Zhao, Guoyan; Sun, Mingbo; Wang, Hongbo; Ouyang, Hao
2018-07-01
In this study, the characteristics of supersonic combustion inside an ethylene-fueled scramjet combustor equipped with multiple cavities were investigated for different injection schemes. Experimental results showed that the flames concentrated in the cavities and in the separated boundary layer downstream of the cavities, and that they occupied the flow channel, further enhancing the bulk flow compression. The flame structure in the distributed injection scheme differed from that in the centralized injection scheme. In the numerical simulations, a modified flamelet model was introduced to account for the fact that the pressure distribution is far from homogeneous inside the scramjet combustor. Compared with the original flamelet model, numerical predictions based on the modified model showed better agreement with the experimental results, validating the reliability of the calculations. Based on the modified model, simulations with different injection schemes were analysed. The predicted flame structure agreed reasonably well with the experimental observations. The CO mass was concentrated in the cavity and in the subsonic region adjacent to the cavity shear layer, leading to intense heat release. Compared with the centralized scheme, the higher jet mixing efficiency in the distributed scheme induced intense combustion in the posterior upper cavity and downstream of the cavity. The streamlines and isolation surfaces showed that combustion at the trail of the lower cavity was depressed, since the bulk flow downstream of the cavity is pushed down.
Evaluation of different flamelet tabulation methods for laminar spray combustion
NASA Astrophysics Data System (ADS)
Luo, Yujuan; Wen, Xu; Wang, Haiou; Luo, Kun; Fan, Jianren
2018-05-01
In this work, three different flamelet tabulation methods for spray combustion are evaluated. Major differences among these methods lie in the treatment of the temperature boundary conditions of the flamelet equations. Particularly, in the first tabulation method ("M1"), both the fuel and oxidizer temperature boundary conditions are set to be fixed. In the second tabulation method ("M2"), the fuel temperature boundary condition is varied while the oxidizer temperature boundary condition is fixed. In the third tabulation method ("M3"), both the fuel and oxidizer temperature boundary conditions are varied and set to be equal. The focus of this work is to investigate whether the heat transfer between the droplet phase and gas phase can be represented by the studied tabulation methods through a priori analyses. To this end, spray flames stabilized in a three-dimensional counterflow are first simulated with detailed chemistry. Then, the trajectory variables are calculated from the detailed chemistry solutions. Finally, the tabulated thermo-chemical quantities are compared to the corresponding values from the detailed chemistry solutions. The comparisons show that the gas temperature cannot be predicted by "M1" with only a mixture fraction and reaction progress variable being the trajectory variables. The gas temperature can be correctly predicted by both "M2" and "M3," in which the total enthalpy is introduced as an additional manifold. In "M2," variations of the oxidizer temperature are considered with a temperature modification technique, which is not required in "M3." Interestingly, it is found that the mass fractions of the reactants and major products are not sensitive to the representation of the interphase heat transfer in the flamelet chemtables, and they can be correctly predicted by all tabulation methods. By contrast, the intermediate species CO and H2 in the premixed flame reaction zone are over-predicted by all tabulation methods.
NASA Astrophysics Data System (ADS)
Hernandez Perez, Francisco E.; Im, Hong G.; Lee, Bok Jik; Fancello, Alessio; Donini, Andrea; van Oijen, Jeroen A.; de Goey, L. Philip H.
2017-11-01
Large eddy simulations (LES) of a turbulent premixed jet flame in a confined chamber are performed employing the flamelet-generated manifold (FGM) method for tabulation of chemical kinetics and thermochemical properties, as well as the OpenFOAM framework for computational fluid dynamics. The burner has been experimentally studied by Lammel et al. (2011) and features an off-center nozzle, feeding a preheated lean methane-air mixture with an equivalence ratio of 0.71 and mean velocity of 90 m/s, at 573 K and atmospheric pressure. Conductive heat loss is accounted for in the FGM tabulation via burner-stabilized flamelets and the subgrid-scale (SGS) turbulence-chemistry interaction is modeled via presumed filtered density functions. The impact of heat loss inclusion as well as SGS modeling for both the SGS stresses and SGS variance of progress variable on the numerical results is investigated. Comparisons of the LES results against measurements show a significant improvement in the prediction of temperature when heat losses are incorporated into FGM. While further enhancements in the LES results are accomplished by using SGS models based on transported quantities and/or dynamically computed coefficients as compared to the Smagorinsky model, heat loss inclusion is more relevant. This research was sponsored by King Abdullah University of Science and Technology (KAUST) and made use of computational resources at KAUST Supercomputing Laboratory.
NASA Astrophysics Data System (ADS)
Saghafian, Amirreza; Pitsch, Heinz
2012-11-01
A compressible flamelet/progress variable approach (CFPV) has been devised for high-speed flows. Temperature is computed from the transported total energy and tabulated species mass fractions and the source term of the progress variable is rescaled with pressure and temperature. The combustion is thus modeled by three additional scalar equations and a chemistry table that is computed in a pre-processing step. Three-dimensional direct numerical simulation (DNS) databases of reacting supersonic turbulent mixing layer with detailed chemistry are analyzed to assess the underlying assumptions of CFPV. Large eddy simulations (LES) of the same configuration using the CFPV method have been performed and compared with the DNS results. The LES computations are based on the presumed subgrid PDFs of mixture fraction and progress variable, beta function and delta function respectively, which are assessed using DNS databases. The flamelet equation budget is also computed to verify the validity of CFPV method for high-speed flows.
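The following sketch illustrates the general idea of rescaling a tabulated progress-variable source term with the local pressure and temperature. The functional form and the constants (exponents a, b and activation temperature Ta) are illustrative assumptions, not the published CFPV coefficients.

```python
# Hedged sketch of a pressure/temperature rescaling of a tabulated progress
# variable source term, in the spirit of a compressible flamelet/progress
# variable approach. Form and constants are assumptions for illustration only.
import numpy as np

def rescale_source(omega_tab, p, T, p_ref=101325.0, T_ref=1800.0,
                   a=1.8, b=0.5, Ta=15000.0):
    """Rescale a tabulated progress-variable source term to local p and T."""
    pressure_factor = (p / p_ref) ** a                         # assumed pressure exponent
    arrhenius_factor = np.exp(-Ta * (1.0 / T - 1.0 / T_ref))   # assumed activation temperature
    return omega_tab * pressure_factor * (T / T_ref) ** b * arrhenius_factor

# Example: tabulated source at reference conditions, evaluated at a higher
# pressure and slightly lower temperature typical of a supersonic combustor cell.
print(rescale_source(omega_tab=250.0, p=2.0e5, T=1650.0))
```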
Premixed Edge-Flames in Spatially-Varying Straining Flows
NASA Technical Reports Server (NTRS)
Liu, Jian-Bang; Ronney, Paul D.
1999-01-01
Flames subject to temporally and spatially uniform hydrodynamic strain are frequently used to model the local interactions of flame fronts with turbulent flow fields (Williams, 1985; Peters, 1986; Bradley, 1992). The applicability of laminar flamelet models in strongly turbulent flows has been questioned recently (Shay and Ronney, 1998) because in turbulent flows the strain rate (sigma) changes at rates comparable to sigma itself, and the scale over which the flame front curvature and sigma change is comparable to the curvature scale itself. Therefore quasi-static, local models of turbulent strain and curvature effects on laminar flamelets may not be accurate under the conditions where strain and curvature effects are most significant. The purpose of this study is to examine flames in spatially varying strain and compare their properties to those of uniformly strained flames.
NASA Astrophysics Data System (ADS)
Wen, Xu; Luo, Kun; Jin, Hanhui; Fan, Jianren
2017-09-01
An extended flamelet/progress variable (EFPV) model for simulating pulverised coal combustion (PCC) in the context of large eddy simulation (LES) is proposed, in which devolatilisation, char surface reaction and radiation are all taken into account. The pulverised coal particles are tracked in the Lagrangian framework with various sub-models, and the sub-grid scale (SGS) effects of turbulent velocity and scalar fluctuations on the coal particles are modelled by the velocity-scalar joint filtered density function (VSJFDF) model. The presented model is then evaluated by performing LES of an experimental piloted coal jet flame and comparing the numerical results with the experimental data and with the results from the eddy break up (EBU) model. Detailed quantitative comparisons are carried out. It is found that the proposed model performs much better than the EBU model in predicting the radial velocity and species concentrations. Compared with its adiabatic counterpart, the predicted temperature is evidently lower and agrees well with the experimental data when the conditional sampling method is adopted.
Technology for Transient Simulation of Vibration during Combustion Process in Rocket Thruster
NASA Astrophysics Data System (ADS)
Zubanov, V. M.; Stepanov, D. V.; Shabliy, L. S.
2018-01-01
The article describes the technology for simulation of transient combustion processes in the rocket thruster for determining the vibration frequency that occurs during combustion. The engine operates on gaseous propellants: oxygen and hydrogen. Combustion simulation was performed using the ANSYS CFX software. Three reaction mechanisms for the stationary mode were considered and described in detail. A way to obtain quick CFD results with intermediate combustion components using an EDM model was found. The way to generate the Flamelet library with CFX-RIF was described. A technique for modeling transient combustion processes in the rocket thruster was proposed based on the Flamelet library. A cyclic irregularity of the temperature field, resembling vortex core precession, was detected in the chamber. The frequency of flame precession was obtained with the proposed simulation technique.
Modeling and calculation of turbulent lifted diffusion flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, J.P.H.; Lamers, A.P.G.G.
1994-01-01
Liftoff heights of turbulent diffusion flames have been modeled using the laminar diffusion flamelet concept of Peters and Williams. The strain rate of the smallest eddies is used as the stretch describing parameter, instead of the more common scalar dissipation rate. The h(U) curve, which is the mean liftoff height as a function of fuel exit velocity, can be accurately predicted, while this was impossible with the scalar dissipation rate. Liftoff calculations performed in the flames as well as in the equivalent isothermal jets, using a standard k-[epsilon] turbulence model, yield approximately the same correct slope for the h(U) curve, while the offset has to be reproduced by choosing an appropriate coefficient in the strain rate model. For the flame calculations a model for the pdf of the fluctuating flame base is proposed. The results are insensitive to its width. The temperature field is qualitatively different from the field calculated by Bradley et al., who used a premixed flamelet model for diffusion flames.
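A minimal sketch of the stretch parameter used above: the strain rate of the smallest eddies can be estimated from the turbulence dissipation rate and the kinematic viscosity as sqrt(epsilon/nu). The numerical values below are placeholders; in a liftoff calculation this rate would be compared with a quenching strain rate at the flame base.

```python
# Sketch of the Kolmogorov-scale strain rate used as the flamelet stretch
# parameter instead of the scalar dissipation rate. Values are illustrative.
import numpy as np

def kolmogorov_strain(epsilon, nu):
    """Strain rate of the smallest eddies [1/s] from dissipation rate and viscosity."""
    return np.sqrt(epsilon / nu)

epsilon = 5.0e3   # m^2/s^3, from a k-epsilon field of the jet (assumed value)
nu = 1.6e-5       # m^2/s, kinematic viscosity of the unburnt mixture (assumed)
print(kolmogorov_strain(epsilon, nu))  # compare against a quenching strain rate a_q
```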
NASA Technical Reports Server (NTRS)
Mantel, T.
1993-01-01
Although the different regimes of premixed combustion are not well defined, most of the recent developments in turbulent combustion modeling are led in the so-called flamelet regime. The goal of these models is to give a realistic expression to the mean reaction rate (w). Several methods can be used to estimate (w). Bray and coworkers (Libby & Bray 1980, Bray 1985, Bray & Libby 1986) express the instantaneous reaction rate by means of a flamelet library and a frequency which describes the local interaction between the laminar flamelets and the turbulent flowfield. In another approach, the mean reaction rate can be directly connected to the flame surface density (Sigma). This quantity can be given by the transport equation of the coherent flame model initially proposed by Marble & Broadwell 1977 and developed elsewhere. The mean reaction rate, (w), can also be estimated from the evolution of an arbitrary scalar field G(x, t) = G(sub O) which represents the flame sheet. G(x, t) is obtained from the G-equation proposed by Williams 1985, Kerstein et al. 1988 and Peters 1993. Another possibility, proposed in a recent study by Mantel & Borghi 1991, is to use a transport equation for the mean dissipation rate (epsilon(sub c)) of the progress variable c to determine (w). In their model, Mantel & Borghi 1991 considered a medium with constant density and constant diffusivity in the determination of the transport equation for (epsilon(sub c)). A comparison of different flamelet models made by Duclos et al. 1993 shows the realistic behavior of this model even in the case of constant density. Our objective in the present report is to present preliminary results on the study of this equation in the case of variable density and variable diffusivity. Assumptions of constant pressure and a Lewis number equal to unity allow us to significantly simplify the equation. A systematic order-of-magnitude analysis based on adequate scale relations is performed on each term of the equation. As in the case of constant density and constant diffusivity, the effects of stretching of the scalar field by the turbulent strain field, of local curvature, and of chemical reactions are predominant. In this preliminary work, we suggest closure models for certain terms, which will be validated after comparisons with DNS data.
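As a small worked example of the flame-surface-density route to the mean reaction rate mentioned above, the sketch below evaluates the commonly used closure w_bar = rho_u * s_L * I0 * Sigma. The stretch factor I0 and the sample values are assumptions for illustration.

```python
# Hedged sketch of a flame-surface-density closure for the mean reaction rate:
# unburnt density times laminar flame speed times flame surface density, with an
# optional stretch factor I0. All numbers are illustrative placeholders.
import numpy as np

def mean_reaction_rate(rho_u, s_L, sigma, I0=1.0):
    """Mean fuel consumption rate per unit volume, w_bar = rho_u * s_L * I0 * Sigma."""
    return rho_u * s_L * I0 * sigma

rho_u = 1.15                                  # kg/m^3, unburnt density (assumed)
s_L = 0.4                                     # m/s, unstretched laminar flame speed (assumed)
sigma = np.array([50.0, 120.0, 200.0])        # 1/m, sample flame surface densities
print(mean_reaction_rate(rho_u, s_L, sigma))  # kg/(m^3 s)
```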
Turbulent Radiation Effects in HSCT Combustor Rich Zone
NASA Technical Reports Server (NTRS)
Hall, Robert J.; Vranos, Alexander; Yu, Weiduo
1998-01-01
A joint UTRC-University of Connecticut theoretical program was based on describing coupled soot formation and radiation in turbulent flows using stretched flamelet theory. This effort involved using the model jet fuel kinetics mechanism to predict soot growth in flamelets at elevated pressure, incorporating an efficient model for turbulent thermal radiation into a discrete transfer radiation code, and coupling the soot growth, flowfield, and radiation algorithms. The soot calculations used a recently developed opposed jet code which couples the dynamical equations of size-class-dependent particle growth with complex chemistry. Several of the tasks represent technical firsts; among these are the prediction of soot from a detailed jet fuel kinetics mechanism, the inclusion of pressure effects in the soot particle growth equations, and the inclusion of the efficient turbulent radiation algorithm in a combustor code.
A mixing timescale model for TPDF simulations of turbulent premixed flames
Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...
2017-02-06
Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.
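The sketch below contrasts the baseline constant mechanical-to-scalar mixing timescale ratio closure with a schematic blend toward a flamelet-like mixing rate at high Damköhler number. The blending function, the C_phi value and the flamelet mixing rate are illustrative assumptions, not the model proposed in the paper.

```python
# Sketch of scalar mixing rate closures for TPDF methods: the constant
# mechanical-to-scalar timescale ratio baseline, and an illustrative blend
# toward a flamelet mixing rate at high Damkohler number (assumed form).
def mixing_rate_constant_ratio(k, epsilon, C_phi=2.0):
    """Scalar mixing rate omega_phi = (C_phi / 2) * epsilon / k."""
    return 0.5 * C_phi * epsilon / k

def mixing_rate_blended(k, epsilon, omega_flamelet, Da, C_phi=2.0):
    """Illustrative blend between turbulence-dominated and flamelet mixing rates."""
    w = Da / (1.0 + Da)   # crude blending weight in Damkohler number (assumption)
    return (1.0 - w) * mixing_rate_constant_ratio(k, epsilon, C_phi) + w * omega_flamelet

# Example with placeholder turbulence quantities and an assumed flamelet rate.
print(mixing_rate_blended(k=12.0, epsilon=900.0, omega_flamelet=450.0, Da=3.0))
```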
The evolution equation for the flame surface density in turbulent premixed combustion
NASA Technical Reports Server (NTRS)
Trouve, A.; Poinsot, T.
1992-01-01
One central ingredient in flamelet models for turbulent premixed combustion is the flame surface density. This quantity conveys most of the effects of the turbulence on the rate of energy release and is obtained via a modeled transport equation, called the Sigma-equation. Past theoretical work has produced a rigorous approach that leads to an exact, but unclosed, formulation for the turbulent Sigma-equation. In this exact Sigma-equation, it appears that the dynamical properties of the flame surface density are determined by a single parameter, namely the turbulent flame stretch. Unfortunately, the flame surface density and the turbulent flame stretch are not available from experiments and, in the absence of experimental data, little is known on the validity of the closure assumptions used in current flamelet models. Direct Numerical Simulation (DNS) is the obvious, complementary approach to get basic information on these fundamental quantities. Three-dimensional DNS of premixed flames in isotropic turbulent flow is used to estimate the different terms appearing in the Sigma-equation. A new methodology is proposed to provide the source and sink terms for the flame surface density, resolved both temporally and spatially throughout the turbulent flame brush. Using this methodology, the effects of the Lewis number on the rate of production of flame surface area are described in great detail and meaningful comparisons with flamelet models can be performed. The analysis reveals in particular the tendency of the models to overpredict flame surface dissipation as well as their inability to reproduce variations due to thermo-diffusive phenomena. Thanks to the detailed information produced by a DNS-based analysis, this type of comparison not only underscores the shortcomings of current models but also suggests ways to improve them.
Assessment of the Eulerian particle flamelet model for nonpremixed turbulent jet flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seong-Ku; Kim, Yongmo
2008-07-15
Although the Eulerian particle flamelet model (EPFM) recently proposed by Barths et al. [Proc. Combust. Inst. 27 (1998) 1841-1847] has shown the potential to realistically predict detailed pollutant (NO{sub x}, soot) formation in turbulent reacting flows occurring within practical combustion devices, there still exists room to improve its predictive capability in terms of local flame structure and turbulence-chemistry interaction. In this study, the EPFM approach was applied to simulate two turbulent nonpremixed jet flames of CO/H{sub 2}/N{sub 2} fuel having the same jet Reynolds number but different nozzle diameters, and the capability of predicting the NO{sub x} formation, as well as both the similarity of major species and the sensitivity of minor species to fluid-dynamic scaling for the two flames, has been assessed in depth in terms of both conditional and unconditional mean structures. The present results indicate that the original EPFM substantially overpredicts the conditional scalar dissipation rate in the downstream region and consequently underpredicts the streamwise decay of superequilibrium radical concentrations to the equilibrium state. In this study, in order to correctly estimate the averaged conditional scalar dissipation rate, a new model of the conditional scalar dissipation rate based on a least-squares fit through a mass-weighted spatial distribution has been devised. In terms of both conditional and unconditional means, the EPFM utilizing this new procedure yields nearly the same results as the Lagrangian flamelet model, and provides closer agreement with experimental data than the original EPFM approach. (author)
New Species of Fire Discovered: Fingering Flamelets Form a Dynamic Population
NASA Technical Reports Server (NTRS)
Olson, Sandra L.; Miller, Fletcher J.; Wichman, Indrek S.
2005-01-01
Poets and artists have long used fire as a metaphor for life. At the NASA Glenn Research Center, recent experiments in a subcritical Rayleigh number flow channel demonstrated that this analogy holds up surprisingly well when tools developed to characterize a biological population are applied to a class of fire that occurs in near-extinction, weakly convective environments (such as microgravity) or in vertically confined spaces (such as our apparatus). Under these conditions, the flame breaks into numerous 'flamelets' that form a Turing-type reaction-diffusion fingering pattern as they spread across the fuel. It is standard practice on U.S. spacecraft for the astronaut crew to turn off the ventilation to help extinguish a fire, both to eliminate the fresh oxygen supply and to reduce the distribution of the smoke. When crew members think that the fire is fully extinguished, they reactivate the ventilation system to clear the smoke. However, some flamelets can survive, and our experiments have demonstrated that flamelets quickly grow into a large fire when ventilation increases.
NASA Astrophysics Data System (ADS)
Donini, A.; Martin, S. M.; Bastiaans, R. J. M.; van Oijen, J. A.; de Goey, L. P. H.
2013-10-01
In the present paper a computational analysis of high-pressure confined premixed turbulent methane/air jet flames is presented. In this scope, chemistry is reduced by the use of the Flamelet Generated Manifold method [1] and the fluid flow is modeled in an LES and RANS context. The reaction evolution is described by the reaction progress variable, the heat loss is described by the enthalpy, and the effect of turbulence on the reaction is represented by the progress variable variance. The interaction between chemistry and turbulence is considered through a presumed probability density function (PDF) approach. The use of FGM as a combustion model shows that combustion features at gas turbine conditions can be satisfactorily reproduced with a reasonable computational effort. Furthermore, the present analysis indicates that the physical and chemical processes controlling carbon monoxide (CO) emissions can be captured only by means of unsteady simulations.
Design and Fabrication of a Hele-Shaw Apparatus for Observing Instabilities of Diffusion Flames
NASA Technical Reports Server (NTRS)
Wichman, I. S.; Oravecz-Simpkins, L.; Olson, S.
2001-01-01
Examinations of flame fronts spreading over solid fuels in an opposed flow of oxidizer have shown that the flame front fragments into smaller (cellular) flames. These 'flamelets' will oscillate, recombine, or extinguish, indicating that they are in the near extinction limit regime (i.e., to one side of the quenching branch of the flammability map). Onset of unstable cellular flamelet formation for flame spread over thin fuels occurs when a heat-sink substrate is placed a small distance from the underside of the fuel. This heat-sink substrate (or backing) displaces the quenching branch of the flammability map in a direction that causes the instabilities to occur at higher air velocities. Similar near-limit behavior has been observed in other works using different fuels, thus suggesting that these dynamic mechanisms are fuel-independent and therefore fundamental attributes of flames in this near-limit flame spread regime. The objective of this project is to determine the contributions of the hydrodynamic and thermodiffusive mechanisms to the observed formation of flame instabilities. From this, a model of diffusion flame instabilities shall be generated. Previously, experiments were conducted in NASA drop towers, thereby limiting observation time to O(1-5 sec). The NASA tests exhibited flamelet survival for the entire drop time, suggesting that flamelets (i.e., small cellular flames) might exist, if permitted, for longer time periods. By necessity, experiments were limited to thermally thin cellulose fuels (approximately 0.001 in thick): instabilities could form by virtue of faster spread rates over thin fuels. Unstable behavior was unlikely in the short drop time for thicker fuels. In the International Space Station (ISS), microgravity time is unlimited, so both thin and thick fuels can be tested.
Evaluation of flamelet/progress variable model for laminar pulverized coal combustion
NASA Astrophysics Data System (ADS)
Wen, Xu; Wang, Haiou; Luo, Yujuan; Luo, Kun; Fan, Jianren
2017-08-01
In the present work, the flamelet/progress variable (FPV) approach based on two mixture fractions is formulated for pulverized coal combustion and then evaluated in laminar counterflow coal flames under different operating conditions through both a priori and a posteriori analyses. Two mixture fractions, Zvol and Zchar, are defined to characterize the mixing between the oxidizer and the volatile matter/char reaction products. A coordinate transformation is conducted to map the flamelet solutions from a unit triangle space (Zvol, Zchar) to a unit square space (Z, X) so that a more stable solution can be achieved. To consider the heat transfers between the coal particle phase and the gas phase, the total enthalpy is introduced as an additional manifold. As a result, the thermo-chemical quantities are parameterized as a function of the mixture fraction Z, the mixing parameter X, the normalized total enthalpy Hnorm, and the reaction progress variable YPV. The validity of the flamelet chemtable and the selected trajectory variables is first evaluated in a priori tests by comparing the tabulated quantities with the results obtained from numerical simulations with detailed chemistry. The comparisons show that the major species mass fractions can be predicted by the FPV approach in all combustion regions for all operating conditions, while the CO and H2 mass fractions are over-predicted in the premixed flame reaction zone. The a posteriori study shows that overall good agreement between the FPV results and those obtained from detailed chemistry simulations can be achieved, although the coal particle ignition is predicted to be slightly earlier. Overall, the validity of the FPV approach for laminar pulverized coal combustion is confirmed and its performance in turbulent pulverized coal combustion will be tested in future work.
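A hedged sketch of the kind of coordinate transformation described above is given below: the two coal mixture fractions (Zvol, Zchar), which live on a unit triangle, are mapped to a total mixture fraction Z and a mixing parameter X on the unit square. The specific definitions used here (Z as the sum, X as the char share of the mixed fuel) are a common choice and an assumption, not necessarily the authors' exact mapping.

```python
# Sketch of mapping two coal mixture fractions from the unit triangle
# (Zvol + Zchar <= 1) to a unit-square parameterisation (Z, X). The definitions
# below are illustrative assumptions.
import numpy as np

def triangle_to_square(z_vol, z_char, eps=1.0e-12):
    """Map (Zvol, Zchar) to a total mixture fraction Z and a mixing parameter X."""
    z = z_vol + z_char                  # total mixture fraction
    x = z_char / np.maximum(z, eps)     # share of the mixed fuel coming from char products
    return z, x

print(triangle_to_square(np.array([0.03, 0.10]), np.array([0.01, 0.05])))
```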
NASA Astrophysics Data System (ADS)
Bojko, Brian T.
Accounting for the effects of finite rate chemistry in reacting flows is intractable when considering the number of species and reactions to be solved for during a large scale flow simulation. This is especially complicated when solid/liquid fuels are also considered. While modeling the reacting boundary layer with the use of finite-rate chemistry may allow for a highly accurate description of the coupling between the flame and fuel surface, it is not tractable in large scale simulations when considering detailed chemical kinetics. It is the goal of this research to investigate a Flamelet-Generated Manifold (FGM) method in order to reduce the finite rate chemistry to a lookup table cataloged by progress variables and queried during runtime. In this study, simplified unsteady 1D flames with mass blowing are considered for a solid biomass fuel where the FGM method is employed as a model reduction strategy for potential application to multidimensional calculations. Two types of FGM are considered. The first is a set of steady-state flames differentiated by their scalar dissipation rate. Results show that the use of steady flames produces unacceptable errors compared to the finite-rate chemistry solution, with temperature errors in excess of 45%. To avoid these errors, a new methodology for developing an unsteady FGM (UFGM) is presented that accounts for unsteady diffusion effects and greatly reduces errors in temperature, with differences that are under 10%. The FGM modeling is then extended to individual droplet combustion with the development of a Droplet Flamelet-Generated Manifold (DFGM) to account for the effects of finite-rate chemistry of individual droplets. A spherically symmetric droplet model is developed for methanol and aluminum. The inclusion of finite-rate chemistry allows the capturing of the transition from diffusion-controlled to kinetically controlled combustion as the droplet diameter decreases. The droplet model is then used to create a DFGM by successively solving the 1D flame equations at varying drop sizes, where the source terms for energy, mixture fraction, and progress variable are cataloged as a function of normalized diameter. A unique coupling of the DFGM and planar UFGM is developed and is used to account for individual droplet and gas phase combustion processes in turbulent combustion situations, such as spray flames, particle laden blasts, etc. The DFGMs for the methanol and aluminum droplets are used in mixed Eulerian and Eulerian-Lagrangian formulations of compressible multiphase flows. System level simulations are conducted and compared with experimental data for a methanol spray flame and an aluminized blast studied at the Explosives Components Facility (ECF) at Sandia National Laboratories.
NASA Astrophysics Data System (ADS)
Ulitsky, Mark
1997-11-01
A model for premixed turbulent combustion in the so-called 'flamelet regime' has been developed. This regime, often referred to as the fast chemistry or high Damkohler number regime, is characterized by turbulent length and time scales that are much larger and slower than the flame thickness and reaction time scales, respectively. There is currently great interest in trying to better understand flamelet combustion, as many practical devices (i.e., spark ignition engines, gas turbines, etc.) have been found to operate in this regime. Before a model could be developed, however, it was first necessary to ascertain which part of the turbulence (either the nearly Gaussian background turbulence or the tube-like coherent vortical structures) was responsible for the multi-scale wrinkling of the flame surface. This question motivated a DNS study of flames passing through both structure-containing and structure-free isotropic turbulence. After it was determined that the presence of the coherent structures was merely ancillary in terms of increasing the surface area of the flame, a spectral model based on the EDQNM (Eddy Damped Quasi Normal Markovian) theory of turbulence was developed. This theory implicitly assumes that joint distributions of the fluctuating velocity components are nearly Gaussian, and as only spectra are transported in this model, there is no direct information about any of the coherent structures which might be embedded within the flow field. One of the advantages of this model is that both the Reynolds number and the ratio of the rms fluctuating velocity to the laminar flame speed can be varied independently. To test the model's ability to capture the nonlinear dynamics of the governing field equation, a DNS study was performed and both steady-state and transient single- and two-point statistics were compared. Finally, the model was compared to two-point experimental measurements taken from a lean premixed methane-air flame.
Turbulent flame spreading mechanisms after spark ignition
NASA Astrophysics Data System (ADS)
Subramanian, V.; Domingo, Pascale; Vervisch, Luc
2009-12-01
Numerical simulation of forced ignition is performed in the framework of Large-Eddy Simulation (LES) combined with a tabulated detailed chemistry approach. The objective is to reproduce the flame properties observed in a recent experimental work reporting the probability of ignition in a laboratory-scale burner operating with a methane/air non-premixed mixture [1]. The smallest scales of chemical phenomena, which are unresolved by the LES grid, are approximated with a flamelet model combined with presumed probability density functions, to account for the unresolved part of turbulent fluctuations of species and temperature. One-dimensional flamelets are simulated using GRI-3.0 [2] and tabulated under a set of parameters describing the local mixing and progress of reaction. A non-reacting case was simulated first, to study the unsteady velocity and mixture fields. The time-averaged velocity and mixture fraction, and their respective turbulent fluctuations, are compared against the experimental measurements in order to estimate the prediction capabilities of LES. The time history of the axial and radial components of velocity and of the mixture fraction is accumulated and analysed for different burner regimes. Based on this information, spark ignition is mimicked at selected ignition spots and the dynamics of kernel development is analysed and compared against the experimental observations. The possible link between the success or failure of the ignition and the flow conditions (in terms of velocity and composition) at the sparking time is then explored.
An Investigation of a Hybrid Mixing Model for PDF Simulations of Turbulent Premixed Flames
NASA Astrophysics Data System (ADS)
Zhou, Hua; Li, Shan; Wang, Hu; Ren, Zhuyin
2015-11-01
Predictive simulations of turbulent premixed flames over a wide range of Damköhler numbers in the framework of the Probability Density Function (PDF) method still remain challenging due to deficiencies in current micro-mixing models. In this work, a hybrid micro-mixing model, valid in both the flamelet regime and the broken reaction zone regime, is proposed. A priori testing of this model is first performed by examining the conditional scalar dissipation rate and conditional scalar diffusion in a 3-D direct numerical simulation dataset of a temporally evolving turbulent slot jet flame of lean premixed H2-air in the thin reaction zone regime. Then, this new model is applied to PDF simulations of the Piloted Premixed Jet Burner (PPJB) flames, which are a set of highly sheared turbulent premixed flames and feature strong turbulence-chemistry interaction at high Reynolds and Karlovitz numbers. Supported by NSFC 51476087 and NSFC 91441202.
Using the tabulated diffusion flamelet model ADF-PCM to simulate a lifted methane-air jet flame
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michel, Jean-Baptiste; Colin, Olivier; Angelberger, Christian
2009-07-15
Two formulations of a turbulent combustion model based on the approximated diffusion flame presumed conditional moment (ADF-PCM) approach [J.-B. Michel, O. Colin, D. Veynante, Combust. Flame 152 (2008) 80-99] are presented. The aim is to describe autoignition and combustion in nonpremixed and partially premixed turbulent flames, while accounting for complex chemistry effects at a low computational cost. The starting point is the computation of approximate diffusion flames by solving the flamelet equation for the progress variable only, reading all chemical terms such as reaction rates or mass fractions from an FPI-type look-up table built from autoigniting PSR calculations using complex chemistry. These flamelets are then used to generate a turbulent look-up table where mean values are estimated by integration over presumed probability density functions. Two different versions of ADF-PCM are presented, differing by the probability density functions used to describe the evolution of the stoichiometric scalar dissipation rate: a Dirac function centered on the mean value for the basic ADF-PCM formulation, and a lognormal function for the improved formulation referenced ADF-PCM{chi}. The turbulent look-up table is read in the CFD code in the same manner as for PCM models. The developed models have been implemented into the compressible RANS CFD code IFP-C3D and applied to the simulation of the Cabra et al. experiment of a lifted methane jet flame [R. Cabra, J. Chen, R. Dibble, A. Karpetis, R. Barlow, Combust. Flame 143 (2005) 491-506]. The ADF-PCM{chi} model accurately reproduces the experimental lift-off height, while it is underpredicted by the basic ADF-PCM model. The ADF-PCM{chi} model shows a very satisfactory reproduction of the experimental mean and fluctuating values of major species mass fractions and temperature, while ADF-PCM yields noticeable deviations. Finally, a comparison of the experimental conditional probability densities of the progress variable for a given mixture fraction with model predictions is performed, showing that ADF-PCM{chi} reproduces the experimentally observed bimodal shape and its dependency on the mixture fraction, whereas ADF-PCM cannot retrieve this shape. (author)
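The sketch below illustrates the difference between the two presumed PDFs for the stoichiometric scalar dissipation rate mentioned above: a Dirac function at the mean versus a lognormal distribution with the same mean. The flamelet response function used here is a made-up placeholder; in practice the averaged quantities would be tabulated reaction rates or mass fractions.

```python
# Sketch: averaging a flamelet response over a Dirac PDF (evaluate at the mean)
# versus a lognormal PDF of the stoichiometric scalar dissipation rate.
# The response function and sigma_ln are illustrative placeholders.
import numpy as np

def f_response(chi_st):
    """Placeholder flamelet response, decreasing with dissipation rate."""
    return 1.0 / (1.0 + chi_st)

def mean_dirac(chi_mean):
    return f_response(chi_mean)

def mean_lognormal(chi_mean, sigma_ln=1.0, n_samples=200000, seed=0):
    """Average f over a lognormal PDF whose mean equals chi_mean."""
    rng = np.random.default_rng(seed)
    mu = np.log(chi_mean) - 0.5 * sigma_ln ** 2   # ensures E[chi] = chi_mean
    chi = rng.lognormal(mean=mu, sigma=sigma_ln, size=n_samples)
    return f_response(chi).mean()

print(mean_dirac(5.0), mean_lognormal(5.0))  # fluctuations change the mean response
```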
Grid Resolution Effects on LES of a Piloted Methane-Air Flame
2009-05-20
respectively. In the LES momentum equation, Eq. (3), the Smagorinsky model is used to obtain the deviatoric part of the unclosed SGS stress τij... accurately predicted from integration of their LES evolution equations; and (ii), the flamelet parametrization should adequately approximate the... effect of the complex small-scale turbulence/chemistry interactions is modeled in an affordable way by a combustion model. A question of how a particular
Modeling of turbulent chemical reaction
NASA Technical Reports Server (NTRS)
Chen, J.-Y.
1995-01-01
Viewgraphs are presented on modeling turbulent reacting flows, regimes of turbulent combustion, regimes of premixed and regimes of non-premixed turbulent combustion, chemical closure models, flamelet model, conditional moment closure (CMC), NO(x) emissions from turbulent H2 jet flames, probability density function (PDF), departures from chemical equilibrium, mixing models for PDF methods, comparison of predicted and measured H2O mass fractions in turbulent nonpremixed jet flames, experimental evidence of preferential diffusion in turbulent jet flames, and computation of turbulent reacting flows.
NASA Astrophysics Data System (ADS)
Zubanov, V. M.; Stepanov, D. V.; Shabliy, L. S.
2017-01-01
The article describes a method for simulation of transient combustion processes in the rocket engine. The engine operates on gaseous propellants: oxygen and hydrogen. Combustion simulation was performed using the ANSYS CFX software. Three reaction mechanisms for the stationary mode were considered and described in detail. The reaction mechanisms were taken from several sources and verified. The method for converting ozone properties from the Shomate equation to the NASA-polynomial format was described in detail. A way to obtain quick CFD results with intermediate combustion components using an EDM model was found. Modeling difficulties with the Finite Rate Chemistry combustion model, associated with a large scatter in the reference data, were identified and described. The way to generate the Flamelet library with CFX-RIF is described. The reaction mechanisms found adequate and verified at steady state have also been tested for transient simulation. The Flamelet combustion model was recognized as adequate for the transient mode. The variation of the integral parameters is consistent with the values obtained in the stationary simulation. A cyclic irregularity of the temperature field, caused by precession of the vortex core, was detected in the chamber with the proposed simulation technique. Investigation of unsteady processes in rocket engines, including ignition processes, is proposed as the area of application for the described simulation technique.
NASA Astrophysics Data System (ADS)
Furukawa, Junichi; Noguchi, Yoshiki; Hirano, Toshisuke; Williams, Forman A.
2002-07-01
The density change across premixed flames propagating in turbulent flows modifies the turbulence. The nature of that modification depends on the regime of turbulent combustion, the burner design, the orientation of the turbulent flame and the position within the flame. The present study addresses statistically stationary turbulent combustion in the flame-sheet regime, in which the laminar-flame thickness is less than the Kolmogorov scale, for flames stabilized on a vertically oriented cylindrical burner having fully developed upward turbulent pipe flow upstream from the exit. Under these conditions, rapidly moving wrinkled laminar flamelets form the axisymmetric turbulent flame brush that is attached to the burner exit. Predictions have been made of changes in turbulence properties across laminar flamelets in such situations, but very few measurements have been performed to test the predictions. The present work measures individual velocity changes and changes in turbulence across flamelets at different positions in the turbulent flame brush for three different equivalence ratios, for comparison with theory.
Modeling and simulation of combustion dynamics in lean-premixed swirl-stabilized gas-turbine engines
NASA Astrophysics Data System (ADS)
Huang, Ying
This research focuses on the modeling and simulation of combustion dynamics in lean-premixed gas-turbine engines. The primary objectives are: (1) to establish an efficient and accurate numerical framework for the treatment of unsteady flame dynamics; and (2) to investigate the parameters and mechanisms responsible for driving flow oscillations in a lean-premixed gas-turbine combustor. The energy transfer mechanisms among mean flow motions, periodic motions and background turbulent motions in turbulent reacting flow are first explored using a triple decomposition technique. Then a comprehensive numerical study of the combustion dynamics in a lean-premixed swirl-stabilized combustor is performed. The analysis treats the conservation equations in three dimensions and takes into account finite-rate chemical reactions and variable thermophysical properties. Turbulence closure is achieved using a large-eddy-simulation (LES) technique. The compressible-flow version of the Smagorinsky model is employed to describe subgrid-scale turbulent motions and their effect on large-scale structures. A level-set flamelet library approach is used to simulate premixed turbulent combustion. In this approach, the mean flame location is modeled using a level-set G-equation, where G is defined as a distance function. Thermophysical properties are obtained using a presumed probability density function (PDF) along with a laminar flamelet library. The governing equations and the associated boundary conditions are solved by means of a four-step Runge-Kutta scheme along with the implementation of the message passing interface (MPI) parallel computing architecture. The analysis allows for a detailed investigation into the interaction between turbulent flow motions and oscillatory combustion of a swirl-stabilized injector. Results show good agreement with an analytical solution and experimental data in terms of acoustic properties and flame evolution. A study of flame bifurcation from a stable state to an unstable state indicates that the inlet flow temperature and equivalence ratio are the two most important variables determining the stability characteristics of the combustor. Under unstable operating conditions, several physical processes responsible for driving combustion instabilities in the chamber have been identified and quantified. These processes include vortex shedding and acoustic interaction, coupling between the flame evolution and local flow oscillations, vortex and flame interaction, and coupling between heat release and acoustic motions. The effects of inlet swirl number on the flow development and flame dynamics in the chamber are also carefully studied. In the last part of this thesis, an analytical model is developed using triple decomposition techniques to model the combustion response of turbulent premixed flames to acoustic oscillations.
Understanding the ignition mechanism of high-pressure spray flames
Dahms, Rainer N.; Paczko, Günter A.; Skeen, Scott A.; ...
2016-10-25
A conceptual model for turbulent ignition in high-pressure spray flames is presented. The model is motivated by first-principles simulations and optical diagnostics applied to the Sandia n-dodecane experiment. The Lagrangian flamelet equations are combined with full LLNL kinetics (2755 species; 11,173 reactions) to resolve all time and length scales and chemical pathways of the ignition process at engine-relevant pressures and turbulence intensities unattainable using classic DNS. The first-principles value of the flamelet equations is established by a novel chemical explosive mode-diffusion time scale analysis of the fully-coupled chemical and turbulent time scales. Contrary to conventional wisdom, this analysis reveals that the high Damköhler number limit, a key requirement for the validity of the flamelet derivation from the reactive Navier–Stokes equations, applies during the entire ignition process. Corroborating Rayleigh-scattering and formaldehyde-PLIF measurements with simultaneous schlieren imaging of mixing and combustion are presented. Our combined analysis establishes a characteristic temporal evolution of the ignition process. First, a localized first-stage ignition event consistently occurs in the highest-temperature mixture regions. This initiates, owing to the intense scalar dissipation, a turbulent cool flame wave propagating from this ignition spot through the entire flow field. This wave significantly decreases the ignition delay of lower-temperature mixture regions in comparison to their homogeneous reference. This explains the experimentally observed formaldehyde formation across the entire spray head prior to high-temperature ignition, which consistently occurs first in a broad range of rich mixture regions. There, the combination of first-stage ignition delay, shortened by the cool flame wave, and the subsequent delay until second-stage ignition becomes minimal. A turbulent flame subsequently propagates rapidly through the entire mixture over time scales consistent with experimental observations. As a result, we demonstrate that the neglect of turbulence-chemistry interactions fundamentally fails to capture the key features of this ignition process.
NASA Astrophysics Data System (ADS)
Consalvi, J. L.; Nmira, F.
2016-03-01
The main objective of this article is to quantify the influence of the soot absorption coefficient-Planck function correlation on radiative loss and flame structure in an oxygen-enhanced propane turbulent diffusion flame. Calculations were run with and without accounting for this correlation by using a standard k-ε model and the steady laminar flamelet model (SLF) coupled to a joint Probability Density Function (PDF) of mixture fraction, enthalpy defect, scalar dissipation rate, and soot quantities. The PDF transport equation is solved by using a Stochastic Eulerian Field (SEF) method. The modeling of soot production is carried out by using a flamelet-based semi-empirical acetylene/benzene soot model. Radiative heat transfer is modeled by using a wide band correlated-k model and turbulent radiation interactions (TRI) are accounted for by using the Optically-Thin Fluctuation Approximation (OTFA). Predicted soot volume fraction, radiant wall heat flux distribution and radiant fraction are in good agreement with the available experimental data. Model results show that soot absorption coefficient and Planck function are negatively correlated in the region of intense soot emission. Neglecting this correlation is found to increase significantly the radiative loss leading to a substantial impact on flame structure in terms of mean and rms values of temperature. In addition mean and rms values of soot volume fraction are found to be less sensitive to the correlation than temperature since soot formation occurs mainly in a region where its influence is low.
A Method for Large Eddy Simulation of Acoustic Combustion Instabilities
NASA Astrophysics Data System (ADS)
Wall, Clifton; Pierce, Charles; Moin, Parviz
2002-11-01
A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Both of these characteristics suggest the use of larger time steps than those allowed by an acoustic CFL condition. The turbulent combustion model used is the Combined Conserved Scalar/Level Set Flamelet model of Duchamp de Lageneste and Pitsch for partially premixed combustion. Comparison of LES results to the experiments of Besson et al. will be presented.
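To make the structural change concrete, the sketch below solves a one-dimensional Helmholtz problem of the form p'' - lam*p = f with a simple finite-difference system; setting lam = 0 recovers the Poisson system of the incompressible pressure-correction step. This is only an illustration of the modified linear system, not the actual LES solver, and lam and the source are assumed values.

```python
# Sketch: the pressure system of a compressible pressure-correction step, where
# the Poisson operator gains a Helmholtz term. 1-D, homogeneous Dirichlet ends,
# uniform grid; lam and the source are illustrative placeholders.
import numpy as np

def solve_helmholtz_1d(f, dx, lam):
    """Solve p'' - lam*p = f with p = 0 at both ends on a uniform grid."""
    n = f.size
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, idx] = -2.0 / dx**2 - lam          # lam = 0 recovers the Poisson system
    A[idx[:-1], idx[:-1] + 1] = 1.0 / dx**2   # superdiagonal
    A[idx[1:], idx[1:] - 1] = 1.0 / dx**2     # subdiagonal
    return np.linalg.solve(A, f)

x = np.linspace(0.0, 1.0, 101)[1:-1]
f = np.sin(np.pi * x)                         # placeholder source from the corrector step
print(solve_helmholtz_1d(f, dx=x[1] - x[0], lam=25.0)[:3])
```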
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Chao; Lignell, David O.; Hawkes, Evatt R.
Here, the effect of differential molecular diffusion (DMD) in turbulent non-premixed flames is studied by examining two previously reported DNS of temporally evolving planar jet flames, one with CO/H2 as the fuel and the other with C2H4 as the fuel. The effect of DMD in the CO/H2 DNS flames, in which H2 is part of the fuel, is found to behave similarly to a laminar flamelet, while in the C2H4 DNS flames, in which H2 is not present in the fuel, it is similar to a laminar flamelet in the early stages but becomes different from the laminar flamelet later. The scaling of the effect of DMD with respect to the Reynolds number Re is investigated in the CO/H2 DNS flames, and an evident power law scaling (~Re^(-a), with a a positive constant) is observed. The scaling of the effect of DMD with respect to the Damkohler number Da is explored in both laminar counter-flow jet C2H4 diffusion flames and the C2H4 DNS flames. A power law scaling (~Da^a, with a a positive constant) is clearly demonstrated for C2H4 nonpremixed flames.
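As a small illustration of how such a power-law exponent could be extracted, the sketch below fits log(DMD measure) against log(Re) with a straight line. The (Re, dmd) samples are synthetic placeholders, not values from the cited DNS.

```python
# Sketch: extracting a power-law exponent for the decay of a DMD measure with
# Reynolds number, ~ Re^(-a), by a linear fit in log-log space. Synthetic data.
import numpy as np

Re = np.array([1200.0, 2400.0, 4800.0, 9600.0])
dmd = np.array([0.080, 0.045, 0.026, 0.015])   # placeholder DMD measure per case

slope, intercept = np.polyfit(np.log(Re), np.log(dmd), 1)
a = -slope
print(f"fitted exponent a = {a:.2f}")          # DMD effect scales roughly as Re^(-a)
```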
NASA Technical Reports Server (NTRS)
Westra, Doug G.; West, Jeffrey S.; Richardson, Brian R.
2015-01-01
Historically, the analysis and design of liquid rocket engines (LREs) has relied on full-scale testing and one-dimensional empirical tools. The testing is extremely expensive and the one-dimensional tools are not designed to capture the highly complex, multi-dimensional features that are inherent to LREs. Recent advances in computational fluid dynamics (CFD) tools have made it possible to predict liquid rocket engine performance and stability, to assess the effect of complex flow features, and to evaluate injector-driven thermal environments, mitigating the cost of testing. Extensive efforts to verify and validate these CFD tools have been conducted, to provide confidence for using them during the design cycle. Previous validation efforts have documented comparisons of predicted heat flux thermal environments with test data for a single element gaseous oxygen (GO2) and gaseous hydrogen (GH2) injector. The most notable validation effort was a comprehensive study conducted by Tucker et al. [1], in which a number of different groups modeled a GO2/GH2 single element configuration by Pal et al. [2]. The tools used for this validation comparison employed a range of algorithms, from both steady and unsteady Reynolds Averaged Navier-Stokes (U/RANS) calculations, large-eddy simulations (LES), detached eddy simulations (DES), and various combinations. A more recent effort by Thakur et al. [3] focused on using a state-of-the-art CFD simulation tool, Loci/STREAM, on a two-dimensional grid. Loci/STREAM was chosen because it has a unique, very efficient flamelet parameterization of combustion reactions that are too computationally expensive to simulate with conventional finite-rate chemistry calculations. The current effort focuses on further advancement of validation efforts, again using the Loci/STREAM tool with the flamelet parameterization, but this time with a three-dimensional grid. Comparisons to the Pal et al. heat flux data will be made for both RANS and Hybrid RANS-LES/Detached Eddy Simulations (DES). Computational costs will be reported, along with a comparison of accuracy and cost against much less expensive two-dimensional RANS simulations of the same geometry.
Modelling thermal radiation in buoyant turbulent diffusion flames
NASA Astrophysics Data System (ADS)
Consalvi, J. L.; Demarco, R.; Fuentes, A.
2012-10-01
This work focuses on the numerical modelling of radiative heat transfer in laboratory-scale buoyant turbulent diffusion flames. Spectral gas and soot radiation is modelled by using the Full-Spectrum Correlated-k (FSCK) method. Turbulence-Radiation Interactions (TRI) are taken into account by considering the Optically-Thin Fluctuation Approximation (OTFA), the resulting time-averaged Radiative Transfer Equation (RTE) being solved by the Finite Volume Method (FVM). Emission TRIs and the mean absorption coefficient are then closed by using a presumed probability density function (pdf) of the mixture fraction. The mean gas flow field is modelled by the Favre-averaged Navier-Stokes (FANS) equation set closed by a buoyancy-modified k-ɛ model with algebraic stress/flux models (ASM/AFM), the Steady Laminar Flamelet (SLF) model coupled with a presumed pdf approach to account for Turbulence-Chemistry Interactions, and an acetylene-based semi-empirical two-equation soot model. Two sets of experimental pool fire data are used for validation: propane pool fires 0.3 m in diameter with Heat Release Rates (HRR) of 15, 22 and 37 kW and methane pool fires 0.38 m in diameter with HRRs of 34 and 176 kW. Predicted flame structures, radiant fractions, and radiative heat fluxes on surrounding surfaces are found in satisfactory agreement with available experimental data across all the flames. In addition further computations indicate that, for the present flames, the gray approximation can be applied for soot with a minor influence on the results, resulting in a substantial gain in Computer Processing Unit (CPU) time when the FSCK is used to treat gas radiation.
Stochastic modeling of unsteady extinction in turbulent non-premixed combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lackmann, T.; Hewson, J. C.; Knaus, R. C.
Turbulent fluctuations of the scalar dissipation rate have a major impact on extinction in non-premixed combustion. Recently, an unsteady extinction criterion has been developed (Hewson, 2013) that predicts extinction dependent on the duration and the magnitude of dissipation rate fluctuations exceeding a critical quenching value; this quantity is referred to as the dissipation impulse. Furthermore, the magnitude of the dissipation impulse corresponding to unsteady extinction is related to the difficulty with which a flamelet is extinguished, based on the steady-state S-curve.
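A minimal sketch of the dissipation impulse idea is given below: integrate the excess of the scalar dissipation rate over its critical quenching value for the duration of an excursion and compare the result with an extinction threshold. The time series, the quenching value chi_q and the threshold are illustrative assumptions.

```python
# Sketch: a dissipation impulse computed as the time integral of
# max(chi - chi_q, 0) over a dissipation-rate excursion. All values are
# placeholders for illustration.
import numpy as np

def dissipation_impulse(t, chi, chi_q):
    """Trapezoidal integral of the dissipation rate excess above chi_q."""
    excess = np.maximum(chi - chi_q, 0.0)
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(t)))

t = np.linspace(0.0, 2.0e-3, 400)                           # s
chi = 40.0 + 60.0 * np.exp(-((t - 1.0e-3) / 2.0e-4) ** 2)   # 1/s, one strong fluctuation
chi_q = 70.0                                                # 1/s, critical quenching value (assumed)

impulse = dissipation_impulse(t, chi, chi_q)
print(impulse, impulse > 5.0e-3)   # compare against an assumed extinction threshold
```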
Inward propagating chemical waves in Taylor vortices.
Thompson, Barnaby W; Novak, Jan; Wilson, Mark C T; Britton, Melanie M; Taylor, Annette F
2010-04-01
Advection-reaction-diffusion (ARD) waves in the Belousov-Zhabotinsky reaction in steady Taylor-Couette vortices have been visualized using magnetic-resonance imaging and simulated using an adapted Oregonator model. We show how propagating wave behavior depends on the ratio of advective, chemical and diffusive time scales. In simulations, inward propagating spiral flamelets are observed at high Damköhler number (Da). At low Da, the reaction distributes itself over several vortices and then propagates inwards as contracting ring pulses--also observed experimentally.
Validation of an LES Model for Soot Evolution against DNS Data in Turbulent Jet Flames
NASA Astrophysics Data System (ADS)
Mueller, Michael
2012-11-01
An integrated modeling approach for soot evolution in turbulent reacting flows is validated against three-dimensional Direct Numerical Simulation (DNS) data in a set of n-heptane nonpremixed temporal jet flames. As in the DNS study, the evolution of the soot population is described statistically with the Hybrid Method of Moments (HMOM). The oxidation of the fuel and formation of soot precursors are described with the Radiation Flamelet/Progress Variable (RFPV) model that includes an additional transport equation for Polycyclic Aromatic Hydrocarbons (PAH) to account for the slow chemistry governing these species. In addition, the small-scale interactions between soot, chemistry, and turbulence are described with a presumed subfilter PDF approach that accounts for the very large spatial intermittency characterizing soot in turbulent reacting flows. The DNS dataset includes flames at three different Damköhler numbers to study the influence of global mixing rates on the evolution of PAH and soot. In this work, the ability of the model to capture these trends quantitatively as Damköhler number varies is investigated. In order to reliably assess the LES approach, the LES is initialized from the filtered DNS data after an initial transitional period in an effort to minimize the hydrodynamic differences between the DNS and the LES.
Numerical investigation of a helicopter combustion chamber using LES and tabulated chemistry
NASA Astrophysics Data System (ADS)
Auzillon, Pierre; Riber, Eléonore; Gicquel, Laurent Y. M.; Gicquel, Olivier; Darabiha, Nasser; Veynante, Denis; Fiorina, Benoît
2013-01-01
This article presents Large Eddy Simulations (LES) of a realistic aeronautical combustor device: the chamber CTA1 designed by TURBOMECA. Under nominal operating conditions, experiments show hot spots on the combustor walls, in the vicinity of the injectors. These high temperature regions disappear when modifying the fuel stream equivalence ratio. In order to account for detailed chemistry effects within LES, the numerical simulation uses the recently developed turbulent combustion model F-TACLES (Filtered TAbulated Chemistry for LES). The principle of this model is first to generate a lookup table where thermochemical variables are computed from a set of filtered laminar unstrained premixed flamelets. To model the interactions between the flame and the turbulence at the subgrid scale, a flame wrinkling analytical model is introduced and the Filtered Density Function (FDF) of the mixture fraction is modeled by a β function. Filtered thermochemical quantities are stored as a function of three coordinates: the filtered progress variable, the filtered mixture fraction and the mixture fraction subgrid scale variance. The chemical lookup table is then coupled with the LES using a mathematical formalism that ensures an accurate prediction of the flame dynamics. The numerical simulation of the CTA1 chamber with the F-TACLES turbulent combustion model reproduces the temperature fields observed in the experiments fairly well. In particular, the influence of the fuel stream equivalence ratio on the flame position is well captured.
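The presumed-PDF step described above (a β-function FDF of the mixture fraction) can be sketched generically as follows; the flamelet profile and moment values are placeholders, and this is not the F-TACLES table generator itself.

import numpy as np
from scipy.stats import beta as beta_dist

def filtered_value(phi, z, z_mean, z_var):
    # Map the filtered mixture fraction and its subgrid variance to beta
    # shape parameters, then average phi(z) against the presumed PDF.
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0   # requires 0 < z_var < z_mean*(1 - z_mean)
    a, b = z_mean * gamma, (1.0 - z_mean) * gamma
    pdf = beta_dist.pdf(z, a, b)
    return np.trapz(phi * pdf, z) / np.trapz(pdf, z)

z = np.linspace(1.0e-6, 1.0 - 1.0e-6, 2001)
phi = np.exp(-((z - 0.3) / 0.08) ** 2)   # stand-in flamelet profile (e.g. a normalised temperature)

print(filtered_value(phi, z, z_mean=0.3, z_var=0.002))   # weak subgrid mixing: close to phi(0.3)
print(filtered_value(phi, z, z_mean=0.3, z_var=0.020))   # stronger mixing: noticeably smeared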
Stratified turbulent Bunsen flames: flame surface analysis and flame surface density modelling
NASA Astrophysics Data System (ADS)
Ramaekers, W. J. S.; van Oijen, J. A.; de Goey, L. P. H.
2012-12-01
In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold (FGM) reduction method for reaction kinetics. Before examining the suitability of the FSD model, flame surfaces are characterized in terms of thickness, curvature and stratification. All flames are in the Thin Reaction Zones regime, and the maximum equivalence ratio range covers 0.1⩽φ⩽1.3. For all flames, local flame thicknesses correspond very well to those observed in stretchless, steady premixed flamelets. Extracted curvature radii and mixing length scales are significantly larger than the flame thickness, implying that the stratified flames all burn in a premixed mode. The remaining challenge is accounting for the large variation in (subfilter) mass burning rate. In this contribution, the FSD model is proven to be applicable for Large Eddy Simulations (LES) of stratified flames for the equivalence ratio range 0.1⩽φ⩽1.3. Subfilter mass burning rate variations are taken into account by a subfilter Probability Density Function (PDF) for the mixture fraction, on which the mass burning rate directly depends. A priori analyses point out that for small stratifications (0.4⩽φ⩽1.0), the replacement of the subfilter PDF (obtained from DNS data) by the corresponding Dirac function is appropriate. Integration of the Dirac function with the mass burning rate m = m(φ) can then adequately model the filtered mass burning rate obtained from filtered DNS data. For a larger stratification (0.1⩽φ⩽1.3) and filter widths up to ten flame thicknesses, a β-function for the subfilter PDF yields substantially better predictions than a Dirac function. Finally, inclusion of a simple algebraic model for the FSD resulted only in small additional deviations from DNS data, thereby rendering this approach promising for application in LES.
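The modelling choice assessed in that a priori analysis can be stated compactly (generic notation, not copied from the paper): writing P_sgs for the subfilter mixture-fraction PDF, the filtered mass burning rate is

\[
\overline{m} \;=\; \int_0^1 m(Z)\,P_{\mathrm{sgs}}(Z)\,dZ, \qquad
P_{\mathrm{sgs}}(Z) \;\approx\;
\begin{cases}
\delta(Z-\widetilde{Z}) & \text{(adequate for the weaker stratification)},\\
\beta(Z;a,b) & \text{(needed for the stronger stratification and large filter widths)},
\end{cases}
\]

where m(Z) follows from the flamelet relation m = m(φ) through the mixture fraction-equivalence ratio mapping, and the β shape parameters a and b are set by the filtered mixture fraction and its subfilter variance.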
LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure
NASA Astrophysics Data System (ADS)
Wang, Qing; Wu, Hao; Ihme, Matthias
2015-11-01
The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. By extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method, the objective of this work is to develop a source-term closure for turbulent multi-stream combustion. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed PDF-modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed PDF-methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The author would like to acknowledge the support of funding from the Stanford Graduate Fellowship.
NASA Astrophysics Data System (ADS)
Iqbal, S.; Benim, A. C.; Fischer, S.; Joos, F.; Kluβ, D.; Wiedermann, A.
2016-10-01
Turbulent reacting flows in a generic swirl gas turbine combustor model are investigated both numerically and experimentally. In the investigation, an emphasis is placed upon external flue gas recirculation, a promising technology for increasing the efficiency of the carbon capture and storage process that, however, can change the combustion behaviour significantly. A further emphasis is placed upon the investigation of alternative fuels such as biogas and syngas in comparison to conventional natural gas. The flames are also investigated numerically using the open source CFD software OpenFOAM. In the numerical simulations, a laminar flamelet model based on mixture fraction and reaction progress variable is adopted. The SST model is used as the turbulence model within a URANS framework. Computational results are compared with the experimental data, and a fair agreement is observed.
LES of Swirling Reacting Flows via the Unstructured scalar-FDF Solver
NASA Astrophysics Data System (ADS)
Ansari, Naseem; Pisciuneri, Patrick; Strakey, Peter; Givi, Peyman
2011-11-01
Swirling flames pose a significant challenge for computational modeling due to the presence of recirculation regions and vortex shedding. In this work, results are presented of LES of two swirl stabilized non-premixed flames (SM1 and SM2) via the FDF methodology. These flames are part of the database for validation of turbulent-combustion models. The scalar-FDF is simulated on a domain discretized by unstructured meshes, and is coupled with a finite volume flow solver. In the SM1 flame (with a low swirl number) chemistry is described by the flamelet model based on the full GRI 2.11 mechanism. The SM2 flame (with a high swirl number) is simulated via a 46-step 17-species mechanism. The simulated results are assessed via comparison with experimental data.
An Experimental Investigation of the Laminar Flamelet Concept for Soot Properties
NASA Technical Reports Server (NTRS)
Diez, F. J.; Aalburg, C.; Sunderland, P. B.; Urban, D. L.; Yuan, Z.-G.; Faeth, G. M.
2007-01-01
The soot properties of round, nonbuoyant, laminar jet diffusion flames are described, based on experiments in microgravity carried out on orbit during three flights of the Space Shuttle Columbia (Flights STS-83, 94 and 107). Experimental conditions included ethylene- and propane-fueled flames burning in still air at an ambient temperature of 300 K and ambient pressures of 35-100 kPa. Measurements included soot volume fraction distributions using deconvoluted laser extinction imaging, and soot temperature distributions using deconvoluted multiline emission imaging. Flowfield modeling based on the work of Spalding is presented. The present work explores whether soot properties of these flames are universal functions of mixture fraction, i.e., whether they satisfy soot state relationships. Measurements are presented, including radiative emissions and distributions of soot temperature and soot volume fraction. It is shown that most of the volume of these flames is bounded by the dividing streamline and thus should follow residence time state relationships. Most streamlines from the fuel supply to the surroundings are found to exhibit nearly the same maximum soot volume fraction and temperature. The radiation intensity along internal streamlines is also found to have relatively uniform values. Finally, soot state relationships were observed, i.e., soot volume fraction was found to correlate with estimated mixture fraction for each fuel/pressure selection. These results support the existence of soot property state relationships for steady nonbuoyant laminar diffusion flames, and thus in a large class of practical turbulent diffusion flames through the application of the laminar flamelet concept.
The structure of partially-premixed methane/air flames under varying premixing
NASA Astrophysics Data System (ADS)
Kluzek, Celine; Karpetis, Adonios
2008-11-01
The present work examines the spatial and scalar structure of laminar, partially premixed methane/air flames with the objective of developing flamelet mappings that capture the effect of varying premixture strength (air addition to the fuel). Experimental databases containing full thermochemistry measurements within laminar axisymmetric flames were obtained at Sandia National Laboratories, and the measurements of all major species and temperature are compared to opposed-jet one-dimensional flow simulations using Cantera and the full chemical kinetic mechanism of GRI 3.0. Particular emphasis is placed on the scalar structure of the laminar flames, and the formation of flamelet mappings that capture all of the salient features of thermochemistry in a conserved scalar representation. Three different premixture strengths were examined in detail: equivalence ratios of 1.8, 2.2, and 3.17 resulted in clear differences in the flame scalar structure, particularly in the position of the rich premixed flame zone and the attendant levels of major and intermediate species (carbon monoxide and hydrogen).
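A minimal Cantera sketch of the opposed-jet configuration described above is given below, assuming a recent Cantera installation (the bundled gri30.yaml file is the GRI 3.0 mechanism); the nozzle separation, inlet mass fluxes, temperatures, and the rich premixture composition are illustrative choices, not the Sandia operating conditions.

import cantera as ct

gas = ct.Solution("gri30.yaml")                          # GRI 3.0 mechanism shipped with Cantera

flame = ct.CounterflowDiffusionFlame(gas, width=0.02)    # 2 cm nozzle separation (assumed)
flame.fuel_inlet.X = "CH4:2.2, O2:2.0, N2:7.52"          # rich methane/air premixture, phi ~ 2.2
flame.fuel_inlet.T = 300.0
flame.fuel_inlet.mdot = 0.3                              # kg/m^2/s (assumed)
flame.oxidizer_inlet.X = "O2:0.21, N2:0.79"              # air on the opposing side
flame.oxidizer_inlet.T = 300.0
flame.oxidizer_inlet.mdot = 0.3                          # kg/m^2/s (assumed)

flame.solve(loglevel=0, auto=True)
print("peak temperature [K]:", float(flame.T.max()))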
Real gas CFD simulations of hydrogen/oxygen supercritical combustion
NASA Astrophysics Data System (ADS)
Pohl, S.; Jarczyk, M.; Pfitzner, M.; Rogg, B.
2013-03-01
A comprehensive numerical framework has been established to simulate reacting flows under conditions typically encountered in rocket combustion chambers. The model implemented into the commercial CFD Code ANSYS CFX includes appropriate real gas relations based on the volume-corrected Peng-Robinson (PR) equation of state (EOS) for the flow field and a real gas extension of the laminar flamelet combustion model. The results indicate that the real gas relations have a considerably larger impact on the flow field than on the detailed flame structure. Generally, a realistic flame shape could be achieved for the real gas approach compared to experimental data from the Mascotte test rig V03 operated at ONERA when the differential diffusion processes were only considered within the flame zone.
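For reference, the standard Peng-Robinson pressure relation underlying the real-gas treatment mentioned above can be written down directly; the sketch below uses the textbook PR form (the volume correction applied in the paper is not reproduced) with tabulated critical constants for oxygen, and the evaluated state is arbitrary.

import numpy as np

R = 8.314462618   # universal gas constant, J/(mol K)

def pr_pressure(T, v, Tc, Pc, omega):
    # Peng-Robinson EOS: P = R*T/(v - b) - a*alpha(T)/(v^2 + 2*b*v - b^2)
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    return R * T / (v - b) - a * alpha / (v**2 + 2.0 * b * v - b**2)

# Oxygen (Tc = 154.58 K, Pc = 5.043 MPa, omega = 0.0222) at an arbitrary molar volume.
print("P [Pa]:", pr_pressure(T=160.0, v=2.0e-4, Tc=154.58, Pc=5.043e6, omega=0.0222))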
An Investigation of a Hybrid Mixing Timescale Model for PDF Simulations of Turbulent Premixed Flames
NASA Astrophysics Data System (ADS)
Zhou, Hua; Kuron, Mike; Ren, Zhuyin; Lu, Tianfeng; Chen, Jacqueline H.
2016-11-01
The transported probability density function (TPDF) method offers generality across all combustion regimes, which is attractive for turbulent combustion simulations. However, modeling the micromixing due to molecular diffusion is still considered a primary challenge for the TPDF method, especially in turbulent premixed flames. Recently, a hybrid mixing rate model for TPDF simulations of turbulent premixed flames has been proposed, which recovers the correct mixing rates in the limits of the flamelet regime and the broken reaction zone regime while aiming to properly account for the transition in between. In this work, this model is employed in TPDF simulations of turbulent premixed methane-air slot burner flames. The model performance is assessed by comparison with both direct numerical simulation (DNS) results and the conventional constant mechanical-to-scalar mixing rate model. This work is supported by NSFC grants 51476087 and 91441202.
A dynamic subgrid-scale model for LES of the G-equation
NASA Technical Reports Server (NTRS)
Bourlioux, A.; Im, H. G.; Ferziger, J. H.
1996-01-01
Turbulent combustion is a difficult subject as it must deal with all of the issues found in both turbulence and combustion. (We consider only premixed flames in this paper, but some of the ideas can be applied to the non-premixed case.) As in many other fields, there are two limiting cases that are easier to deal with than the general case. These are the situations in which the chemical time scale is either much shorter or much longer than the time scale associated with the turbulence. We deal with the former case. In this limit, the flame is thin compared to the turbulence length scales and can be idealized as an infinitely thin sheet. This is commonly called the flamelet regime; it has been the subject of many papers and the basis for many models (see, e.g., Linan & Williams 1993). In the flamelet model, the local flame structure is assumed to be identical to the laminar flame structure; thus the flame propagates normal to itself at the laminar flame speed, S_L. This allows the use of simple approximations. For example, one expects the rate of consumption of fuel to be proportional to the area of the flame surface. This idea allowed Damkohler (1940) to propose that the wrinkled flame could be replaced by a smooth one which travels at the turbulent flame speed, S_T, defined by S_T/S_L = A_L/A_P, where A_L is the total flame surface area and A_P is the area projected onto the mean direction of propagation. This relation can be expected to be valid when the flame structure is modified only slightly by the turbulence. More recent approaches have attempted to relate the turbulent flame speed to the turbulence intensity, u', which presumably characterizes the wrinkling of the flame.
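To make Damkohler's relation concrete, the area ratio for a single-valued wrinkled front y = f(x) can be evaluated numerically as the arc length of f divided by the projected length; the sinusoidal front below is an arbitrary example, not data from the paper.

import numpy as np

L = 1.0                                    # projected length A_P (per unit depth)
x = np.linspace(0.0, L, 20001)
amplitude, wavenumber = 0.1, 6.0 * np.pi   # wrinkling parameters (assumed)
dfdx = amplitude * wavenumber * np.cos(wavenumber * x)

A_L = np.trapz(np.sqrt(1.0 + dfdx**2), x)  # flame surface area (arc length per unit depth)
print("S_T/S_L  ~  A_L/A_P =", A_L / L)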
Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames
NASA Astrophysics Data System (ADS)
Heye, Colin; Raman, Venkat
2012-11-01
A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations provide the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. The recent availability of detailed experimental measurements provides model validation data for a wide range of evaporation rates and combustion regimes, as is well known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rates on the subfilter scalar PDF shape in comparison with conventional flamelet models. In addition, existing model term closures in the PDF transport equations are evaluated with a focus on their validity in the presence of regime changes.
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Quinlan, Jesse R.; Pisciuneri, Patrick H.; Yilmaz, S. Levent
2012-01-01
Significant progress has been made in the development of subgrid scale (SGS) closures based on a filtered density function (FDF) for large eddy simulations (LES) of turbulent reacting flows. The FDF is the counterpart of the probability density function (PDF) method, which has proven effective in Reynolds averaged simulations (RAS). However, while systematic progress is being made advancing the FDF models for relatively simple flows and lab-scale flames, the application of these methods in complex geometries and high speed, wall-bounded flows with shocks remains a challenge. The key difficulties are the significant computational cost associated with solving the FDF transport equation and numerically stiff finite rate chemistry. For LES/FDF methods to make a more significant impact in practical applications a pragmatic approach must be taken that significantly reduces the computational cost while maintaining high modeling fidelity. An example of one such ongoing effort is at the NASA Langley Research Center, where the first generation FDF models, namely the scalar filtered mass density function (SFMDF) are being implemented into VULCAN, a production-quality RAS and LES solver widely used for design of high speed propulsion flowpaths. This effort leverages internal and external collaborations to reduce the overall computational cost of high fidelity simulations in VULCAN by: implementing high order methods that allow reduction in the total number of computational cells without loss in accuracy; implementing first generation of high fidelity scalar PDF/FDF models applicable to high-speed compressible flows; coupling RAS/PDF and LES/FDF into a hybrid framework to efficiently and accurately model the effects of combustion in the vicinity of the walls; developing efficient Lagrangian particle tracking algorithms to support robust solutions of the FDF equations for high speed flows; and utilizing finite rate chemistry parametrization, such as flamelet models, to reduce the number of transported reactive species and remove numerical stiffness. This paper briefly introduces the SFMDF model (highlighting key benefits and challenges), and discusses particle tracking for flows with shocks, the hybrid coupled RAS/PDF and LES/FDF model, flamelet generated manifolds (FGM) model, and the Irregularly Portioned Lagrangian Monte Carlo Finite Difference (IPLMCFD) methodology for scalable simulation of high-speed reacting compressible flows.
Approximate Deconvolution and Explicit Filtering For LES of a Premixed Turbulent Jet Flame
2014-09-19
from laminar flamelets computed with the GRI mechanism for methane-air combustion (Smith et al. 1999), and the progress variable Yc is defined as in... gri-mech/. Subramanian, V., P. Domingo, and L. Vervisch (2010). Large-Eddy Simulation of forced ignition of an annular bluff-body burner. Combust
DNS and modeling of the interaction between turbulent premixed flames and walls
NASA Technical Reports Server (NTRS)
Poinsot, T. J.; Haworth, D. C.
1992-01-01
The interaction between turbulent premixed flames and walls is studied using a two-dimensional full Navier-Stokes solver with simple chemistry. The effects of wall distance on the local and global flame structure are investigated. Quenching distances and maximum wall heat fluxes during quenching are computed in laminar cases and are found to be comparable to experimental and analytical results. For turbulent cases, it is shown that quenching distances and maximum heat fluxes remain of the same order as for laminar flames. Based on simulation results, a 'law-of-the-wall' model is derived to describe the interaction between a turbulent premixed flame and a wall. This model is constructed to provide reasonable behavior of flame surface density near a wall under the assumption that flame-wall interaction takes place at scales smaller than the computational mesh. It can be implemented in conjunction with any of several recent flamelet models based on a modeled surface density equation, with no additional constraints on mesh size or time step.
A Novel Strategy for Numerical Simulation of High-speed Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Sheikhi, M. R. H.; Drozda, T. G.; Givi, P.
2003-01-01
The objective of this research is to improve and implement the filtered mass density function (FDF) methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed Year 1 of this research. This is the Final Report on our activities during the period January 1, 2003 to December 31, 2003. In the efforts during the past year, LES was conducted of the Sandia Flame D, which is a turbulent piloted nonpremixed methane jet flame. The subgrid scale (SGS) closure is based on the scalar filtered mass density function (SFMDF) methodology. The SFMDF is basically the mass-weighted probability density function (PDF) of the SGS scalar quantities. For this flame (which exhibits little local extinction), a simple flamelet model is used to relate the instantaneous composition to the mixture fraction. The modelled SFMDF transport equation is solved by a hybrid finite-difference/Monte Carlo scheme.
2006-12-01
[27], [28] on soot nucleation, and [29] on soot formation in diesel engines. [27] discussed the unresolved problems in SOx, NOx, and soot... used the LEM approach to study aerosol dynamics in engine exhaust plumes. Recently, [41] used a detailed NOx mechanism combined with MOM to predict the... combustion engines. For instance, the laminar flamelet approach used by [43, 44, 45] allows the use of a detailed chemical mechanism but is not
The Interaction of High-Speed Turbulence with Flames: Turbulent Flame Speed
2010-08-05
A. Y. Poludnenko, Memorandum Report (05-08-2010). Keywords: turbulent premixed combustion; turbulence; flamelet; turbulent flame speed. Contents fragments: 3.4 Stretch factor and the balance between S_T and A_T; 4. Flame surface
Physical and Chemical Processes in Turbulent Flames
2015-06-23
positive aerodynamic stretch, into a multitude of wrinkled flamelets possessing either positive or negative stretch, such that the intensified... flame surface, such as the flame surface area ratio, builds up this global measure. The turbulent flame surface is typically highly wrinkled and folded... consider a filtered/average location of the flame positions to represent a smooth surface. The information contained in the wrinkled surface if
Evaluation of a strain-sensitive transport model in LES of turbulent nonpremixed sooting flames
NASA Astrophysics Data System (ADS)
Lew, Jeffry K.; Yang, Suo; Mueller, Michael E.
2017-11-01
Direct Numerical Simulations (DNS) of turbulent nonpremixed jet flames have revealed that Polycyclic Aromatic Hydrocarbons (PAH) are confined to spatially intermittent regions of low scalar dissipation rate due to their slow formation chemistry. The length scales of these regions are on the order of the Kolmogorov scale or smaller, where molecular diffusion effects dominate over turbulent transport effects irrespective of the large-scale turbulent Reynolds number. A strain-sensitive transport model has been developed to identify such species whose slow chemistry, relative to local mixing rates, confines them to these small length scales. In a conventional nonpremixed ``flamelet'' approach, these species are then modeled with their molecular Lewis numbers, while remaining species are modeled with an effective unity Lewis number. A priori analysis indicates that this strain-sensitive transport model significantly affects PAH yield in nonpremixed flames with essentially no impact on temperature and major species. The model is applied with Large Eddy Simulation (LES) to a series of turbulent nonpremixed sooting jet flames and validated via comparisons with experimental measurements of soot volume fraction.
NASA Technical Reports Server (NTRS)
Chen, J. H.; Mahalingam, S.; Puri, I. K.; Vervisch, L.
1992-01-01
The interaction between a quasi-laminar flame and a turbulent flowfield is investigated through direct numerical simulations (DNS) of reacting flow in two- and three-dimensional domains. Effects due to finite-rate chemistry are studied using a single step global reaction A (fuel) + B (oxidizer) yields P (product), and by varying a global Damkoehler number, as a result of which the turbulence-chemistry interaction in the flame is found to generate a wide variety of conditions, ranging from near-equilibrium to near-extinction. Differential diffusion effects are studied by changing the Schmidt number of one reactive species to one-half. It is observed that laminar flamelet response is followed within the turbulent flowfield, except in regions where transient effects seem to dominate.
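A rough sketch of what a single-step global reaction with a global Damkohler number looks like in practice is given below; the rate constants, local state values, and the flow time scale are placeholders, not the values used in the DNS.

import numpy as np

A_pre, T_act = 2.0e9, 15000.0   # pre-exponential factor and activation temperature (assumed)

def reaction_rate(rho, Y_A, Y_B, T):
    # One-step global rate for A (fuel) + B (oxidizer) -> P (product).
    return A_pre * rho**2 * Y_A * Y_B * np.exp(-T_act / T)

# A global Damkohler number compares a flow (mixing) time to a chemical time.
tau_flow = 1.0e-3                                        # eddy turnover time, s (assumed)
rho, Y_A, Y_B, T = 1.0, 0.05, 0.20, 1500.0               # local state (assumed)
tau_chem = rho * Y_A / reaction_rate(rho, Y_A, Y_B, T)   # fuel consumption time scale
print("Da =", tau_flow / tau_chem)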
2009-06-30
the flamelet solution is indicated in Figure 2. The increase of strain rate enhances the heat and species transport close to the flame front, which... ...of design attributes (e.g., injection port size and location, center post recess distance, etc.) and operating conditions (e.g., chamber pressure
2016-03-24
thickened preheat (TP) regime that is bounded by the Klimov-Williams limit, (b) the broken reaction layers (BR) boundary that is bounded by the limit predicted by Norbert Peters, and the partially-distributed reactions (PDR)... Nomenclature: BR = broken reaction layer boundary; DR = distributed reaction zone boundary; Ka = Karlovitz number of Peters (Eq. 1), equal to (δF,L
Verification and Improvement of Flamelet Approach for Non-Premixed Flames
NASA Technical Reports Server (NTRS)
Zaitsev, S.; Buriko, Yu.; Guskov, O.; Kopchenov, V.; Lubimov, D.; Tshepin, S.; Volkov, D.
1997-01-01
Studies in the mathematical modeling of high-speed turbulent combustion have received renewed attention in recent years. A review of fundamentals and approaches, with an extensive bibliography, was presented by Bray, Libby and Williams. In order to obtain accurate predictions for turbulent combustible flows, the effects of turbulent fluctuations on the chemical source terms should be taken into account. Averaging the chemical source terms requires a probability density function (PDF) model. Two main approaches are currently dominant in high-speed combustion modeling. In the first approach, the PDF form is assumed based on the intuition of the modellers (see, for example, Spiegler et al.; Girimaji; Baurle et al.). The second approach is much more elaborate and is based on the solution of an evolution equation for the PDF. This approach was proposed by S. Pope for incompressible flames. Recently, it was modified for the modeling of compressible flames in studies of Farschi; Hsu; Hsu, Raji, Norris; Eifer, Kollman. However, its realization in CFD is extremely expensive computationally due to the large dimensionality of the PDF evolution equation (Baurle, Hsu, Hassan).
Importance of turbulence-chemistry interactions at low temperature engine conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kundu, Prithwish; Ameen, Muhsin M.; Som, Sibendu
2017-06-08
The role of turbulence-chemistry interaction in autoignition and flame stabilization is investigated for spray flames at low temperature combustion (LTC) conditions by performing high-fidelity three-dimensional computational fluid dynamics (CFD) simulations. A recently developed Tabulated Flamelet Model (TFM) is coupled with a large eddy simulation (LES) framework and validated across a range of Engine Combustion Network (ECN) ambient temperature conditions for n-dodecane fuel. High resolution grids with 0.0625 mm minimum cell size and 25 million total cell count are implemented using adaptive mesh refinement over the spray and combustion regions. Simulations with these grids and multiple LES realizations, with a 103-species n-dodecane mechanism, show good agreement with experimental data for all the ambient conditions investigated. This modeling approach with the computational cost advantage of tabulated chemistry is then extended towards understanding the auto-ignition and flame stabilization at an ambient temperature of 750 K. These low temperature conditions lead to substantially higher ignition delays and flame liftoff lengths, and significantly leaner combustion compared to conventional high temperature diesel combustion. These conditions also require the simulations to span significantly larger temporal and spatial dimensions thereby increasing the computational cost. The TFM approach is able to capture autoignition and flame liftoff length at the low temperature conditions. Significant differences with respect to mixing, species formation and flame stabilization are observed under low temperature compared to conventional diesel combustion. At higher ambient temperatures, formation of formaldehyde is observed in the rich region (phi > 1) followed by the formation of OH in the stoichiometric regions. Under low temperature conditions, formaldehyde is observed to form at leaner regions followed by the onset of OH formation in significantly lean regions of the flame. Qualitative differences between species formation and transient flame development for the high and low temperature conditions are presented. The two stage ignition process is further investigated by studying the species formation in mixture fraction space by solving 1D flamelet equations for different scalar dissipation rates and homogeneous reactor assumption. Results show that scalar dissipation causes these radicals to diffuse within the mixture fraction space. As a result, this significantly enhances ignition and plays a dominant role at such low temperature conditions which cannot be captured by the homogeneous reaction assumption based model.
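As a schematic of what the tabulated-chemistry lookup amounts to in practice (a generic sketch, not the TFM implementation used in the study), a pre-computed source-term table over mixture fraction and progress variable can be interpolated at cell values; the table below is synthetic rather than generated from flamelet solutions of a detailed mechanism.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

Z = np.linspace(0.0, 1.0, 101)   # mixture fraction grid
C = np.linspace(0.0, 1.0, 51)    # normalised progress variable grid
ZZ, CC = np.meshgrid(Z, C, indexing="ij")

# Synthetic progress-variable source: peaks near stoichiometry and mid progress.
Z_st = 0.045                     # n-dodecane-like stoichiometric mixture fraction (approximate)
omega_C = np.exp(-((ZZ - Z_st) / 0.03) ** 2) * CC * (1.0 - CC)

table = RegularGridInterpolator((Z, C), omega_C, bounds_error=False, fill_value=0.0)

# Lookup for a few notional CFD cells given their (Z, C) values.
cells = np.array([[0.03, 0.2], [0.05, 0.6], [0.20, 0.4]])
print(table(cells))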
NASA Astrophysics Data System (ADS)
Nikolaevich Lipatnikov, Andrei; Nishiki, Shinnosuke; Hasegawa, Tatsuya
2015-05-01
The linear relation between the mean rate of product creation and the mean scalar dissipation rate, derived in the seminal paper by K.N.C. Bray ['The interaction between turbulence and combustion', Proceedings of the Combustion Institute, Vol. 17 (1979), pp. 223-233], is the cornerstone for models of premixed turbulent combustion that deal with the dissipation rate in order to close the reaction rate. In the present work, this linear relation is straightforwardly validated by analysing data computed earlier in the 3D Direct Numerical Simulation (DNS) of three statistically stationary, 1D, planar turbulent flames associated with the flamelet regime of premixed combustion. Although the linear relation does not hold at the leading and trailing edges of the mean flame brush, such a result is expected within the framework of Bray's theory. However, the present DNS yields substantially larger (smaller) values of an input parameter cm (or K2 = 1/(2cm - 1)), involved by the studied linear relation, when compared to the commonly used value of cm = 0.7 (or K2 = 2.5). To gain further insight into the issue and into the eventual dependence of cm on mixture composition, the DNS data are combined with the results of numerical simulations of stationary, 1D, planar laminar methane-air flames with complex chemistry, with the results being reported in terms of differently defined combustion progress variables c, i.e. the normalised temperature, density, or mole fraction of CH4, O2, CO2 or H2O. Such a study indicates the dependence of cm both on the definition of c and on the equivalence ratio. Nevertheless, K2 and cm can be estimated by processing the results of simulations of counterpart laminar premixed flames. Similar conclusions were also drawn by skipping the DNS data, but invoking a presumed beta probability density function in order to evaluate cm for the differently defined c's and various equivalence ratios.
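For reference, the quantities quoted in the abstract are related as follows (notation as in the abstract; the proportionality constant in Bray's closure is omitted here):

\[
\overline{\dot{\omega}} \;\propto\; \frac{\overline{\rho}\,\widetilde{\chi}_c}{2 c_m - 1},
\qquad K_2 = \frac{1}{2 c_m - 1},
\qquad c_m = 0.7 \;\Rightarrow\; K_2 = \frac{1}{2(0.7) - 1} = 2.5 ,
\]

so the larger values of c_m (equivalently, smaller K_2) found in the DNS correspond directly to a weaker proportionality between the mean reaction rate and the mean scalar dissipation rate.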
Assessing Model Assumptions for Turbulent Premixed Combustion at High Karlovitz Number
2015-09-03
number flamelet solutions are also shown (dashed line). $\dot{\omega}_{\mathrm{C_7H_{16}}}(T) \approx \dot{\omega}_{\mathrm{C_7H_{16}}}(T_{\mathrm{peak}})\,\dot{\omega}_{\mathrm{C_7H_{16},lam}}(T)/\dot{\omega}_{\mathrm{C_7H_{16},lam}}(T_{\mathrm{peak}})$ (Eq. 35). Therefore, only the... Eq. 35 in Eq. 39, the turbulent flame speed can be approximated as $S_T \approx S_L^0\,(A_T/A)\,\langle \dot{\omega}_{\mathrm{C_7H_{16}}}/\dot{\omega}_{\mathrm{C_7H_{16},lam}} \rangle_{T_{\mathrm{peak}}}$ (Eq. 40), where $\langle \dot{\omega}_{\mathrm{C_7H_{16}}}/\dot{\omega}_{\mathrm{C_7H_{16},lam}} \rangle_{T_{\mathrm{peak}}}$... drawing to illustrate Eq. 40. $S_F^{\mathrm{eff}} = \langle \dot{\omega}_{\mathrm{C_7H_{16}}}/\dot{\omega}_{\mathrm{C_7H_{16},lam}} \rangle_{T_{\mathrm{peak}}} \cdot S_L^0$ is used. Differential diffusion has a limited effect on the turbulent surface
Flow/Soot-Formation Interactions in Nonbuoyant Laminar Diffusion Flames
NASA Technical Reports Server (NTRS)
Dai, Z.; Lin, K.-C.; Sunderland, P. B.; Xu, F.; Faeth, G. M.
2002-01-01
This is the final report of a research program considering interactions between flow and soot properties within laminar diffusion flames. Laminar diffusion flames were considered because they provide model flame systems that are far more tractable for theoretical and experimental studies than more practical turbulent diffusion flames. In particular, understanding the transport and chemical reaction processes of laminar flames is a necessary precursor to understanding these processes in practical turbulent flames and many aspects of laminar diffusion flames have direct relevance to turbulent diffusion flames through application of the widely recognized laminar flamelet concept of turbulent diffusion flames. The investigation was divided into three phases, considering the shapes of nonbuoyant round laminar jet diffusion flames in still air, the shapes of nonbuoyant round laminar jet diffusion flames in coflowing air, and the hydrodynamic suppression of soot formation in laminar diffusion flames.
Near-limit flame structures at low Lewis number
NASA Technical Reports Server (NTRS)
Ronney, Paul D.
1990-01-01
The characteristics of premixed gas flames in mixtures with low Lewis numbers near flammability limits were studied experimentally using a low-gravity environment to reduce buoyant convection. The behavior of such flames was found to be dominated by diffusive-thermal instabilities. For sufficiently reactive mixtures, cellular structures resulting from these instabilities were observed and found to spawn new cells in regular patterns. For less reactive mixtures, cells formed shortly after ignition but did not spawn new cells; instead these cells evolved into a flame structure composed of stationary, apparently stable spherical flamelets. Experimental observations are found to be in qualitative agreement with elementary analytical models based on the interaction of heat release due to chemical reaction, differential diffusion of thermal energy and mass, flame front curvature, and volumetric heat losses due to gas and/or soot radiation.
NASA Technical Reports Server (NTRS)
Ronney, Paul D.
1989-01-01
The characteristics of premixed gas flames in mixtures with low Lewis numbers, free of natural convection effects, were investigated and found to be dominated by diffusive-thermal instabilities. For sufficiently reactive mixtures, cellular structures resulting from these instabilities were observed and found to spawn new cells in regular patterns. For less reactive mixtures, cells formed shortly after ignition but did not spawn new cells; instead these cells evolved into a flame structure composed of stationary, apparently stable spherical flamelets. As a result of these phenomena, well-defined flammability limits were not observed. The experimental results are found to be in qualitative agreement with a simple analytical model based on the interaction of heat release due to chemical reaction, differential diffusion of thermal energy and mass, flame front curvature, and heat losses due to gas radiation.
Distributed Low Temperature Combustion: Fundamental Understanding of Combustion Regime Transitions
2016-09-07
behaviour as compared to ethanol. The latter fuel has also been considered along with methane. Work has also been performed on the further assessment of... identification of various combustion gas states. A range of Damköhler numbers (Da) from the conventional propagating flamelet regime well into the distributed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hicks, E. P.; Rosner, R., E-mail: eph2001@columbia.edu
In this paper, we provide support for the Rayleigh-Taylor (RT)-based subgrid model used in full-star simulations of deflagrations in Type Ia supernovae explosions. We use the results of a parameter study of two-dimensional direct numerical simulations of an RT unstable model flame to distinguish between the two main types of subgrid models (RT or turbulence dominated) in the flamelet regime. First, we give scalings for the turbulent flame speed, the Reynolds number, the viscous scale, and the size of the burning region as the non-dimensional gravity (G) is varied. The flame speed is well predicted by an RT-based flame speed model. Next, the above scalings are used to calculate the Karlovitz number (Ka) and to discuss appropriate combustion regimes. No transition to thin reaction zones is seen at Ka = 1, although such a transition is expected by turbulence-dominated subgrid models. Finally, we confirm a basic physical premise of the RT subgrid model, namely, that the flame is fractal, and thus self-similar. By modeling the turbulent flame speed, we demonstrate that it is affected more by large-scale RT stretching than by small-scale turbulent wrinkling. In this way, the RT instability controls the flame directly from the large scales. Overall, these results support the RT subgrid model.
NASA Astrophysics Data System (ADS)
Chen, Peng; Guo, Shilong; Li, Yanchao; Zhang, Yutao
2017-03-01
In this paper, an experimental and numerical investigation of premixed methane/air flame dynamics in a closed combustion vessel with a thin obstacle is described. In the experiment, high-speed video photography and a pressure transducer are used to study the flame shape changes and pressure dynamics. In the numerical simulation, four sub-grid scale viscosity models and three sub-grid scale combustion models are evaluated for their individual prediction compared with the experimental data. High-speed photographs show that the flame propagation process can be divided into five stages: spherical flame, finger-shaped flame, jet flame, mushroom-shaped flame and bidirectional propagation flame. Compared with the other sub-grid scale viscosity models and sub-grid scale combustion models, the dynamic Smagorinsky-Lilly model and the power-law flame wrinkling model are better able to predict the flame behaviour, respectively. Thus, coupling the dynamic Smagorinsky-Lilly model and the power-law flame wrinkling model, the numerical results demonstrate that flame shape change is a purely hydrodynamic phenomenon, and the mushroom-shaped flame and bidirectional propagation flame are the result of flame-vortex interaction. In addition, the transition from "corrugated flamelets" to "thin reaction zones" is observed in the simulation.
CFD analysis of gas explosions vented through relief pipes.
Ferrara, G; Di Benedetto, A; Salzano, E; Russo, G
2006-09-21
Vent devices for gas and dust explosions are often ducted to safe locations by means of relief pipes. However, the presence of the duct increases the severity of the explosion compared to simply vented vessels (i.e. compared to cases where no duct is present). Moreover, the key phenomena controlling the violence of the explosion have not yet been identified. Multidimensional models coupling mass, momentum and energy conservation equations can be valuable tools for the analysis of such complex explosion phenomena. In this work, gas explosions vented through ducts have been modelled by a two-dimensional (2D) axi-symmetric computational fluid dynamic (CFD) model based on the unsteady Reynolds Averaged Navier Stokes (RANS) approach in which the laminar, flamelet and distributed combustion models have been implemented. Numerical tests have been carried out by varying ignition position, duct diameter and length. Results show that the severity of ducted explosions is mainly driven by the vigorous secondary explosion occurring in the duct (burn-up) rather than by the duct flow resistance or acoustic enhancement. Moreover, it has been found that the burn-up affects explosion severity through the reduction of the venting rate rather than through burning rate enhancement by turbulization.
NASA Technical Reports Server (NTRS)
Richardson, Brian R.; Braman, Kalem; West, Jeff
2016-01-01
NASA Marshall Space Flight Center (MSFC) has embarked upon a joint project with the Air Force to improve the state-of-the-art of space application combustion device design and operational understanding. One goal of the project is to design, build and hot-fire test a 40,000 pound-thrust Oxygen/Rocket Propellant-2 (RP-2) Oxygen-Rich staged engine at MSFC. The overall project goals afford the opportunity to test multiple different injector designs and experimentally evaluate any effect on engine performance and combustion dynamics. To maximize the available test resources and benefits, pre-test, combusting flow, Computational Fluid Dynamics (CFD) analysis was performed on the individual injectors to guide the design. The results of the CFD analysis were used to design the injectors for specific, targeted fluid dynamic features, and the analysis results also provided some predictive input for acoustic and thermal analysis of the main Thrust Chamber Assembly (TCA). MSFC has developed and demonstrated the ability to utilize a computationally efficient, flamelet-based combustion model to guide the pre-test design of single-element Gas Centered Swirl Coaxial (GCSC) injectors. Previous Oxygen/RP-2 simulation models, utilizing the Loci-STREAM flow solver, were validated using single injector test data from the EC-1 Air Force test facility. The simulation effort herein is an extension of the validated, CFD driven, single-injector design approach applied to single injectors which will be part of a larger engine array. Time-accurate, three-dimensional CFD simulations were performed for five different classes of injector geometries. Simulations were performed to guide the design of the injector to achieve a variety of intended performance goals. For example, two GCSC injectors were designed to achieve stable hydrodynamic behavior of the propellant circuits while providing the largest thermal margin possible within the design envelope, while another injector was designed to purposefully create a hydrodynamic instability in the fuel supply circuit as predicted by the CFD analysis. Future multi-injector analysis and testing will indicate what, if any, changes occur in the predicted behavior for the single-element injector when the same injector geometry is placed in a multi-element array.
Dynamics and structure of turbulent premixed flames
NASA Technical Reports Server (NTRS)
Bilger, R. W.; Swaminathan, N.; Ruetsch, G. R.; Smith, N. S. A.
1995-01-01
In earlier work (Mantel & Bilger, 1994) the structure of the turbulent premixed flame was investigated using statistics based on conditional averaging with the reaction progress variable as the conditioning variable. The DNS data base of Trouve and Poinsot (1994) was used in this investigation. Attention was focused on the conditional dissipation and conditional axial velocity in the flame with a view to modeling these quantities for use in the conditional moment closure (CMC) approach to analysis of kinetics in premixed flames (Bilger, 1993). Two remarkable findings were made: there was almost no acceleration of the axial velocity in the flame front itself; and the conditional scalar dissipation remained as high as, or higher than, that found in laminar premixed flames. The first finding was surprising since in laminar flames all the fluid acceleration occurs through the flame front, and this could be expected also for turbulent premixed flames at the flamelet limit. The finding gave hope of inventing a new approach to the dynamics of turbulent premixed flames through use of rapid distortion theory or an unsteady Bernoulli equation. This could lead to a new second order closure for turbulent premixed flames. The second finding was contrary to our measurements with laser diagnostics in lean hydrocarbon flames where it is found that conditional scalar dissipation drops dramatically below that for laminar flamelets when the turbulence intensity becomes high. Such behavior was not explainable with a one-step kinetic model, even at non-unity Lewis number. It could be due to depletion of H2 from the reaction zone by preferential diffusion. The capacity of the flame to generate radicals is critically dependent on the levels of H2 present (Bilger et al., 1991). It seemed that a DNS computation with a multistep reduced mechanism would be worthwhile if a way could be found to make this feasible. Truly innovative approaches to complex problems often come only when there is the opportunity to work close at hand with the (in this case numerical) experimental data. Not only can one spot patterns and relationships in the data which could be important, but one can also get to know the limitations of the technique being used, so that when the next experiment is being designed it will address resolvable questions. A three-year grant from the Australian Research Council has enabled us to develop a small capability at the University of Sydney to work on DNS of turbulent reacting flow, and to analyze data bases generated at CTR. Collaboration between the University of Sydney and CTR is essential to this project and finding a workable modus operandi for this collaboration, given the constraints involved, has been a major objective of the past year's effort. The overall objectives of the project are: (1) to obtain a quantitative understanding of the dynamics of turbulent premixed flames at high turbulence levels with a view to developing improved second order closure models; and (2) to carry out new DNS experiments on turbulent premixed flames using a carefully chosen multistep reduced mechanism for the chemical kinetics, with a view to elucidating the laser diagnostic findings that are contrary to the findings for DNS using one-step kinetics.
In this first year the objectives have been to make the existing CTR data base more accessible to coworkers at the University of Sydney, to make progress on understanding the dynamics of the flame in this existing CTR data base, and to carefully construct a suitable multistep reduced mechanism for use in a new set of DNS experiments on turbulent premixed flames.
Modulation of a methane Bunsen flame by upstream perturbations
NASA Astrophysics Data System (ADS)
de Souza, T. Cardoso; Bastiaans, R. J. M.; De Goey, L. P. H.; Geurts, B. J.
2017-04-01
In this paper the effects of an upstream spatially periodic modulation acting on a turbulent Bunsen flame are investigated using direct numerical simulations of the Navier-Stokes equations coupled with the flamelet generated manifold (FGM) method to parameterise the chemistry. The premixed Bunsen flame is spatially agitated with a set of coherent large-scale structures of specific wave-number, K. The response of the premixed flame to the external modulation is characterised in terms of time-averaged properties, e.g. the average flame height ⟨H⟩ and the flame surface wrinkling ⟨W⟩. Results show that the flame response is notably selective to the size of the length scales used for agitation. For example, both flame quantities ⟨H⟩ and ⟨W⟩ present an optimal response, in comparison with an unmodulated flame, when the modulation scale is set to relatively low wave-numbers, 4π/L ≲ K ≲ 6π/L, where L is a characteristic scale. At the agitation scales where the optimal response is observed, the average flame height, ⟨H⟩, takes a clearly defined minimal value while the surface wrinkling, ⟨W⟩, presents an increase by more than a factor of 2 in comparison with the unmodulated reference case. Combined, these two response quantities indicate that there is an optimal scale for flame agitation and intensification of combustion rates in turbulent Bunsen flames.
NASA Technical Reports Server (NTRS)
Spinks, Debra (Compiler)
1997-01-01
This report contains the 1997 annual progress reports of the research fellows and students supported by the Center for Turbulence Research (CTR). Titles include: Invariant modeling in large-eddy simulation of turbulence; Validation of large-eddy simulation in a plain asymmetric diffuser; Progress in large-eddy simulation of trailing-edge turbulence and aeronautics; Resolution requirements in large-eddy simulations of shear flows; A general theory of discrete filtering for LES in complex geometry; On the use of discrete filters for large eddy simulation; Wall models in large eddy simulation of separated flow; Perspectives for ensemble average LES; Anisotropic grid-based formulas for subgrid-scale models; Some modeling requirements for wall models in large eddy simulation; Numerical simulation of 3D turbulent boundary layers using the V2F model; Accurate modeling of impinging jet heat transfer; Application of turbulence models to high-lift airfoils; Advances in structure-based turbulence modeling; Incorporating realistic chemistry into direct numerical simulations of turbulent non-premixed combustion; Effects of small-scale structure on turbulent mixing; Turbulent premixed combustion in the laminar flamelet and the thin reaction zone regime; Large eddy simulation of combustion instabilities in turbulent premixed burners; On the generation of vorticity at a free-surface; Active control of turbulent channel flow; A generalized framework for robust control in fluid mechanics; Combined immersed-boundary/B-spline methods for simulations of flow in complex geometries; and DNS of shock boundary-layer interaction - preliminary results for compression ramp flow.
A large eddy simulation scheme for turbulent reacting flows
NASA Technical Reports Server (NTRS)
Gao, Feng
1993-01-01
The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real world problems. Given that direct numerical simulation (DNS) cannot solve engineering flow problems in the foreseeable future (Reynolds 1989), LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block for introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none of them has a wide range of applicability. For example, some of the models in combustion have been based on the flamelet assumption, which is only valid for relatively fast reactions. Some other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is certainly not valid for fast and non-isothermal reactions. The probability density function (PDF) method can be usefully employed to deal with the modeling of the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation for the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.
Modelling thermal radiation from one-meter diameter methane pool fires
NASA Astrophysics Data System (ADS)
Consalvi, J. L.; Demarco, R.
2012-06-01
The first objective of this article is to implement a comprehensive radiation model in order to predict the radiant fractions and radiative fluxes on remote surfaces in large-scale methane pool fires. The second aim is to quantify the importance of Turbulence-Radiation Interactions (TRIs) in such buoyant flames. The fire-induced flow is modelled by using a buoyancy-modified k-ɛ model and the Steady Laminar Flamelet (SLF) model coupled with a presumed probability density function (pdf) approach. Spectral radiation is modelled by using the Full-Spectrum Correlated-k (FSCK) method. TRIs are taken into account by considering the Optically-Thin Fluctuation Approximation (OTFA). The emission term and the mean absorption coefficient are closed by using a presumed pdf of the mixture fraction, scalar dissipation rate and enthalpy defect. Two 1m-diameter fires with Heat Release Rates (HRR) of 49 kW and 162 kW were simulated. Predicted radiant fractions and radiative heat fluxes are found in reasonable agreement with experimental data. The importance of TRIs is evidenced, computed radiant fractions and radiative heat fluxes being considerably higher than those obtained from calculations based on mean properties. Finally, model results show that the complete absorption coefficient-Planck function correlation should be considered in order to properly take into account the influence of TRIs on the emission term, whereas the absorption coefficient self-correlation in the absorption term reduces significantly the radiant fractions.
Velocity and scalar fields of turbulent premixed flame in stagnation flow
NASA Astrophysics Data System (ADS)
Cho, P.; Law, C. K.; Cheng, R. K.; Shepherd, I. G.
1988-08-01
Detailed experimental measurements of the scalar and velocity statistics of premixed methane/air flames stabilized by a stagnation plate are reported. Conditioned and unconditioned velocities of two components and the reaction progress variable are measured using a two-component laser Doppler velocimetry technique and a Mie scattering technique, respectively. Experimental conditions cover equivalence ratios of 0.9 and 1.0, incident turbulence intensities of 0.3 to 0.45 m/s, and global stretch rates of 100 to 150 s^-1. The experimental results are analyzed in the context of the Bray-Moss-Libby flamelet model of these flames. The results indicate that there is no turbulence production within the turbulent flame brush and that the second- and third-order turbulent transport terms reduce to functions of the difference between the conditioned mean velocities. Normalizing these relative velocities by the respective velocity increase across laminar flames suggests that the mean unconditioned velocity profiles are self-similar.
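A minimal statement of the Bray-Moss-Libby decomposition referred to above, in its standard textbook form (not transcribed from the paper): with conditional mean velocities $\bar{u}_r$ in the reactants and $\bar{u}_p$ in the products, and Favre-averaged progress variable $\widetilde{c}$,

\[
\widetilde{u} \;=\; (1 - \widetilde{c})\,\bar{u}_r + \widetilde{c}\,\bar{u}_p,
\qquad
\widetilde{u''c''} \;=\; \widetilde{c}\,(1 - \widetilde{c})\,\big(\bar{u}_p - \bar{u}_r\big),
\]

so the second-order (and, under the same bimodal assumption, third-order) turbulent transport terms depend only on the conditional velocity difference, which is how the measurements above are interpreted.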
Experimental, theoretical, and numerical studies of small scale combustion
NASA Astrophysics Data System (ADS)
Xu, Bo
Recently, demand has increased for the development of microdevices such as microsatellites, micro aerial vehicles, micro reactors, and micro power generators. To meet those demands, the biggest challenge is obtaining stable and complete combustion at relatively small scales. To gain a fundamental understanding of small scale combustion, in this thesis the thermal and kinetic coupling between the gas phase and the structure at meso and micro scales was studied theoretically, experimentally, and numerically; new stabilization and instability phenomena were identified; and new theories for the dynamic mechanisms of small scale combustion were developed. The reduction of thermal inertia at small scale significantly reduces the response time of the wall and leads to a strong flame-wall coupling and an extension of burning limits. Mesoscale flame propagation and extinction in small quartz tubes were studied theoretically, experimentally and numerically. It was found that wall-flame interaction in mesoscale combustion led to two different flame regimes, a heat-loss dominated fast flame regime and a wall-flame coupled slow flame regime. The nonlinear transition between the two flame regimes was strongly dependent on the channel width and flow velocity. It is concluded that the existence of multiple flame regimes is an inherent phenomenon in mesoscale combustion. In addition, all practical combustors have variable channel width in the direction of flame propagation. Quasi-steady and unsteady propagation of methane- and propane-air premixed flames in a mesoscale divergent channel was investigated experimentally and theoretically. The emphasis was on the impact of the variable cross-sectional area and the flame-wall coupling on the flame transition between different regimes and the onset of flame instability. For the first time, spinning flames were experimentally observed for both lean and rich methane- and propane-air mixtures over a broad range of equivalence ratios. An effective Lewis number was defined to describe the competition between the mass transport in the gas phase and the heat conduction in the gas and solid phases. Experimental observation and theoretical analysis suggested that the flame-wall coupling significantly increased the effective Lewis number and led to a new mechanism that promotes thermal-diffusive instability. Due to the short flow residence time in small scale combustion, reactants and oxidizers may not be fully premixed before combustion. As such, non-premixed combustion plays an important role. Non-premixed mixing-layer combustion within a constrained mesoscale channel was studied. Depending on the flow rate, it was found that there were two different flame regimes, an unsteady bimodal flame regime and a flame street regime with multiple stable triple flamelets. This multiple triple flame structure was identified experimentally for the first time. A scaling analytical model was developed to qualitatively explain the mechanism of flame streets. The effects of flow velocity, wall temperature, and Lewis number on the distance between flamelets and the diffusion flame length were also investigated. The results showed that the occurrence of the flame street regime was a combined effect of heat loss, curvature, diffusion, and dilution. To complete this thesis, experiments were conducted to measure the OH concentration using Planar Laser Induced Fluorescence (PLIF) in a confined mesoscale combustor.
Some preliminary results were obtained for the OH concentration of flamelets in a flame street. When the scale of the micro reactor is further reduced, the rarefied gas effect may become significant. In this thesis, a new concentration slip model was derived to describe the rarefied gas effect on species transport in microscale chemical reactors. The present model is general and recovers the existing models in the limiting cases. The analytical results showed that the concentration slip is dominated by two different mechanisms, the surface-reaction-induced concentration slip (RIC) and the temperature-slip-induced concentration slip (TIC). The magnitude of the RIC slip was found to be proportional to the product of the Damkohler number and the Knudsen number. The results showed that the impact of RIC slip effects on catalytic reactions strongly depends on the Damkohler number, the Knudsen number, and the surface accommodation coefficient.
A New LES/PDF Method for Computational Modeling of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Muradoglu, Metin; Pope, Stephen B.
2013-11-01
A new LES/PDF method is developed for computational modeling of turbulent reacting flows. The open source package OpenFOAM is adopted as the LES solver and combined with the particle-based Monte Carlo method to solve the LES/PDF model equations. The dynamic Smagorinsky model is employed to account for the subgrid-scale motions. The LES solver is first validated for the Sandia Flame D using a steady flamelet method in which the chemical compositions, density and temperature fields are parameterized by the mean mixture fraction and its variance. In this approach, the modeled transport equations for the mean mixture fraction and the mean square of the mixture fraction are solved and the variance is then computed from its definition. The results are found to be in good agreement with the experimental data. Then the LES solver is combined with the particle-based Monte Carlo algorithm to form a complete solver for the LES/PDF model equations. The in situ adaptive tabulation (ISAT) algorithm is incorporated into the LES/PDF method for efficient implementation of detailed chemical kinetics. The LES/PDF method is also applied to the Sandia Flame D using the GRI-Mech 3.0 chemical mechanism and the results are compared with the experimental data and the earlier PDF simulations. Supported by the Scientific and Technical Research Council of Turkey (TUBITAK), Grant No. 111M067.
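A minimal sketch of the variance evaluation described above: transport equations are solved for the mean mixture fraction and for the mean of its square, and the variance follows from its definition. The cell values below are placeholders standing in for LES field data.

```python
import numpy as np

# Placeholder solved fields (one value per LES cell).
Zmean  = np.array([0.05, 0.12, 0.25, 0.40])      # filtered mixture fraction
Z2mean = np.array([0.004, 0.020, 0.080, 0.180])  # filtered square of the mixture fraction

# Variance from its definition, clipped to keep it realizable; the (Zmean, Zvar) pair
# then parameterizes the steady-flamelet lookup of composition, density and temperature.
Zvar = np.maximum(Z2mean - Zmean**2, 0.0)
print(Zvar)
```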
Simulations of sooting turbulent jet flames using a hybrid flamelet/stochastic Eulerian field method
NASA Astrophysics Data System (ADS)
Consalvi, Jean-Louis; Nmira, Fatiha; Burot, Daria
2016-03-01
The stochastic Eulerian field method is applied to simulate 12 turbulent C1-C3 hydrocarbon jet diffusion flames covering a wide range of Reynolds numbers and fuel sooting propensities. The joint scalar probability density function (PDF) is a function of the mixture fraction, enthalpy defect, scalar dissipation rate and representative soot properties. Soot production is modelled by a semi-empirical acetylene/benzene-based soot model. Spectral gas and soot radiation is modelled using a wide-band correlated-k model. Emission turbulence-radiation interactions (TRIs) are taken into account by means of the PDF method, whereas absorption TRIs are modelled using the optically thin fluctuation approximation. Model predictions are found to be in reasonable agreement with experimental data in terms of flame structure, soot quantities and radiative loss. Mean soot volume fractions are predicted within a factor of two of the experiments, whereas radiant fractions and peaks of wall radiative fluxes are within 20%. The study also aims to assess approximate radiative models, namely the optically thin approximation (OTA) and the grey medium approximation. These approximations significantly affect the radiative loss and should be avoided if accurate predictions of the radiative flux are desired. At atmospheric pressure, the relative errors that they produce on the peaks of temperature and soot volume fraction are within both experimental and model uncertainties. However, these discrepancies are found to increase with pressure, suggesting that spectral models properly describing self-absorption should be considered at above-atmospheric pressures.
Large eddy simulations of a bluff-body stabilized hydrogen-methane jet flame
NASA Astrophysics Data System (ADS)
Drozda, Tomasz; Pope, Stephen
2005-11-01
Large eddy simulation (LES) is conducted of the turbulent bluff-body stabilized hydrogen-methane flame considered in the experiments of the Combustion Research Facility at the Sandia National Laboratories and of the Thermal Research Group at the University of Sydney [1]. Both reacting and non-reacting flows are considered. The subgrid scale (SGS) closure in LES is based on the scalar filtered mass density function (SFMDF) methodology [2]. A flamelet model is used to relate the chemical composition to the mixture fraction. The modeled SFMDF transport equation is solved by a hybrid finite-difference (FD) / Monte Carlo (MC) scheme. The FD component of the hybrid solver is validated by comparing the experimentally available flow statistics with those predicted by LES. The results via this method capture important features of the flames as observed experimentally. [1] A. R. Masri, R. W. Dibble, and R. S. Barlow. The structure of turbulent nonpremixed flames revealed by Raman-Rayleigh-LIF measurements. Prog. Energy Combust. Sci., 22:307-362, 1996. [2] F. A. Jaberi, P. J. Colucci, S. James, P. Givi, and S. B. Pope. Filtered mass density function for large eddy simulation of turbulent reacting flows. J. Fluid Mech., 401:85-121, 1999.
An abstraction layer for efficient memory management of tabulated chemistry and flamelet solutions
NASA Astrophysics Data System (ADS)
Weise, Steffen; Messig, Danny; Meyer, Bernd; Hasse, Christian
2013-06-01
A large number of methods for simulating reactive flows exist; some of them, for example, directly use detailed chemical kinetics, while others use precomputed and tabulated flame solutions. Both approaches couple the research fields of computational fluid dynamics and chemistry tightly together, using either an online or offline approach to solve the chemistry domain. The offline approach usually involves a method of generating databases or so-called lookup tables (LUTs). As these LUTs are extended to contain not only material properties but also interactions between chemistry and turbulent flow, the number of parameters and thus dimensions increases. Given a reasonable discretisation, file sizes can increase drastically. The main goal of this work is to provide methods that handle large database files efficiently. A Memory Abstraction Layer (MAL) has been developed that handles requested LUT entries efficiently by splitting the database file into several smaller blocks. It keeps the total memory usage at a minimum using thin allocation methods and compression to minimise filesystem operations. The MAL has been evaluated using three different test cases. The first, rather generic, one is a sequential reading operation on an LUT to evaluate the runtime behaviour as well as the memory consumption of the MAL. The second test case is a simulation of a non-premixed turbulent flame, the so-called HM1 flame, which is a well-known test case in the turbulent combustion community. The third test case is a simulation of a non-premixed laminar flame as described by McEnally in 1996 and Bennett in 2000. Using the previously developed solver 'flameletFoam' in conjunction with the MAL, the memory consumption and the performance penalty introduced were studied. The total memory used while running a parallel simulation was reduced significantly while the CPU time overhead associated with the MAL remained low.
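A minimal sketch in the spirit of the block-splitting idea described above: the table is stored as compressed blocks that are decompressed on demand through a small cache, so resident memory stays far below the full table size. This is an illustration under those assumptions, not the actual MAL/flameletFoam implementation.

```python
import zlib
import numpy as np

class BlockedTable:
    """Toy block-based lookup table: compressed blocks, decompressed on demand, small cache."""
    def __init__(self, table, block_rows=1024, cache_size=4):
        self.block_rows = block_rows
        self.ncols = table.shape[1]
        # Split the table row-wise into blocks and keep each block compressed in memory.
        self.blocks = [zlib.compress(table[i:i + block_rows].tobytes())
                       for i in range(0, table.shape[0], block_rows)]
        self.cache = {}                      # block index -> decompressed ndarray
        self.cache_size = cache_size

    def row(self, i):
        b, r = divmod(i, self.block_rows)
        if b not in self.cache:
            if len(self.cache) >= self.cache_size:        # simple FIFO eviction
                self.cache.pop(next(iter(self.cache)))
            data = np.frombuffer(zlib.decompress(self.blocks[b]), dtype=np.float64)
            self.cache[b] = data.reshape(-1, self.ncols)
        return self.cache[b][r]

# Example: a dummy 100k-entry "flamelet table" with 8 tabulated quantities per entry.
table = np.random.rand(100000, 8)
lut = BlockedTable(table)
print(lut.row(42))    # identical to table[42], but only one block is resident uncompressed
```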
Hysteresis of mode transition in a dual-struts based scramjet
NASA Astrophysics Data System (ADS)
Yan, Zhang; Shaohua, Zhu; Bing, Chen; Xu, Xu
2016-11-01
Tests and numerical simulations were performed to investigate the combustion performance of a dual-staged scramjet combustor. High-enthalpy vitiated inflow at a total temperature of 1231 K was supplied using a hydrogen-combustion heater. The inlet Mach number was 2.0. Liquid kerosene was injected into the combustor using the dual crossed struts. The three-dimensional Reynolds-averaged reacting flow was solved using a two-equation k-ω SST turbulence model to calculate the effect of turbulent stress, and a partially premixed flamelet model to account for the effects of turbulence-chemistry interactions. The discrete phase model was utilized to simulate the fuel atomization and vaporization. For simplicity, n-decane was chosen as the surrogate fuel with a reaction mechanism of 40 species and 141 steps. The predicted wall pressure profiles for three fuel injection schemes basically captured the axial trend of the experimental data. With the downstream equivalence ratio held constant, the upstream equivalence ratio was numerically increased from 0.1 to 0.4 until steady combustion was obtained. Subsequently, the upstream equivalence ratio was decreased from 0.4 back to 0.1. Two ramjet modes with different wall pressure profiles and corresponding flow structures were captured at the identical upstream equivalence ratio of 0.1, illustrating an obvious hysteresis phenomenon. The mechanism of this hysteresis was explained by the transition hysteresis of the pre-combustion shock train in the isolator.
NASA Astrophysics Data System (ADS)
Mameri, A.; Tabet, F.; Hadef, A.
2017-08-01
This study addresses the influence of several operating conditions (composition and ambient pressure) on biogas diffusion flame structure and NO emissions, with particular attention to the thermal and chemical effects of CO2. The biogas flame is modeled as a counterflow diffusion flame and analyzed in mixture fraction space using the flamelet approach. The GRI-Mech 3.0 mechanism, which involves 53 species and 325 reactions, is adopted for the oxidation chemistry. It has been observed that flame properties are very sensitive to biogas composition and pressure. CO2 addition decreases the flame temperature through both thermal and chemical effects. Added CO2 may participate in chemical reactions through thermal dissociation (chemical effect), while excess CO2 acts as a pure diluent (thermal effect). A rise in ambient pressure increases the temperature and reduces the flame thickness, radiation losses and the amount of dissociation. At high pressure, recombination reactions, coupled with the reduction of chain-carrier radicals, diminish the NO mass fraction.
Flame Shapes of Luminous NonBuoyant Laminar Coflowing Jet Diffusion Flames
NASA Technical Reports Server (NTRS)
Lin, K.-C.; Faeth, G. M.
1999-01-01
Laminar diffusion flames are of interest as model flame systems that are more tractable for analysis and experiments than practical turbulent diffusion flames. Certainly, understanding laminar flames must precede understanding more complex turbulent flames, while many laminar diffusion flame properties are directly relevant to turbulent diffusion flames through laminar flamelet concepts. Laminar diffusion flame shapes have been of interest since the classical study of Burke and Schumann because they involve a simple nonintrusive measurement that is convenient for evaluating flame structure predictions. Motivated by these observations, the shapes of laminar flames were considered during the present investigation. The present study was limited to nonbuoyant flames because most practical flames are not buoyant. Effects of buoyancy were minimized by observing flames having large flow velocities at small pressures. Present methods were based on the study of the shapes of nonbuoyant round laminar jet diffusion flames of Lin et al., where it was found that a simple analysis due to Spalding yielded good predictions of the flame shapes reported by Urban et al. and Sunderland et al.
NASA Astrophysics Data System (ADS)
Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.
2016-07-01
Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not account for the interaction between turbulence and chemical kinetics, nor for the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed-PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
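A sketch of the "standard" presumed-PDF closure named above: a β-distribution in Z, parameterised by its mean and variance, and a Dirac-distribution in C, so a tabulated quantity is averaged by integrating over Z only. The flamelet profile and all numbers below are made-up placeholders for illustration.

```python
import numpy as np
from scipy.stats import beta

def presumed_beta_average(Z_table, phi_table, Zmean, Zvar):
    """Average a tabulated quantity phi(Z) against a beta-PDF with the given mean/variance."""
    Zvar = min(max(Zvar, 1e-6), Zmean * (1.0 - Zmean) - 1e-6)   # keep the parameters realizable
    a = Zmean * (Zmean * (1.0 - Zmean) / Zvar - 1.0)
    b = a * (1.0 - Zmean) / Zmean
    pdf = beta.pdf(Z_table, a, b)
    return np.trapz(pdf * phi_table, Z_table) / np.trapz(pdf, Z_table)

# Placeholder flamelet profile: temperature peaking near an assumed stoichiometric Z of 0.055.
Z = np.linspace(1e-4, 1.0 - 1e-4, 400)
T_flamelet = 300.0 + 1900.0 * np.exp(-((Z - 0.055) / 0.05) ** 2)

print(presumed_beta_average(Z, T_flamelet, Zmean=0.055, Zvar=1e-4))   # near-laminar peak value
print(presumed_beta_average(Z, T_flamelet, Zmean=0.055, Zvar=3e-3))   # lower due to fluctuations
```

The alternative closures in the paper replace the Dirac-distribution in C with a β-distribution or an SMLD, which adds an integration over C of the same kind.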
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Tianfeng
The goal of the proposed research is to create computational flame diagnostics (CFLD) that are rigorous numerical algorithms for systematic detection of critical flame features, such as ignition, extinction, and premixed and non-premixed flamelets, and to understand the underlying physicochemical processes controlling limit flame phenomena, flame stabilization, turbulence-chemistry interactions and pollutant emissions etc. The goal has been accomplished through an integrated effort on mechanism reduction, direct numerical simulations (DNS) of flames at engine conditions and a variety of turbulent flames with transport fuels, computational diagnostics, turbulence modeling, and DNS data mining and data reduction. The computational diagnostics are primarily based on the chemical explosive mode analysis (CEMA) and a recently developed bifurcation analysis using datasets from first-principle simulations of 0-D reactors, 1-D laminar flames, and 2-D and 3-D DNS (collaboration with J.H. Chen and S. Som at Argonne, and C.S. Yoo at UNIST). Non-stiff reduced mechanisms for transportation fuels amenable for 3-D DNS are developed through graph-based methods and timescale analysis. The flame structures, stabilization mechanisms, local ignition and extinction etc., and the rate controlling chemical processes are unambiguously identified through CFLD. CEMA is further employed to segment complex turbulent flames based on the critical flame features, such as premixed reaction fronts, and to enable zone-adaptive turbulent combustion modeling.
Particle-Image Velocimetry in Microgravity Laminar Jet Diffusion Flames
NASA Technical Reports Server (NTRS)
Sunderland, P. B.; Greenberg, P. S.; Urban, D. L.; Wernet, M. P.; Yanis, W.
1999-01-01
This paper discusses planned velocity measurements in microgravity laminar jet diffusion flames. These measurements will be conducted using Particle-Image Velocimetry (PIV) in the NASA Glenn 2.2-second drop tower. The observations are of fundamental interest and may ultimately lead to improved efficiency and decreased emissions from practical combustors. The velocity measurements will support the evaluation of analytical and numerical combustion models. There is strong motivation for the proposed microgravity flame configuration. Laminar jet flames are fundamental to combustion and their study has contributed to myriad advances in combustion science, including the development of theoretical, computational and diagnostic combustion tools. Nonbuoyant laminar jet flames are pertinent to the turbulent flames of more practical interest via the laminar flamelet concept. The influence of gravity on these flames is deleterious: it complicates theoretical and numerical modeling, introduces hydrodynamic instabilities, decreases length scales and spatial resolution, and limits the variability of residence time. Whereas many normal-gravity laminar jet diffusion flames have been thoroughly examined (including measurements of velocities, temperatures, compositions, sooting behavior and emissive and absorptive properties), measurements in microgravity gas-jet flames have been less complete and, notably, have included only cursory velocity measurements. It is envisioned that our velocity measurements will fill an important gap in the understanding of nonbuoyant laminar jet flames.
Afterburning in spherical premixed turbulent explosions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, D.; Lawes, M.; Scott, M.J.
1994-12-01
During the early stages of spherical turbulent flame propagation, more than half of the gas behind the visible flame front may be unburned. Previous models of the afterburning of the gas behind the apparent flame front have been extended in the present work to include the effects of flame quenching consequent upon localized flame stretch. The predictions of the model cover the spatial and temporal variations of the fraction burned, the flame propagation rate, and the mass burning rate. They are all in dimensionless form and are well supported by associated experimental measurements in a fan-stirred bomb with controlled turbulence. The proportion of the gas that is unburned decreases with time and increases with the product of the Karlovitz stretch factor and the Lewis number. Simultaneous photographs were taken of the spherical schlieren image and of that due to Mie scattering from small seed particles in a thin laser sheet that sectioned the spherical flame. These clearly showed the amount of unburned gas within the sphere and, along with other evidence, suggest laminar flamelet burning across a scale of distance close to the Taylor microscale, and confirm the predictions of the fraction of gas unburned and of the rate at which it is burning.
Heat and mass transfer in flames
NASA Technical Reports Server (NTRS)
Faeth, G. M.
1986-01-01
Heat- and mass-transfer processes in turbulent diffusion flames are discussed, considering turbulent mixing and the structure of single-phase flames, drop processes in spray flames, and nonluminous and luminous flame radiation. Interactions between turbulence and other phenomena are emphasized, concentrating on past work of the author and his associates. The conserved-scalar formalism, along with the laminar-flamelet approximation, is shown to provide reasonable estimates of the structure of gas flames, with modest levels of empiricism. Extending this approach to spray flames has highlighted the importance of drop/turbulence interactions; e.g., turbulent dispersion of drops, modification of turbulence by drops, etc. Stochastic methods being developed to treat these phenomena are yielding encouraging results.
NASA Astrophysics Data System (ADS)
Lipatnikov, Andrei N.; Chomiak, Jerzy; Sabelnikov, Vladimir A.; Nishiki, Shinnosuke; Hasegawa, Tatsuya
2018-01-01
Data obtained in 3D direct numerical simulations of statistically planar, 1D weakly turbulent flames characterised by different density ratios σ are analysed to study the influence of thermal expansion on flame surface area and burning rate. Results show that, on the one hand, the pressure gradient induced within a flame brush owing to heat release in flamelets significantly accelerates the unburned gas that deeply intrudes into the combustion products in the form of an unburned mixture finger, thus causing large-scale oscillations of the burning rate and flame brush thickness. Under the conditions of the present simulations, the contribution of this mechanism to the creation of the flame surface area is substantial and is increased by σ, thus implying an increase in the burning rate by σ. On the other hand, the total flame surface areas simulated at σ = 7.53 and 2.5 are approximately equal. The apparent inconsistency between these results implies the existence of another thermal expansion effect that reduces the influence of σ on the flame surface area and burning rate. Investigation of the issue shows that the flow acceleration by the combustion-induced pressure gradient not only creates the flame surface area by pushing the finger tip into the products, but also mitigates wrinkling of the flame surface (the side surface of the finger) by turbulent eddies. The latter effect is attributed to the high-speed (at σ = 7.53) axial flow of the unburned gas, which is induced by the axial pressure gradient within the flame brush (and the finger). This axial flow acceleration reduces the residence time of a turbulent eddy in an unburned zone of the flame brush (e.g. within the finger). Therefore, the capability of the eddy for wrinkling the flamelet surface (e.g. the side finger surface) is weakened owing to a shorter residence time.
NASA Astrophysics Data System (ADS)
Lee, Jaeseo; Lee, Gwang G.; Huh, Kang Y.
2014-12-01
This paper presents validation of new analytical expressions for the turbulent burning velocity, S_T, based on asymptotic behavior at the leading edge (LE) in turbulent premixed combustion. Reaction and density variation are assumed to be negligible at the LE to avoid the cold boundary difficulty in the statistically steady state. Good agreement is shown for the slopes, dS_T/du', with respect to L_c/δ_f at low turbulence, with both normalized by those of the reference cases. δ_f is the inverse of the maximum gradient of the reaction progress variable through an unstretched laminar flame, and L_c is the characteristic length scale given as the burner diameter or the measured integral length scale. Comparison is made for thirty-five datasets involving different fuels, equivalence ratios, H2 fractions in fuel, pressures, and integral length scales from eight references [R. C. Aldredge et al., "Premixed-flame propagation in turbulent Taylor-Couette flow," Combust. Flame 115, 395 (1998); M. Lawes et al., "The turbulent burning velocity of iso-octane/air mixtures," Combust. Flame 159, 1949 (2012); H. Kido et al., "Influence of local flame displacement velocity on turbulent burning velocity," Proc. Combust. Inst. 29, 1855 (2002); J. Wang et al., "Correlation of turbulent burning velocity for syngas/air mixtures at high pressure up to 1.0 MPa," Exp. Therm. Fluid Sci. 50, 90 (2013); H. Kobayashi et al., "Experimental study on general correlation of turbulent burning velocity at high pressure," Proc. Combust. Inst. 27, 941 (1998); C. W. Chiu et al., "High-pressure hydrogen/carbon monoxide syngas turbulent burning velocities measured at constant turbulent Reynolds numbers," Int. J. Hydrogen Energy 37, 10935 (2012); P. Venkateswaran et al., "Pressure and fuel effects on turbulent consumption speeds of H2/CO blends," Proc. Combust. Inst. 34, 1527 (2013); M. Fairweather et al., "Turbulent burning rates of methane and methane-hydrogen mixtures," Combust. Flame 156, 780 (2009)]. The turbulent burning velocity is shown to increase as the flamelet thickness, δ_f, decreases at high pressure, for equivalence ratios slightly rich or close to stoichiometric, and for mixtures with a high H2 fraction. Two constants are involved: C, to scale the turbulent diffusivity as a product of the turbulence intensity and the characteristic length scale, and C_s, to relate δ_f to the mean effective L_m, where L_m = D_m^u / S_L^{u0} is the scale of exponential decay at the LE of an unstretched laminar flame. The combined constant, KC/C_s, is adjusted to match measured turbulent burning velocities at low turbulence in each of the eight different experimental setups. All measured S_T/S_L^{u0} values follow the line K D_t^u / D_m^u + 1 at low turbulent intensities and show bending below the line, due to positive mean curvature and broadened flamelet thickness, at high turbulent intensities. Further work is required to determine the constants C_s and K, and the factor (L_m / L_m^* - L_m (∇ · n)_f), that are responsible for bending in different conditions of laminar flamelet and incoming turbulence.
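A minimal sketch of the low-turbulence scaling quoted above, S_T/S_L^{u0} = K D_t^u/D_m^u + 1 with D_t^u modelled as the product of turbulence intensity and characteristic length scale. The constants K and C, and all inputs, are illustrative placeholders rather than the fitted values of the paper, and the linear relation deliberately omits the bending observed at high turbulent intensities.

```python
def turbulent_burning_velocity(SL0, u_prime, Lc, Dm, K=0.1, C=0.1):
    """Low-turbulence estimate S_T = S_L0 * (K*Dt/Dm + 1) with Dt = C*u'*Lc (placeholder constants)."""
    Dt = C * u_prime * Lc
    return SL0 * (K * Dt / Dm + 1.0)

SL0 = 0.4       # unstretched laminar burning velocity [m/s] (assumed)
Dm  = 2.2e-5    # molecular diffusivity of the unburned mixture [m^2/s] (assumed)
Lc  = 0.005     # characteristic length scale, e.g. integral length scale [m] (assumed)

for u_prime in (0.1, 0.5, 1.0, 2.0):
    ST = turbulent_burning_velocity(SL0, u_prime, Lc, Dm)
    print(f"u' = {u_prime:4.1f} m/s  ->  S_T = {ST:5.2f} m/s  (S_T/S_L0 = {ST / SL0:4.1f})")
```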
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habibi, A.; Merci, B.; Roekaerts, D.
2007-10-15
Numerical simulation results are presented for two axisymmetric, nonluminous turbulent piloted jet diffusion flames: Sandia Flame D (SFD) and Delft Flame III (DFIII). Turbulence is represented by a Reynolds stress transport model, while chemistry is modeled by means of steady laminar flamelets. We use the preassumed PDF approach for turbulence-chemistry interaction. A weighted sum of gray gases model is used for the gas radiative properties. The radiative transfer equation is solved using the discrete ordinates method in the conservative finite-volume formulation. The radiative loss leads to a decrease in mean temperature, but does not significantly influence the flow and mixing fields, in terms either of mean values or of rms values of fluctuations. A systematic analysis of turbulence-radiation interaction (TRI) is carried out. By considering five different TRI formulations, and comparing also with a simple optically thin model, individual TRI contributions are isolated and quantified. For both flames, effects are demonstrated of (1) influence of temperature fluctuations on the mean Planck function, (2) temperature and composition fluctuations on the mean absorption coefficient, and (3) correlation between absorption coefficient and Planck function. The strength of the last effect is stronger in DFIII than in SFD, because of stronger turbulence-chemistry interaction and lower mean temperature in DFIII. The impact of the choice of TRI model on the prediction of the temperature-sensitive minor species NO is determined in a postprocessing step with fixed flow and mixing fields. Best agreement for NO is obtained using the most complete representation of TRI.
Parallel Adaptive Simulation of Detonation Waves Using a Weighted Essentially Non-Oscillatory Scheme
NASA Astrophysics Data System (ADS)
McMahon, Sean
The purpose of this thesis was to develop a code that could be used to gain a better understanding of the physics of detonation waves. First, a detonation was simulated in one dimension using ZND theory. Then, using the 1D solution as an initial condition, a detonation was simulated in two dimensions using a weighted essentially non-oscillatory scheme on an adaptive mesh with the smallest length scales equal to 2-3 flamelet lengths. The code development linking Chemkin for chemical kinetics to the adaptive mesh refinement flow solver was completed. The detonation evolved in a way that qualitatively matched experimental observations; however, the simulation was unable to progress past the formation of the triple point.
An Experimental Study of n-Heptane and JP-7 Extinction Limits in an Opposed Jet Burner
NASA Technical Reports Server (NTRS)
Convery, Janet L.; Pellett, Gerald L.; O'Brien, Walter F., Jr.; Wilson, Lloyd G.; Williams, John
2005-01-01
Propulsion engine combustor design and analysis requires experimentally verified data on the chemical kinetics of fuel. Among the important data is the combustion extinction limit as measured by observed maximum flame strain rate. The extinction limit relates to the ability to maintain a flame in a combustor during operation. Extinction limit data can be obtained for a given fuel by means of a laminar flame experiment using an opposed jet burner (OJB). Laminar extinction limit data can be applied to the turbulent application of a combustor via laminar flamelet modeling. The OJB consists of two axi-symmetric tubes (one for fuel and one for oxidizer), which produce a flat, disk-like counter-flow diffusion flame. This paper presents results of experiments to measure extinction limits for n-heptane and the military specification fuel JP-7, obtained from an OJB. JP-7 is an Air Force-developed fuel that continues to be important in the area of hypersonics. Because of its distinct properties it is currently the hydrocarbon fuel of choice for use in Scramjet engines. This study provides much-desired data for JP-7, for which very little information previously existed. The interest in n-heptane is twofold. First, there has been a significant amount of previous extinction limit study and resulting data with this fuel. Second, n-heptane (C7H16) is a pure substance, and therefore does not vary in composition as does JP-7, which is a mixture of several different hydrocarbons. These two facts allow for a baseline to be established by comparing the new OJB results to those previously taken. Additionally, the data set for n-heptane, which previously existed for mixtures up to 26 mole percent in nitrogen, is completed up to 100% n-heptane. The extinction limit data for the two fuels are compared, and complete experimental results are included.
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
Large Eddy Simulations of the Vortex-Flame Interaction in a Turbulent Swirl Burner
NASA Astrophysics Data System (ADS)
Lu, Zhen; Elbaz, Ayman M.; Hernandez Perez, Francisco E.; Roberts, William L.; Im, Hong G.
2017-11-01
A series of swirl-stabilized partially premixed flames are simulated using large eddy simulation (LES) along with the flamelet/progress variable (FPV) model for combustion. The target burner has separate and concentric methane and air streams, with methane in the center and the air flow swirled through tangential inlets. The flame is lifted in a straight quarl, leading to a partially premixed state. Fixing the swirl number and air flow rate, the fuel jet velocity is reduced to study flame stability as the flame approaches the lean blow-off limit. Simulation results are compared against measured data, yielding generally good agreement for the velocity, temperature, and species mass fraction distributions. The proper orthogonal decomposition (POD) method is applied to the velocity and progress variable fields to analyze the dominant unsteady flow structure, indicating a coupling between the precessing vortex core (PVC) and the flame. The effects of vortex-flame interactions on the stabilization of the lifted swirling flame are also investigated: the contributions of convection, enhanced mixing, and flame stretching introduced by the PVC are assessed based on the numerical results. This research work was sponsored by King Abdullah University of Science and Technology (KAUST) and used computational resources at the KAUST Supercomputing Laboratory.
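A minimal sketch of the snapshot POD used above to extract a dominant unsteady structure (such as the PVC) from a sequence of fields. The snapshot matrix here is random placeholder data; in practice each column would be one LES velocity or progress-variable snapshot flattened to a vector.

```python
import numpy as np

rng = np.random.default_rng(2)
n_points, n_snapshots = 5000, 200
snapshots = rng.standard_normal((n_points, n_snapshots))   # placeholder for u(x, t_k) data

fluct = snapshots - snapshots.mean(axis=1, keepdims=True)  # subtract the temporal mean field

# Economy-size SVD of the snapshot matrix: columns of U are spatial POD modes,
# s**2 is proportional to the modal energy, and rows of Vt give the temporal coefficients.
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)

print("energy fraction of first 5 modes:", np.round(energy[:5], 3))
# Temporal coefficient of mode k: a_k(t) = s[k] * Vt[k, :]
```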
Turbulent flame-wall interaction: a DNS study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jackie; Hawkes, Evatt R; Sankaran, Ramanan
2010-01-01
A turbulent flame-wall interaction (FWI) configuration is studied using three-dimensional direct numerical simulation (DNS) and detailed chemical kinetics. The simulations are used to investigate the effects of the wall turbulent boundary layer (i) on the structure of a hydrogen-air premixed flame, (ii) on its near-wall propagation characteristics and (iii) on the spatial and temporal patterns of the convective wall heat flux. Results show that the local flame thickness and propagation speed vary between the core flow and the boundary layer, resulting in a regime change from flamelet near the channel centreline to a thickened flame at the wall. This finding has strong implications for the modelling of turbulent combustion using Reynolds-averaged Navier-Stokes or large-eddy simulation techniques. Moreover, the DNS results suggest that the near-wall coherent turbulent structures play an important role on the convective wall heat transfer by pushing the hot reactive zone towards the cold solid surface. At the wall, exothermic radical recombination reactions become important, and are responsible for approximately 70% of the overall heat release rate at the wall. Spectral analysis of the convective wall heat flux provides an unambiguous picture of its spatial and temporal patterns, previously unobserved, that is directly related to the spatial and temporal characteristic scalings of the coherent near-wall turbulent structures.
NASA Technical Reports Server (NTRS)
Selle, L. C.; Bellan, Josette
2006-01-01
Transitional databases from direct numerical simulation (DNS) of three-dimensional mixing layers for single-phase flows and two-phase flows with evaporation are analyzed and used to examine the typical hypothesis that the scalar-dissipation probability density function (PDF) may be modeled as a Gaussian. The databases encompass a single-component fuel and four multicomponent fuels, two initial Reynolds numbers (Re), two mass loadings for two-phase flows and two free-stream gas temperatures. Using the DNS-calculated moments of the scalar-dissipation PDF, it is shown, consistent with existing experimental information on single-phase flows, that the Gaussian is a modest approximation of the DNS-extracted PDF, particularly poor in the range of high scalar-dissipation values, which are significant for turbulent reaction rate modeling in non-premixed flows using flamelet models. With the same DNS-calculated moments of the scalar-dissipation PDF and making a change of variables, a model of this PDF is proposed in the form of the β-PDF, which is shown to approximate the DNS-extracted PDF much better, particularly in the regime of high scalar-dissipation values. Several types of statistical measures are calculated over the ensemble of the fourteen databases. For each statistical measure, the proposed β-PDF model is shown to be much superior to the Gaussian in approximating the DNS-extracted PDF. Additionally, the agreement between the DNS-extracted PDF and the β-PDF improves further for higher initial-Re layers, whereas the comparison with the Gaussian is independent of the initial Re values. For two-phase flows, the comparison between the DNS-extracted PDF and the β-PDF also improves with increasing free-stream gas temperature and mass loading. The higher-fidelity approximation of the DNS-extracted PDF by the β-PDF with increasing Re, gas temperature and mass loading bodes well for turbulent reaction rate modeling.
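A sketch of the moment-matching step implied above: given the first two moments of a scalar dissipation rate mapped onto [0, 1] by a change of variables, fit the β-PDF parameters. The surrogate lognormal samples and the particular mapping (division by the sample maximum) are illustrative assumptions only, not the change of variables used in the paper.

```python
import numpy as np
from scipy.stats import beta

chi = np.random.default_rng(3).lognormal(mean=0.0, sigma=1.0, size=100000)  # surrogate "DNS" data
x = chi / chi.max()                        # illustrative change of variables onto [0, 1]

m, v = x.mean(), x.var()
a = m * (m * (1.0 - m) / v - 1.0)          # beta shape parameters from mean and variance
b = a * (1.0 - m) / m
print(f"fitted beta parameters: a = {a:.3f}, b = {b:.3f}")

# The fitted density can then be compared against a histogram of the samples, particularly
# in the high-dissipation tail that matters for flamelet reaction-rate modelling.
xs = np.linspace(0.01, 0.5, 5)
print(beta.pdf(xs, a, b))
```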
Structure of Soot-Containing Laminar Jet Diffusion Flames
NASA Technical Reports Server (NTRS)
Mortazavi, S.; Sunderland, P. B.; Jurng, J.; Koylu, U. O.; Faeth, G. M.
1993-01-01
The structure and soot properties of nonbuoyant and weakly buoyant round jet diffusion flames were studied, considering ethylene, propane and acetylene burning in air at pressures of 0.125-2.0 atm. Measurements of flame structure included radiative heat loss fractions, flame shapes and temperature distributions in the fuel-lean (overfire) region. These measurements were used to evaluate flame structure predictions based on the conserved-scalar formalism in conjunction with the laminar flamelet concept, finding good agreement between predictions and measurements. Soot property measurements included laminar smoke points, soot volume fraction distributions using laser extinction, and soot structure using thermophoretic sampling and analysis by transmission electron microscopy. Nonbuoyant flames were found to exhibit laminar smoke points like buoyant flames, but their properties are very different; in particular, nonbuoyant flames have laminar smoke point flame lengths and residence times that are shorter and longer, respectively, than buoyant flames.
DNS of High Pressure Supercritical Combustion
NASA Astrophysics Data System (ADS)
Chong, Shao Teng; Raman, Venkatramanan
2016-11-01
Supercritical flows have always been important to rocket motors, and more recently to aircraft engines and stationary gas turbines. The purpose of the present study is to understand the effects of differential diffusion on reacting scalars using supercritical isotropic turbulence. The focus is on fuel and oxidant reacting in the transcritical region, where density, heat capacity and transport properties are highly sensitive to variations in temperature and pressure. The Reynolds and Damkohler numbers vary as a result, and although it is common to neglect differential diffusion effects if Re is sufficiently large, the large variation in temperature with heat release can accentuate molecular transport differences. Direct numerical simulations (DNS) of a one-step chemical reaction between fuel and oxidizer are used to examine the differential diffusion effects. A key issue investigated in this paper is whether the flamelet progress variable approach, in which the Lewis number is usually assumed to be unity and constant for all species, can be accurately applied to simulate supercritical combustion.
Laminar and Turbulent Gaseous Diffusion Flames. Appendix C
NASA Technical Reports Server (NTRS)
Faeth, G. M.; Urban, D. L. (Technical Monitor); Yuan, Z.-G. (Technical Monitor)
2001-01-01
Recent measurements and predictions of the properties of homogeneous (gaseous) laminar and turbulent non-premixed (diffusion) flames are discussed, emphasizing results from both ground- and space-based studies at microgravity conditions. Initial considerations show that effects of buoyancy not only complicate the interpretation of observations of diffusion flames but at times mislead when such results are applied to the non-buoyant diffusion flame conditions of greatest practical interest. This behavior motivates consideration of experiments where effects of buoyancy are minimized; therefore, methods of controlling the intrusion of buoyancy during observations of non-premixed flames are described, considering approaches suitable for both normal laboratory conditions as well as classical microgravity techniques. Studies of laminar flames at low-gravity and microgravity conditions are emphasized in view of the computational tractability of such flames for developing methods of predicting flame structure as well as the relevance of such flames to more practical turbulent flames by exploiting laminar flamelet concepts.
NASA Astrophysics Data System (ADS)
Ghose, Prakash; Patra, Jitendra; Datta, Amitava; Mukhopadhyay, Achintya
2016-05-01
Combustion of a kerosene fuel spray has been numerically simulated in a laboratory-scale combustor geometry to predict soot and the effects of thermal radiation at different swirl levels of the primary air flow. The two-phase motion in the combustor is simulated using an Eulerian-Lagrangian formulation considering the stochastic separated flow model. The Favre-averaged governing equations are solved for the gas phase with the turbulent quantities simulated by the realisable k-ɛ model. The injection of the fuel is considered through a pressure swirl atomiser and the combustion is simulated by a laminar flamelet model with detailed kinetics of kerosene combustion. Soot formation in the flame is predicted using an empirical model with the model parameters adjusted for kerosene fuel. Contributions of the gas phase and soot towards thermal radiation have been considered to predict the incident heat flux on the combustor wall and fuel injector. Swirl in the primary flow significantly influences the flow and flame structures in the combustor. The stronger recirculation at high swirl draws more air into the flame region, reduces the flame length and peak flame temperature and also brings the soot-laden zone closer to the inlet plane. As a result, the radiative heat flux on the peripheral wall decreases at high swirl and also shifts closer to the inlet plane. However, increased swirl increases the combustor wall temperature due to radial spreading of the flame. The high incident radiative heat flux and the high surface temperature make the fuel injector a critical item in the combustor. The injector peak temperature increases with the increase in swirl mainly because the flame is located closer to the inlet plane. On the other hand, a more uniform temperature distribution in the exhaust gas can be attained at the combustor exit at high swirl conditions.
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The application of the PDF method showed favorable results when applied to several supersonic-diffusion flames and spray flames. The EUPDF source code will be available with the National Combustion Code (NCC) as a complete package.
The dynamics of turbulent premixed flames: Mechanisms and models for turbulence-flame interaction
NASA Astrophysics Data System (ADS)
Steinberg, Adam M.
The use of turbulent premixed combustion in engines has been garnering renewed interest due to its potential to reduce NOx emissions. However, there are many aspects of turbulence-flame interaction that must be better understood before such flames can be accurately modeled. The focus of this dissertation is to develop an improved understanding of the manner in which turbulence interacts with a premixed flame in the 'thin flamelet regime'. To do so, two new diagnostics were developed and employed in a turbulent slot Bunsen flame. These diagnostics, Cinema-Stereoscopic Particle Image Velocimetry and Orthogonal-Plane Cinema-Stereoscopic Particle Image Velocimetry, provided temporally resolved velocity and flame surface measurements in two and three dimensions at rates of up to 3 kHz and spatial resolutions as low as 280 μm. Using these measurements, the mechanisms by which turbulence generates flame surface area were studied. It was found that the previous concept that flame stretch is characterized by counter-rotating vortex pairs does not accurately describe real turbulence-flame interactions. Analysis of the experimental data showed that the straining of the flame surface is determined by coherent structures of fluid-dynamic strain rate, while the wrinkling is caused by vortical structures. Furthermore, it was shown that the canonical vortex pair configuration is not an accurate reflection of the real interaction geometry. Hence, models developed based on this geometry are unlikely to be accurate. Previous models for the strain rate, curvature stretch rate, and turbulent burning velocity were evaluated. It was found that the previous models did not accurately predict the measured data for a variety of reasons: the assumed interaction geometries did not encompass enough possibilities to describe the possible effects of real turbulence, the turbulence was not properly characterized, and the transport of flame surface area was not always considered. New models were therefore developed that accurately reflect real turbulence-flame interactions and agree with the measured data. These can be implemented in Large Eddy Simulations to provide improved modeling of turbulence-flame interaction.
Influence of thermal radiation on soot production in Laminar axisymmetric diffusion flames
NASA Astrophysics Data System (ADS)
Demarco, R.; Nmira, F.; Consalvi, J. L.
2013-05-01
The aim of this paper is to study the effect of radiative heat transfer on soot production in laminar axisymmetric diffusion flames. Twenty-four C1-C3 hydrocarbon-air flames, consisting of normal (NDF) and inverse (IDF) diffusion flames at both normal gravity (1 g) and microgravity (0 g), and covering a wide range of conditions affecting radiative heat transfer, were simulated. The numerical model is based on the Steady Laminar Flamelet (SLF) model, a semi-empirical two-equation acetylene/benzene-based soot model, and the Statistical Narrow Band Correlated-K (SNBCK) model coupled to the Finite Volume Method (FVM) to compute thermal radiation. Predictions of velocity, temperature, soot volume fraction and radiative losses are on the whole in good agreement with the available experimental data. Model results show that, for all the flames considered, thermal radiation is a crucial process for providing accurate predictions of temperatures and soot concentrations. It becomes increasingly significant from IDFs to NDFs and its influence is much greater as gravity is reduced. The radiative contribution of the gas prevails in the weakly sooting IDFs and in the methane and ethane NDFs, whereas soot radiation dominates in the other flames. However, both contributions are significant in all cases, with the exception of the 1 g IDFs investigated, where soot radiation can be ignored. The optically thin approximation (OTA) was also tested and found to be applicable as long as the optical thickness, based on the flame radius and the Planck mean absorption coefficient, is less than 0.05. The OTA is reasonable for the IDFs and for most of the 1 g NDFs, but it fails to predict the radiative heat transfer for the 0 g NDFs. The accuracy of radiative-property models was then assessed in the latter cases. Simulations show that the gray approximation can be applied to soot but not to combustion gases. Both the non-gray and gray-soot versions of the Full-Spectrum Correlated-k (FSCK) model can then be substituted for the SNBCK, with a reduction in CPU time by a factor of about 20 in the latter case.
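A minimal sketch of the optically thin approximation and the applicability check quoted above: the OTA volumetric loss is 4 κ_P σ (T⁴ − T∞⁴), and it is used only when the optical thickness κ_P R is small. The 0.05 threshold is taken from the abstract; the values of κ_P, R and T below are assumptions for illustration.

```python
SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant [W/m^2/K^4]

def ota_loss(kappa_P, T, T_inf=300.0):
    """Optically thin volumetric radiative loss [W/m^3]."""
    return 4.0 * kappa_P * SIGMA_SB * (T**4 - T_inf**4)

kappa_P = 0.8    # assumed Planck-mean absorption coefficient [1/m]
R = 0.02         # assumed characteristic flame radius [m]
tau = kappa_P * R

if tau < 0.05:
    print(f"tau = {tau:.3f}: OTA acceptable, q = {ota_loss(kappa_P, 1900.0):.3e} W/m^3")
else:
    print(f"tau = {tau:.3f}: self-absorption should be modelled (e.g. SNBCK/FSCK)")
```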
NASA Astrophysics Data System (ADS)
Swaminathan, N.; Bilger, R. W.
2001-09-01
Characteristics of the scalar dissipation rate, N, of a progress variable, c, based on temperature in turbulent H2-air premixed flames are studied via direct numerical simulation with complex chemical kinetics for a range of flow/flame conditions (Baum et al 1994 J. Fluid Mech. 281 1). The flames are in the usually designated wrinkled-flamelet and well-stirred reactor regimes. The normalized conditional average, N_ζ^+, is observed to be higher than the corresponding planar laminar value because of strain thinning and the augmentation of laminar transport by turbulence within the flame front. Also, N_ζ^+ varies strongly across the flame brush when u'/S_L is high. N has a log-normal distribution when u'/S_L is small and has a long negative tail for cases where u'/S_L is large. In the flame with φ = 0.5, the Favre-averaged ratio Ñ_ζ^+/Ñ^+ shows some sensitivity to P_ζ, and the sensitivity seems to be weak in the φ = 0.35 flame. The effect of turbulence on ⟨ζ⟩ is observed to be marginal. The conditional diffusion and the conditional dilatation, ⟨∇ · u|ζ⟩, peak on the unburnt side of the flame front and are higher than the corresponding laminar flame values in all cases. The inter-relationship among the conditional dissipation, diffusion, dilatation and velocity is discussed. A model for the conditional velocity ⟨u|ζ⟩ obtained from the conditional dilatation is found not to perform as well as a linear model. The above results are limited, however, because the flow field is two-dimensional, hydrogen is used as the fuel, the range of dynamic length scales is small and the sample size is small.
Suppression of Soot Formation and Shapes of Laminar Jet Diffusion Flames
NASA Technical Reports Server (NTRS)
Xu, F.; Dai, Z.; Faeth, G. M.
2001-01-01
Laminar nonpremixed (diffusion) flames are of interest because they provide model flame systems that are far more tractable for analysis and experiments than practical turbulent flames. In addition, many properties of laminar diffusion flames are directly relevant to turbulent diffusion flames using laminar flamelet concepts. Finally, laminar diffusion flame shapes have been of interest since the classical study of Burke and Schumann because they involve a simple nonintrusive measurement that is convenient for evaluating flame shape predictions. Motivated by these observations, the shapes of round hydrocarbon-fueled laminar jet diffusion flames were considered, emphasizing conditions where effects of buoyancy are small because most practical flames are not buoyant. Earlier studies of shapes of hydrocarbon-fueled nonbuoyant laminar jet diffusion flames considered combustion in still air and have shown that flames at the laminar smoke point are roughly twice as long as corresponding soot-free (blue) flames and have developed simple ways to estimate their shapes. Corresponding studies of hydrocarbon-fueled weakly-buoyant laminar jet diffusion flames in coflowing air have also been reported. These studies were limited to soot-containing flames at laminar smoke point conditions and also developed simple ways to estimate their shapes but the behavior of corresponding soot-free flames has not been addressed. This is unfortunate because ways of selecting flame flow properties to reduce soot concentrations are of great interest; in addition, soot-free flames are fundamentally important because they are much more computationally tractable than corresponding soot-containing flames. Thus, the objectives of the present investigation were to observe the shapes of weakly-buoyant laminar jet diffusion flames at both soot-free and smoke point conditions and to use the results to evaluate simplified flame shape models. The present discussion is brief.
Shapes of Buoyant and Nonbuoyant Methane Laminar Jet Diffusion Flames
NASA Technical Reports Server (NTRS)
Sunderland, Peter B.; Yuan, Zeng-Guang; Urban, David L.
1997-01-01
Laminar gas jet diffusion flames represent a fundamental combustion configuration. Their study has contributed to numerous advances in combustion, including the development of analytical and computational combustion tools. Laminar jet flames are pertinent also to turbulent flames by use of the laminar flamelet concept. Investigations into the shapes of noncoflowing microgravity laminar jet diffusion flames have primarily been pursued in the NASA Lewis 2.2-second drop tower, by Cochran and coworkers and by Bahadori and coworkers. These studies were generally conducted at atmospheric pressure; they involved soot-containing flames and reported luminosity lengths and widths instead of the flame-sheet dimensions which are of greater value to theory evaluation and development. The seminal model of laminar diffusion flames is that of Burke and Schumann, who solved the conservation of momentum equation for a jet flame in a coflowing ambient by assuming the velocity of fuel, oxidizer and products to be constant throughout. Roper and coworkers improved upon this model by allowing for axial variations of velocity and found flame shape to be independent of coflow velocity. Roper's suggestion that flame height should be independent of gravity level is not supported by past or present observations. Other models have been presented by Klajn and Oppenheim, Markstein and De Ris, Villermaux and Durox, and Li et al. The common result of all these models (except in the buoyant regime) is that flame height is proportional to fuel mass flowrate, with flame width proving much more difficult to predict. Most existing flame models have been compared with shapes of flames containing soot, which is known to obscure the weak blue emission of flame sheets. The present work involves measurements of laminar gas jet diffusion flame shapes. Flame images have been obtained for buoyant and nonbuoyant methane flames burning in quiescent air at various fuel flow-rates, burner diameters and ambient pressures. Soot concentrations were minimized by selecting conditions at low flowrates and low ambient pressures; this allows identification of actual flame sheets associated with blue emissions of CH and CO2. The present modeling effort follows that of Roper and is useful in explaining many of the trends observed.
NASA Astrophysics Data System (ADS)
Jha, Pradeep Kumar
Capturing the effects of detailed-chemistry on turbulent combustion processes is a central challenge faced by the numerical combustion community. However, the inherent complexity and non-linear nature of both turbulence and chemistry require that combustion models rely heavily on engineering approximations to remain computationally tractable. This thesis proposes a computationally efficient algorithm for modelling detailed-chemistry effects in turbulent diffusion flames and numerically predicting the associated flame properties. The cornerstone of this combustion modelling tool is the use of a parallel Adaptive Mesh Refinement (AMR) scheme with the recently proposed Flame Prolongation of Intrinsic low-dimensional manifold (FPI) tabulated-chemistry approach for modelling complex chemistry. The effect of turbulence on the mean chemistry is incorporated using a Presumed Conditional Moment (PCM) approach based on a beta-probability density function (PDF). The two-equation k-ω turbulence model is used for modelling the effects of the unresolved turbulence on the mean flow field. The finite-rate chemistry of methane-air combustion is represented here using the GRI-Mech 3.0 scheme. This detailed mechanism is used to build the FPI tables. A state-of-the-art numerical scheme based on a parallel block-based solution-adaptive algorithm has been developed to solve the Favre-averaged Navier-Stokes (FANS) and other governing partial-differential equations using a second-order accurate, fully-coupled finite-volume formulation on body-fitted, multi-block, quadrilateral/hexahedral meshes for two-dimensional and three-dimensional flow geometries, respectively. A standard fourth-order Runge-Kutta time-marching scheme is used for time-accurate temporal discretizations. Numerical predictions of three different diffusion flame configurations are considered in the present work: a laminar counter-flow flame; a laminar co-flow diffusion flame; and a Sydney bluff-body turbulent reacting flow. Comparisons are made between the predicted results of the present FPI scheme and the Steady Laminar Flamelet Model (SLFM) approach for diffusion flames. The effects of grid resolution on the predicted overall flame solutions are also assessed. Other non-reacting flows have also been considered to further validate other aspects of the numerical scheme. The present schemes predict results that are in good agreement with published experimental results and reduce the computational cost involved in modelling turbulent diffusion flames significantly, both in terms of storage and processing time.
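The PCM closure described above obtains mean thermo-chemical quantities by convolving the FPI table with a presumed beta-PDF built from the first two resolved moments. The sketch below illustrates that convolution for a one-dimensional table; the function names, table layout, and quadrature are illustrative assumptions rather than the thesis implementation.

import numpy as np
from scipy.stats import beta as beta_dist

def presumed_beta_mean(table_c, table_phi, c_mean, c_var):
    """Mean of a tabulated quantity phi(c) under a presumed beta-PDF of c.

    table_c, table_phi : 1-D arrays defining an FPI-style table phi(c), c in [0, 1]
    c_mean, c_var      : resolved mean and variance of the progress variable
    All names and the quadrature are illustrative; the thesis implementation differs.
    """
    eps = 1.0e-9
    c_mean = min(max(c_mean, eps), 1.0 - eps)
    c_var = min(c_var, c_mean * (1.0 - c_mean) * (1.0 - eps))
    if c_var < eps:                      # negligible fluctuations: direct lookup
        return float(np.interp(c_mean, table_c, table_phi))

    # Beta-PDF shape parameters from the first two moments
    gamma = c_mean * (1.0 - c_mean) / c_var - 1.0
    a, b = c_mean * gamma, (1.0 - c_mean) * gamma

    # Quadrature of  phi_mean = integral of phi(c) * P(c; a, b) dc
    c = np.linspace(eps, 1.0 - eps, 401)
    pdf = beta_dist.pdf(c, a, b)
    phi = np.interp(c, table_c, table_phi)
    return float(np.sum(phi * pdf) / np.sum(pdf))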
NOx-abatement potential of lean-premixed GT combustors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattelmayer, T.; Polifke, W.; Winkler, D.
1998-01-01
The influence of the structure of perfectly premixed flames on NOx formation is investigated theoretically. Since a network of reaction kinetics modules and model flames is used for this purpose, the results obtained are independent of specific burner geometries. Calculations are presented for a mixture temperature of 630 K, an adiabatic flame temperature of 1840 K, and 1 and 15 bars combustor pressure. In particular, the following effects are studied separately from each other: molecular diffusion of temperature and species, flame strain, local quench in highly strained flames and subsequent reignition, turbulent diffusion (no preferential diffusion), and small scale mixing (stirring) in the flame front. These effects either show no relevant influence or increase NOx formation; the implication for burner design is to avoid excessive turbulent stirring in the flame front. Turbulent flames that exhibit locally and instantaneously near-laminar structures (flamelets) appear to be optimal. Using the same methodology, the scope of the investigation is extended to lean-lean staging, since a higher NOx-abatement potential can be expected in principle. As long as the chemical reactions of the second stage take place in the boundary between the fresh mixture of the second stage and the combustion products from upstream, no advantage can be expected from lean-lean staging. Only if the preliminary burner exhibits much poorer mixing than the second stage can lean-lean staging be beneficial. In contrast, if full mixing between the two stages prior to afterburning can be achieved (lean-mix-lean technique), the combustor outlet temperature can in principle be increased somewhat without a NOx penalty.
NASA Technical Reports Server (NTRS)
Sunderland, P. B.; Lin, K.-C.; Faeth, G. M.
1995-01-01
Soot processes within hydrocarbon fueled flames are important because they affect the durability and performance of propulsion systems, the hazards of unwanted fires, the pollutant and particulate emissions from combustion processes, and the potential for developing computational combustion. Motivated by these observations, the present investigation is studying soot processes in laminar diffusion and premixed flames in order to better understand the soot and thermal radiation emissions of luminous flames. Laminar flames are being studied due to their experimental and computational tractability, noting the relevance of such results to practical turbulent flames through the laminar flamelet concept. Weakly-buoyant and nonbuoyant laminar diffusion flames are being considered because buoyancy affects soot processes in flames while most practical flames involve negligible effects of buoyancy. Thus, low-pressure weakly-buoyant flames are being observed during ground-based experiments while near atmospheric pressure nonbuoyant flames will be observed during space flight experiments at microgravity. Finally, premixed laminar flames also are being considered in order to observe some aspects of soot formation for simpler flame conditions than diffusion flames. The main emphasis of current work has been on measurements of soot nucleation and growth in laminar diffusion and premixed flames.
Turbulent Flame Propagation Characteristics of High Hydrogen Content Fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seitzman, Jerry; Lieuwen, Timothy
2014-09-30
This final report describes the results of an effort to better understand turbulent flame propagation, especially at conditions relevant to gas turbines employing fuels with syngas or hydrogen mixtures. Turbulent flame speeds were measured for a variety of hydrogen/carbon monoxide (H2/CO) and hydrogen/methane (H2/CH4) fuel mixtures with air as the oxidizer. The measurements include global consumption speeds (ST,GC) acquired in a turbulent jet flame at pressures of 1-10 atm and local displacement speeds (ST,LD) acquired in a low-swirl burner at atmospheric pressure. The results verify the importance of fuel composition in determining turbulent flame speeds. For example, different fuel-air mixtures having the same unstretched laminar flame speed (SL,0) but different fuel compositions resulted in significantly different ST,GC for the same turbulence levels (u'). This demonstrates the weakness of turbulent flame speed correlations based simply on u'/SL,0. The results were analyzed using a quasi-steady leading points concept to explain the sensitivity of turbulent burning rates to fuel (and oxidizer) composition. Leading point theories suggest that the premixed turbulent flame speed is controlled by the flame front characteristics at the flame brush leading edge, or, in other words, by the flamelets that advance farthest into the unburned mixture (the so-called leading points). For negative Markstein length mixtures, this is assumed to be close to the maximum stretched laminar flame speed (SL,max) for the given fuel-oxidizer mixture. For the ST,GC measurements, the data at a given pressure were well-correlated with an SL,max scaling. However, the variation with pressure was not captured, which may be due to non-quasi-steady effects that are not included in the current model. For the ST,LD data, the leading points model again faithfully captured the variation of turbulent flame speed over a wide range of fuel compositions and turbulence intensities. These results provide evidence that the leading points model can provide useful predictions of turbulent flame speed over a wide range of operating conditions and flow geometries.
Three-dimensional Numerical Simulations of Rayleigh-Taylor Unstable Flames in Type Ia Supernovae
NASA Astrophysics Data System (ADS)
Zingale, M.; Woosley, S. E.; Rendleman, C. A.; Day, M. S.; Bell, J. B.
2005-10-01
Flame instabilities play a dominant role in accelerating the burning front to a large fraction of the speed of sound in a Type Ia supernova. We present a three-dimensional numerical simulation of a Rayleigh-Taylor unstable carbon flame, following its evolution through the transition to turbulence. A low-Mach number hydrodynamics method is used, freeing us from the harsh time step restrictions imposed by sound waves. We fully resolve the thermal structure of the flame and its reaction zone, eliminating the need for a flame model. A single density is considered, 1.5×10^7 g cm^-3, and half-carbon, half-oxygen fuel: conditions under which the flame propagated in the flamelet regime in our related two-dimensional study. We compare to a corresponding two-dimensional simulation and show that while fire polishing keeps the small features suppressed in two dimensions, turbulence wrinkles the flame on far smaller scales in the three-dimensional case, suggesting that the transition to the distributed burning regime occurs at higher densities in three dimensions. Detailed turbulence diagnostics are provided. We show that the turbulence follows a Kolmogorov spectrum and is highly anisotropic on the large scales, with a much larger integral scale in the direction of gravity. Furthermore, we demonstrate that it becomes more isotropic as it cascades down to small scales. On the basis of the turbulent statistics and the flame properties of our simulation, we compute the Gibson scale. We show the progress of the turbulent flame through a classic combustion regime diagram, indicating that the flame just enters the distributed burning regime near the end of our simulation.
Large Eddy Simulation of Engineering Flows: A Bill Reynolds Legacy.
NASA Astrophysics Data System (ADS)
Moin, Parviz
2004-11-01
The term large eddy simulation (LES) was coined by Bill Reynolds thirty years ago when he and his colleagues pioneered the introduction of LES in the engineering community. Bill's legacy in LES features his insistence on having a proper mathematical definition of the large scale field independent of the numerical method used, and his vision for using numerical simulation output as data for research in turbulence physics and modeling, just as one would think of using experimental data. However, as an engineer, Bill was predominantly interested in the predictive capability of computational fluid dynamics and in particular LES. In this talk I will present the state of the art in large eddy simulation of complex engineering flows. Most of this technology has been developed in the Department of Energy's ASCI Program at Stanford which was led by Bill in the last years of his distinguished career. At the core of this technology is a fully implicit non-dissipative LES code which uses unstructured grids with arbitrary elements. A hybrid Eulerian/Lagrangian approach is used for multi-phase flows, and chemical reactions are introduced through dynamic equations for mixture fraction and reaction progress variable in conjunction with flamelet tables. The predictive capability of LES is demonstrated in several validation studies in flows with complex physics and complex geometry including flow in the combustor of a modern aircraft engine. LES in such a complex application is only possible through efficient utilization of modern parallel supercomputers, which was recognized and emphasized by Bill from the beginning. The presentation will include a brief mention of computer science efforts for efficient implementation of LES.
NASA Astrophysics Data System (ADS)
Mbagwu, Chukwuka Chijindu
High speed, air-breathing hypersonic vehicles encounter a varied range of engine and operating conditions traveling along cruise/ascent missions at high altitudes and dynamic pressures. Variations of ambient pressure, temperature, Mach number, and dynamic pressure can affect the combustion conditions in conflicting ways. Computations were performed to understand propulsion tradeoffs that occur when a hypersonic vehicle travels along an ascent trajectory. Proper Orthogonal Decomposition methods were applied for the reduction of flamelet chemistry data in an improved combustor model. Two operability limits are set by requirements that combustion efficiency exceed selected minima and flameout be avoided. A method for flameout prediction based on empirical Damkohler number measurements is presented. Operability limits are plotted that define allowable flight corridors on an altitude versus flight Mach number performance map; fixed-acceleration ascent trajectories were considered for this study. Several design rules are also presented for a hypersonic waverider with a dual-mode scramjet engine. Focus is placed on "vehicle integration" design, differing from previous "propulsion-oriented" design optimization. A well-designed waverider falls between an aircraft (high lift-to-drag ratio) and a rocket (high thrust-to-drag ratio). A total of 84 variations of an X-43-like vehicle were run using the MASIV scramjet reduced order model to examine performance tradeoffs. Informed by the vehicle design study, variable-acceleration trajectory optimization was performed for three constant-dynamic-pressure ascents. Computed flameout operability limits were implemented as additional constraints to the optimization problem. The Michigan-AFRL Scramjet In-Vehicle (MASIV) waverider model includes finite-rate chemistry, applied scaling laws for 3-D turbulent mixing, ram-scram transition and an empirical value of the flameout Damkohler number. A reduced-order modeling approach is justified (in lieu of higher-fidelity computational simulations) because all vehicle forces are computed multiple thousands of times to generate multi-dimensional performance maps. The findings of this thesis work present a number of compelling conclusions. It is found that the ideal operating conditions of a scramjet engine are heavily dependent on the ambient and combustor pressure (and less strongly on temperature). Combustor pressures of approximately 1.0 bar or greater achieve the highest combustion efficiency, in line with industry standards of more than 0.5 bar. Ascent trajectory analysis of combustion efficiency and lean-limit flameout dictates best operation at higher dynamic pressures and lower altitudes, but these goals are traded off by current structural limitations whereby dynamic pressures must remain below 100 kPa. Hypersonic waverider designs varied between an "airplane" and a "rocket" are found to have better performance with the latter design, with controllability and minimum elevon/rudder surface area as a stability constraint for the vehicle trim. Ultimately, these findings are beneficial and contribute to the overall understanding of dynamically stable waverider vehicles at hypersonic speeds. These types of vehicles have a range of applications from technology demonstration, to earth-to-low-orbit payload transit, and, most compellingly, to another step in the development and realization of viable supersonic commercial transport.
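The flameout criterion mentioned above amounts to comparing a flow Damkohler number against an empirically measured critical value. A minimal sketch of that comparison is given below; the residence-time and chemical-time definitions and the critical value used in MASIV are not reproduced here, so all names are illustrative.

def damkohler(t_residence, t_chemical):
    """Damkohler number: ratio of flow residence time to chemical time."""
    return t_residence / t_chemical

def flameout_predicted(t_residence, t_chemical, da_critical):
    """Flameout is predicted when Da falls below an empirical critical value.

    da_critical stands in for the empirically measured flameout Damkohler
    number described in the abstract; its value is not given here.
    """
    return damkohler(t_residence, t_chemical) < da_critical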
Pre-mixed flame simulations for non-unity Lewis numbers
NASA Technical Reports Server (NTRS)
Rutland, C. J.; Trouve, A.
1990-01-01
A principal effect of turbulence on premixed flames in the flamelet regime is to wrinkle the flame fronts. For non-unity Lewis numbers (Le), the local flame structure is altered in curved regions. This effect is examined using direct numerical simulations of three-dimensional, constant-density, decaying isotropic turbulence with a single-step, finite-rate chemical reaction. Simulations of Lewis numbers 0.8, 1.0, and 1.2 are compared. The turbulent flame speed, S_T, increases as Le decreases. The correlation between S_T and u' found in previous Le = 1 simulations has a strong Lewis number dependency. The variance of the pdf of the flame curvature increases as Le decreases, indicating that the flames become more wrinkled. A strong correlation between local flame speed and curvature was found. For Le greater than 1, the flame speed increases in regions concave towards the products and decreases in convex regions. The opposite correlation was found for Le less than 1. The mean temperature of the products was also found to vary with Lewis number. For Le = 0.8, it is less than the adiabatic flame temperature and for Le = 1.2 it is greater.
Buoyant Low Stretch Diffusion Flames Beneath Cylindrical PMMA Samples
NASA Technical Reports Server (NTRS)
Olson, S. L.; Tien, J. S.
1999-01-01
A unique new way to study low gravity flames in normal gravity has been developed. To study flame structure and extinction characteristics in low stretch environments, a normal gravity low-stretch diffusion flame is generated using cylindrical PMMA samples of varying large radii. Burning rates, visible flame thickness, visible flame standoff distance, temperature profiles in the solid and gas, and radiative loss from the system were measured. A transition from the blowoff side of the flammability map to the quenching side of the flammability map is observed at a stretch rate of approximately 6-7 s^-1, as determined by curve fits to the non-monotonic trends in peak temperatures, solid and gas-phase temperature gradients, and non-dimensional standoff distances. A surface energy balance reveals that the fraction of heat transfer from the flame that is lost to in-depth conduction and surface radiation increases with decreasing stretch until quenching extinction is observed. This is primarily due to decreased heat transfer from the flame, while the magnitude of the losses remains the same. A unique local extinction flamelet phenomenon and associated pre-extinction oscillations are observed at very low stretch. An ultimate quenching extinction limit is found at low stretch with sufficiently high induced heat losses.
NASA Technical Reports Server (NTRS)
1995-01-01
The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime, to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices such as extinction, blowoff limits, and emissions predictions because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy is determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gas-phase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF approach was extended in our previous work to the study of compressible reacting flows. The application of this method to several supersonic diffusion flames associated with scramjet combustor flow fields provided favorable comparisons with the available experimental data. A further extension of this approach to spray flames, three-dimensional computations, and parallel computing was reported in a recent paper. The recently developed PDF/SPRAY/computational fluid dynamics (CFD) module combines the novelty of the joint composition PDF approach with the ability to run on parallel architectures. This algorithm was implemented on the NASA Lewis Research Center's Cray T3D, a massively parallel computer with an aggregate of 64 processor elements. The calculation procedure was applied to predict the flow properties of both open and confined swirl-stabilized spray flames.
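To make the Monte Carlo solution of the composition PDF transport equation concrete, the sketch below advances an ensemble of notional particles through one mixing-plus-reaction fractional step. The IEM mixing model and all names are illustrative assumptions; the solver described above may use different closures and numerics.

import numpy as np

def pdf_particle_step(phi, phi_mean, omega_turb, reaction_rate, dt, c_phi=2.0):
    """One fractional step for notional particles in a composition-PDF Monte Carlo method.

    phi           : (n_particles, n_scalars) particle compositions (species + enthalpy)
    phi_mean      : (n_scalars,) local mean composition seen by the particles
    omega_turb    : turbulence frequency supplied by the underlying turbulence model
    reaction_rate : callable returning d(phi)/dt from the chemical source terms
    The IEM mixing model is used purely for illustration.
    """
    # Mixing: relax each particle towards the local mean (IEM closure)
    phi = phi - 0.5 * c_phi * omega_turb * (phi - phi_mean) * dt
    # Reaction: chemical source terms appear in closed form in the PDF method
    phi = phi + reaction_rate(phi) * dt
    return phi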
Measurements of turbulent premixed flame dynamics using cinema stereoscopic PIV
NASA Astrophysics Data System (ADS)
Steinberg, Adam M.; Driscoll, James F.; Ceccio, Steven L.
2008-06-01
A new experimental method is described that provides high-speed movies of turbulent premixed flame wrinkling dynamics and the associated vorticity fields. This method employs cinema stereoscopic particle image velocimetry and has been applied to a turbulent slot Bunsen flame. Three-component velocity fields were measured with high temporal and spatial resolutions of 0.9 ms and 140 μm, respectively. The flame-front location was determined using a new multi-step method based on particle image gradients, which is described. Comparisons are made between flame fronts found with this method and simultaneous CH-PLIF images. These show that the flame contour determined corresponds well to the true location of maximum gas density gradient. Time histories of typical eddy-flame interactions are reported and several important phenomena are identified. Outwardly rotating eddy pairs wrinkle the flame and are attenuated as they pass through the flamelet. Significant flame-generated vorticity is produced downstream of the wrinkled tip. Similar wrinkles are caused by larger groups of outwardly rotating eddies. Inwardly rotating pairs cause significant convex wrinkles that grow as the flame propagates. These wrinkles encounter other eddies that alter their behavior. The effects of the hydrodynamic and diffusive instabilities are observed and found to be significant contributors to the formation and propagation of wrinkles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, H.J.; Marro, M.A.T.; Smooke, M.
1994-12-31
In general, computation of laminar flame structure involves the simultaneous solution of the conservation equations for mass, energy, momentum, and chemical species. It has been proposed and confirmed in numerous experiments that flame species concentrations can be considered as functions of a conserved scalar (a quantity such as elemental mass fraction, that has no chemical source term). One such conserved scalar is the mixture fraction which is normalized to be zero in the air stream and one in the fuel stream. This allows the species conservation equations to be rewritten as a function of the mixture fraction (itself a conserved scalar) which significantly simplifies the calculation of flame structure. Despite the widespread acceptance that the conserved scalar description of diffusion flame structure has found in the combustion community, there has been surprisingly little effort expended in the development of a detailed evaluation of how well it actually works. In this presentation we compare the results of a "full" transport and chemical calculation performed by Smooke with the predictions of the conserved scalar approach. Our results show that the conserved scalar approach works because some species' concentrations are not dependent only on mixture fraction.
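A common way to write the conserved-scalar (mixture fraction) description referred to above, assuming equal diffusivities, is:
\[
  Z = \frac{\beta - \beta_{\mathrm{ox}}}{\beta_{\mathrm{fu}} - \beta_{\mathrm{ox}}}, \qquad Y_i \approx Y_i(Z),
\]
where $\beta$ is any conserved scalar (for example an elemental mass fraction), so that $Z = 0$ in the air stream and $Z = 1$ in the fuel stream, and each species mass fraction is modelled as a function of $Z$ alone.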
Flame stabilization and mixing characteristics in a Stagnation Point Reverse Flow combustor
NASA Astrophysics Data System (ADS)
Bobba, Mohan K.
A novel combustor design, referred to as the Stagnation Point Reverse-Flow (SPRF) combustor, was recently developed that is able to operate stably at very lean fuel-air mixtures and with low NOx emissions even when the fuel and air are not premixed before entering the combustor. The primary objective of this work is to elucidate the underlying physics behind the excellent stability and emissions performance of the SPRF combustor. The approach is to experimentally characterize velocities, species mixing, heat release and flame structure in an atmospheric pressure SPRF combustor with the help of various optical diagnostic techniques: OH PLIF, chemiluminescence imaging, PIV and Spontaneous Raman Scattering. Results indicate that the combustor is primarily stabilized in a region downstream of the injector that is characterized by low average velocities and high turbulence levels; this is also the region where most of the heat release occurs. High turbulence levels in the shear layer lead to increased product entrainment levels, elevating the reaction rates and thereby enhancing the combustor stability. The effect of product entrainment on chemical timescales and the flame structure is illustrated with simple reactor models. Although reactants are found to burn in a highly preheated (1300 K) and turbulent environment due to mixing with hot product gases, the residence times are sufficiently long compared to the ignition timescales such that the reactants do not autoignite. Turbulent flame structure analysis indicates that the flame is primarily in the thin reaction zones regime throughout the combustor, and it tends to become more flamelet-like with increasing distance from the injector. Fuel-air mixing measurements in case of non-premixed operation indicate that the fuel is shielded from hot products until it is fully mixed with air, providing nearly premixed performance without the safety issues associated with premixing. The reduction in NOx emissions in the SPRF combustor is primarily due to its ability to operate stably under ultra-lean (and nearly premixed) conditions within the combustor. Further, to extend the usefulness of this combustor configuration to various applications, combustor geometry scaling rules were developed with the help of simplified coaxial and opposed jet models.
An Experimental Investigation of Premixed Combustion in Extreme Turbulence
NASA Astrophysics Data System (ADS)
Wabel, Timothy Michael
This work has explored various aspects of high Reynolds number combustion that have received much previous speculation. A new high-Reynolds number premixed Bunsen burner, called Hi-Pilot, was designed to produce turbulence intensities in the extreme range of turbulence. The burner was modified several times in order to prevent boundary layer separation in the nozzle, and a large co-flow was designed that was capable of maintaining reactions over the entire flame surface. Velocity and turbulence characteristics were measured using a combination of Laser Doppler Velocimetry (LDV) and Particle Image Velocimetry (PIV). Flame structure was studied using a combination of formaldehyde (CH2O), hydroxyl (OH), and CH radical planar laser-induced fluorescence (PLIF). The spatial Overlap of formaldehyde and OH PLIF qualitatively measures the reaction rate between formaldehyde molecules and OH radicals, and is a measure of the reaction layers of the flame. CH PLIF provides an alternative measure of the reaction zone, and was measured to compare with the Overlap PLIF results. Reaction layers are the full-width at half-maximum of the Overlap or CH PLIF signal, and extinction events were defined as regions where the PLIF signal drops below this threshold. Preheat structures were measured using formaldehyde PLIF, and are defined as beginning at 35% of the local maximum PLIF signal and continuing up to the leading edge of the reaction layer. Previous predictions of regime diagram boundaries were tested at the largest values of turbulent Reynolds number to date. The Overlap and CH PLIF diagnostics allowed extensive testing of the predicted broken reaction zones boundary of Peters. Measurements indicated that all run conditions are in the Broadened Preheat - Thin Reaction layers regime, but several conditions are expected to display a broken reaction zone structure. Therefore the work shows that Peters's predicted boundary is not correct, and a Karlovitz number of 100 is not a valid criterion for broken reactions in the Bunsen geometry. Several measures of the turbulent burning velocity, including the global consumption speed and the extent of flamelet wrinkling, were measured at these conditions. Reaction layers for the burning velocity measurements were provided by the OH PLIF. The measurements showed that the global consumption speed continues to increase for all levels of turbulence intensity u'/SL. In contrast, the flame surface wrinkling rapidly increases the flame surface area for u'/SL < 10, but the flame surface area does not increase further at larger turbulence intensities. This indicates that the flame is not in the laminar flamelet regime, and the consumption rate per unit of flame surface area must be increased. The turbulent diffusivity is thought to be the mechanism enhancing the consumption rate, which is a scenario first hypothesized by Damkohler. The flame structure and burning velocity measurements motivated the measurements of the evolution of turbulence through regions of very thick preheat layers. This measurement utilized simultaneous PIV and formaldehyde PLIF in order to obtain conditioned statistics of the turbulence as a function of eta, the distance from the reaction layer. Together, the results tell a consistent story, and deepen our understanding of premixed combustion at large turbulent Reynolds number.
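The argument above can be summarised with the usual flame-surface decomposition of the turbulent burning velocity together with Damkohler's diffusivity-ratio hypothesis; the forms below are the textbook versions and are given only as a sketch of the reasoning.
\[
  \frac{S_{T}}{S_{L}} = I_{0}\,\frac{A_{T}}{A_{L}},
  \qquad
  \frac{S_{T}}{S_{L}} \sim \sqrt{\frac{D_{T}}{D}} = \sqrt{\frac{u'\,L}{S_{L}\,\delta_{F}}},
\]
where $A_{T}/A_{L}$ is the flame surface area ratio and $I_{0}$ the stretch factor (consumption rate per unit area relative to the laminar flame). If $A_{T}/A_{L}$ saturates while $S_{T}$ keeps rising, $I_{0}$ must increase, consistent with the turbulent-diffusivity mechanism described above.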
Excitable dynamics in high-Lewis number premixed gas combustion at normal and microgravity
NASA Technical Reports Server (NTRS)
Pearlman, Howard
1995-01-01
Freely-propagating, premixed gas flames in high-Lewis (Le) number, quiescent mixtures are studied experimentally in tubes of various diameter at normal (1g) and microgravity (mu g). A premixture of lean butane and oxygen diluted with helium, argon, neon, nitrogen or a mixture of multiple diluents is examined such that the thermal diffusivity of the mixture (and to a lesser extent, the mass diffusivity of the rate-limiting component) is systematically varied. In effect, different diluents allow variation of the Le without changing the chemistry. The flames are recorded with high speed cinematography and their stability is visually assessed. Different modes of propagation were observed depending on the diameter of the tubes (different conductive heat loss), the composition of the mixture and the g-level. At 1g, four modes of propagation were observed in small and intermediate diameter tubes (large conductive heat loss): (1) steadily propagating flames, (2) radial and longitudinal pulsating flames, (3) 'wavering' flames, and (4) rotating spiral flames. As the diameter of the tube increases, the radial modes become more pronounced while the longitudinal modes systematically disappear. Also, multiple, simultaneous, spatially-separated 'pacemaker' sites are observed in intermediate and large diameter tubes. Each site starts as a small region of high luminosity and develops into a flamelet which assumes the form of one of the aforementioned modes. These flamelets eventually interact, annihilate each other in their regions of intersection and merge at their newly created free-ends. For very large tubes, radially-propagating wave-trains (believed to be 'trigger waves') are observed. These are analogous to the radial pulsations observed in the smaller diameter tubes. At mu g, three modes of propagation have been observed: (1) steadily propagating flames, (2) radial and longitudinal pulsating flames, and (3) multi-armed, rotating flames. Since the pulsating mode exists at mu g and 1g, buoyant flicker is not the mechanism which drives the pulsations. Moreover, all of the instabilities at 1g and mu g have characteristic frequencies of O(100 Hz). This value is lower than the fundamental, longitudinal acoustic frequencies of the tubes which suggests that the instabilities are not acoustically driven. The patterns formed by this reaction bear remarkable similarities with the patterns formed in most excitable media when the behavior of the system is driven by couplings between chemical reaction and diffusion (e.g., Belousov-Zhabotinsky reaction, patterns in slime molds, spiral waves in the retina of a bird's eye). While it is recognized that the chemical mechanism associated with this premixed gas reaction is exponentially sensitive to temperature and undoubtedly different from those which govern previously observed excitable media (most are isothermal, or weakly exothermic, liquid phase reactions), similar spatial and temporal patterns should not come as a complete surprise considering heat and mass diffusion are self similar. It is concluded that this premixed gas system is a definitive example of a diffusive-thermal, gas-phase oscillator based on these experimental results and their favorable comparison with theory.
Large eddy simulation of forced ignition of an annular bluff-body burner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subramanian, V.; Domingo, P.; Vervisch, L.
2010-03-15
The optimization of the ignition process is a crucial issue in the design of many combustion systems. Large eddy simulation (LES) of a conical-shaped bluff-body turbulent nonpremixed burner has been performed to study the impact of spark location on ignition success. This burner was experimentally investigated by Ahmed et al. [Combust. Flame 151 (2007) 366-385]. The present work focuses on the case without swirl, for which detailed measurements are available. First, cold-flow measurements of velocities and mixture fractions are compared with their LES counterparts, to assess the prediction capabilities of simulations in terms of flow and turbulent mixing. Time histories of velocities and mixture fractions are recorded at selected spots, to probe the resolved probability density function (pdf) of flow variables, in an attempt to reproduce, from the knowledge of LES-resolved instantaneous flow conditions, the experimentally observed reasons for success or failure of spark ignition. A flammability map is also constructed from the resolved mixture fraction pdf and compared with its experimental counterpart. LES of forced ignition is then performed using flamelet-based fully detailed tabulated chemistry combined with presumed pdfs. Various scenarios of flame kernel development are analyzed and correlated with typical flow conditions observed in this burner. The correlations between velocities and mixture fraction values at the sparking time and the success or failure of ignition are then further discussed and analyzed.
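One simple way to build the flammability map described above from LES time histories is to compute, at each candidate spark location, the fraction of resolved mixture-fraction samples that fall within the flammable range. The sketch below illustrates that idea; the limits and the exact definition used by the authors may differ.

import numpy as np

def flammability_factor(z_samples, z_lean, z_rich):
    """Fraction of mixture-fraction samples inside the flammable range.

    z_samples      : time history of resolved mixture fraction at a probe location
    z_lean, z_rich : lean and rich flammability limits in mixture-fraction space
    """
    z = np.asarray(z_samples)
    return float(np.mean((z >= z_lean) & (z <= z_rich)))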
Experimental Investigation of Premixed Turbulent Hydrocarbon/Air Bunsen Flames
NASA Astrophysics Data System (ADS)
Tamadonfar, Parsa
Through the influence of turbulence, the front of a premixed turbulent flame is subjected to the motions of eddies that lead to an increase in the flame surface area, and the term flame wrinkling is commonly used to describe it. If it is assumed that the flame front would continue to burn locally unaffected by the stretch, then the total turbulent burning velocity is expected to increase proportionally to the increase in the flame surface area caused by wrinkling. When the turbulence intensity is high enough such that the stretch due to hydrodynamics and flame curvature would influence the local premixed laminar burning velocity, then the actual laminar burning velocity (that is, flamelet consumption velocity) should reflect the influence of stretch. To address this issue, obtaining the knowledge of instantaneous flame front structures, flame brush characteristics, and burning velocities of premixed turbulent flames is necessary. Two axisymmetric Bunsen-type burners were used to produce premixed turbulent flames, and three optical measurement techniques were utilized: particle image velocimetry to measure the turbulence statistics; the Rayleigh scattering method to measure the temperature fields of premixed turbulent flames; and the Mie scattering method to visualize the flame front contours of premixed turbulent flames. Three hydrocarbons (methane, ethane, and propane) were used as the fuel in the experiments. The turbulence was generated using different perforated plates mounted upstream of the burner exit. A series of comprehensive parameters including the thermal flame front thickness, characteristic flame height, mean flame brush thickness, mean volume of the turbulent flame region, two-dimensional flame front curvature, local flame front angle, two-dimensional flame surface density, wrinkled flame surface area, turbulent burning velocity, mean flamelet consumption velocity, mean turbulent flame stretch factor, mean turbulent Markstein length and number, and mean fuel consumption rate were systematically evaluated from the experimental data. The normalized preheat zone and reaction zone thicknesses decreased with increasing non-dimensional turbulence intensity in ultra-lean premixed turbulent flames under a constant equivalence ratio of 0.6, whereas they increased with increasing equivalence ratios from 0.6 to 1.0 under a constant bulk flow velocity. The normalized preheat zone and reaction zone thicknesses showed no overall trend with increasing non-dimensional longitudinal integral length scale. The normalized preheat zone and reaction zone thicknesses decreased with increasing Karlovitz number, suggesting that increasing the total stretch rate is the controlling mechanism in the reduction of flame front thickness for the experimental conditions studied in this thesis. In general, the leading edge and half-burning surface turbulent burning velocities were enhanced with increasing equivalence ratio from lean to stoichiometric mixtures, whereas they decreased with increasing equivalence ratio for rich mixtures. These velocities were enhanced with increasing total turbulence intensity. The leading edge and half-burning surface turbulent burning velocities for lean/stoichiometric mixtures were observed to be smaller than those for rich mixtures. The mean turbulent flame stretch factor displayed a dependence on the equivalence ratio and turbulence intensity.
Results show that the mean turbulent flame stretch factors for lean/stoichiometric and rich mixtures were not equal when the unstrained premixed laminar burning velocity, non-dimensional bulk flow velocity, non-dimensional turbulence intensity, and non-dimensional longitudinal integral length scale were kept constant.
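The stretch correction referred to above is conventionally written as a linear Markstein relation; the form below is the standard one and is shown only as a sketch (the thesis may use different normalisations).
\[
  S_{L} = S_{L}^{0} - \mathcal{L}\,\kappa, \qquad
  \mathrm{Ma} = \frac{\mathcal{L}}{\delta_{F}},
\]
where $S_{L}^{0}$ is the unstrained laminar burning velocity, $\kappa$ the total stretch rate (hydrodynamic strain plus curvature contributions), $\mathcal{L}$ the Markstein length, and $\delta_{F}$ the laminar flame thickness.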
Flame imaging using planar laser induced fluorescence of sulfur dioxide
NASA Astrophysics Data System (ADS)
Honza, Rene; Ding, Carl-Philipp; Dreizler, Andreas; Böhm, Benjamin
2017-09-01
Laser induced fluorescence of sulfur dioxide (SO2-PLIF) has been demonstrated as a useful tool for flame imaging. Advantage was taken of the strong temperature dependence of the SO2 fluorescence signal. SO2 fluorescence intensity increases by more than one order of magnitude if the temperature changes from ambient conditions to adiabatic flame temperatures of stoichiometric methane-air flames. This results in a steep gradient of SO2-PLIF intensities at the reaction zone, which can therefore be used as a reliable flame marker. SO2 can be excited electronically using the fourth harmonic of an Nd:YAG laser at 266 nm. This is an attractive alternative to OH-LIF, a well-recognized flame front marker, because no frequency-doubled dye lasers are needed. This simplifies the experimental setup and is advantageous for measurements at high repetition rates where dye bleaching can become an issue. To prove the performance of this approach, SO2-PLIF measurements were performed simultaneously with OH-PLIF on laminar premixed methane-air Bunsen flames for equivalence ratios between 0.9 and 1.25. These measurements were compared to 1D laminar flamelet simulations. The SO2 fluorescence signal was found to follow the temperature rise of the flame and is located closer to the steep temperature gradient than OH. Finally, the combined SO2- and OH-PLIF setup was applied to a spark ignition IC-engine to visualize the development of the early flame kernel.
Sankaran, Ramanan; Hawkes, Evatt R.; Yoo, Chun Sang; ...
2015-06-22
Direct numerical simulations of three-dimensional spatially-developing turbulent Bunsen flames were performed at three different turbulence intensities. We performed these simulations using a reduced methane–air chemical mechanism which was specifically tailored for the lean premixed conditions simulated here. A planar-jet turbulent Bunsen flame configuration was used in which turbulent preheated methane–air mixture at 0.7 equivalence ratio issued through a central jet and was surrounded by a hot laminar coflow of burned products. The turbulence characteristics at the jet inflow were selected such that combustion occurred in the thin reaction zones (TRZ) regime. At the lowest turbulence intensity, the conditions fall on the boundary between the TRZ regime and the corrugated flamelet regime, and progressively moved further into the TRZ regime with increasing turbulence intensity. The data from the three simulations were analyzed to understand the effect of turbulent stirring on the flame structure and thickness. Furthermore, statistical analysis of the data showed that the thermal preheat layer of the flame was thickened due to the action of turbulence, but the reaction zone was not significantly affected. A global and local analysis of the burning velocity of the flame was performed to compare the different flames. Detailed statistical averages of the flame speed were also obtained to study the spatial dependence of displacement speed and its correlation to strain rate and curvature.
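The displacement-speed statistics discussed above are usually defined on an isosurface of a reaction progress variable; the standard definitions are sketched below (the exact progress variable and weighting in the paper may differ).
\[
  S_{d} = \frac{1}{|\nabla c|}\,\frac{Dc}{Dt}, \qquad
  S_{d}^{*} = \frac{\rho\,S_{d}}{\rho_{u}}, \qquad
  \kappa_{m} = \nabla\cdot\mathbf{n}, \quad \mathbf{n} = -\frac{\nabla c}{|\nabla c|},
\]
where $S_{d}^{*}$ is the density-weighted displacement speed, $\rho_{u}$ the unburned-gas density, and $\kappa_{m}$ the flame curvature whose correlation with $S_{d}$ and with the tangential strain rate is examined.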
The scaling of performance and losses in miniature internal combustion engines
NASA Astrophysics Data System (ADS)
Menon, Shyam Kumar
Miniature glow ignition internal combustion (IC) piston engines are an off-the-shelf technology that could dramatically increase the endurance of miniature electric power supplies and the range and endurance of small unmanned air vehicles provided their overall thermodynamic efficiencies can be increased to 15% or better. This thesis presents the first comprehensive analysis of small (<500 g) piston engine performance. A unique dynamometer system is developed that is capable of making reliable measurements of engine performance and losses in these small engines. Methodologies are also developed for measuring volumetric, heat transfer, exhaust, mechanical, and combustion losses. These instruments and techniques are used to investigate the performance of seven single-cylinder, two-stroke, glow fueled engines ranging in size from 15 to 450 g (0.16 to 7.5 cm3 displacement). Scaling rules for power output, overall efficiency, and normalized power are developed from the data. These will be useful to developers of micro-air vehicles and miniature power systems. The data show that the minimum length scale of a thermodynamically viable piston engine based on present technology is approximately 3 mm. Incomplete combustion is the most important challenge as it accounts for 60-70% of total energy losses. Combustion losses are followed in order of importance by heat transfer, sensible enthalpy, and friction. A net heat release analysis based on in-cylinder pressure measurements suggests that a two-stage combustion process occurs at low engine speeds and equivalence ratios close to 1. Different theories based on burning mode and reaction kinetics are proposed to explain the observed results. High speed imaging of the combustion chamber suggests that a turbulent premixed flame with its origin in the vicinity of the glow plug is the primary driver of combustion. Placing miniature IC engines on a turbulent combustion regime diagram shows that they operate in the 'flamelet in eddy' regime whereas conventional-scale engines operate mostly in the 'wrinkled laminar flame sheet' regime. Taken together, the results show that the combustion process is the key obstacle to realizing the potential of small IC engines. Overcoming this obstacle will require new diagnostic techniques, measurements, combustion models, and high temperature materials.
On the Structure of Premixed Flames Subjected to Extreme Levels of Turbulence
NASA Astrophysics Data System (ADS)
Skiba, Aaron William
Developing next-generation propulsion and energy production devices that are efficient, cost-effective, and generate little to no harmful emissions will require highly-accurate, robust, yet computationally tractable turbulent combustion models. Models that accurately simulate turbulent premixed combustion problems are particularly important due to the fact that burning in a premixed mode can reduce exhaust emissions. A common tool employed to identify when a particular model might be more appropriate than others is the theoretical Borghi Diagram, which possesses boundaries that are meant to separate various regimes of combustion (i.e. where a particular model is superior to others). However, the derivations of these boundaries are merely based upon intuition and dimensional reasoning, rather than experimental evidence. This thesis aims to provide such evidence; furthermore, it proposes novel approaches to delineating regimes of combustion that are consistent with experimental results. To this end, high-fidelity flame structure measurements were applied to premixed methane-air Bunsen flames subjected to extreme levels of turbulence. Specifically, 28 cases were studied with turbulence levels (u'/SL) as high as 246, longitudinal integral length scales (Lx) as large as 43 mm, and turbulent Karlovitz (KaT) and Reynolds (ReT) numbers up to 533 and 99,000, respectively. Two techniques were employed to measure the preheat and reaction layer thicknesses of these flames. One consisted of planar laser-induced fluorescence (PLIF) imaging of CH radicals, while the other involved taking the product of simultaneously acquired PLIF images of formaldehyde (CH2O) and hydroxyl (OH) to produce "overlap-layers." Average preheat layer thicknesses are found to increase with increasing u'/SL and with axial distance from the burner (x/D). In contrast, average reaction layer thicknesses did not vary appreciably with either u'/SL or x/D. The reaction layers are also observed to remain continuous; that is, local extinction events are rarely observed. The results of this study, as well as those from prior investigations, display inconsistencies with predictions made by the theoretical Borghi Diagram. Therefore, a new Measured Regime Diagram is proposed wherein the Klimov-Williams criterion is replaced by a metric that relates the turbulent diffusivity (DT = u'L) to the molecular diffusivity within the preheat layer (D* = SL δF,L). Specifically, the line defined by DT/D* ≈ 180 does a substantially better job of separating thin flamelets from those with broadened preheat yet thin reaction layers (i.e. BP-TR flames). Additionally, the results suggest that the BP-TR regime extends well beyond what was previously theorized since neither broken nor broadened reaction layers were observed under conditions with Karlovitz numbers as high as 533. Overall, these efforts provide tremendous insights into the fundamental properties of extremely turbulent premixed flames. Ultimately, these insights will assist with the development and proper selection of accurate and robust numerical models.
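The proposed boundary reduces to a single diffusivity ratio that can be evaluated directly from measured flow and flame properties. A minimal sketch is given below, using the DT/D* ≈ 180 threshold quoted above; the argument names are illustrative.

def regime_metric(u_prime, integral_scale, s_l, delta_f):
    """Ratio of turbulent to flame diffusivity, D_T / D* = (u' L) / (S_L delta_F)."""
    return (u_prime * integral_scale) / (s_l * delta_f)

def preheat_layer_broadened(u_prime, integral_scale, s_l, delta_f, threshold=180.0):
    """Classify a flame against the measured boundary proposed above.

    Returns True when D_T/D* exceeds the ~180 threshold, i.e. broadened preheat
    (but still thin reaction) layers are expected. The threshold is taken from
    the text; delta_f denotes the laminar flame thickness.
    """
    return regime_metric(u_prime, integral_scale, s_l, delta_f) > threshold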
Does rational selection of training and test sets improve the outcome of QSAR modeling?
Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander
2012-10-22
Prior to using a quantitative structure-activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
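Of the rational division methods named above, the Kennard-Stone algorithm is the easiest to sketch: it seeds the training set with the two most distant samples and then repeatedly adds the sample farthest from its nearest selected neighbour. The implementation below is a minimal illustration of that description, not the code used in the study.

import numpy as np

def kennard_stone_split(X, n_train):
    """Select a training subset with the Kennard-Stone algorithm.

    X       : (n_samples, n_features) descriptor matrix
    n_train : number of samples to place in the training set
    Returns (train_idx, test_idx). Production implementations differ in details
    such as the distance metric and tie-breaking.
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    # Start from the two most distant samples
    selected = list(np.unravel_index(np.argmax(d), d.shape))
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_train and remaining:
        # Add the sample farthest from its nearest already-selected neighbour
        min_dist = d[np.ix_(remaining, selected)].min(axis=1)
        nxt = remaining[int(np.argmax(min_dist))]
        selected.append(nxt)
        remaining.remove(nxt)
    return np.array(selected), np.array(remaining)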
Structure and Soot Formation Properties of Laminar Flames
NASA Technical Reports Server (NTRS)
El-Leathy, A. M.; Xu, F.; Faeth, G. M.
2001-01-01
Soot formation within hydrocarbon-fueled flames is an important unresolved problem of combustion science for several reasons: soot emissions are responsible for more deaths than any other combustion-generated pollutant, thermal loads due to continuum radiation from soot limit the durability of combustors, thermal radiation from soot is mainly responsible for the growth and spread of unwanted fires, carbon monoxide emissions associated with soot emissions are responsible for most fire deaths, and limited understanding of soot processes in flames is a major impediment to the development of computational combustion. Motivated by these observations, soot processes within laminar premixed and nonpremixed (diffusion) flames are being studied during this investigation. The study is limited to laminar flames due to their experimental and computational tractability, noting the relevance of these results to practical flames through laminar flamelet concepts. Nonbuoyant flames are emphasized because buoyancy affects soot processes in laminar diffusion flames whereas effects of buoyancy are small for most practical flames. This study involves both ground- and space-based experiments, however, the following discussion will be limited to ground-based experiments because no space-based experiments were carried out during the report period. The objective of this work was to complete measurements in both premixed and nonpremixed flames in order to gain a better understanding of the structure of the soot-containing region and processes of soot nucleation and surface growth in these environments, with the latter information to be used to develop reliable ways of predicting soot properties in practical flames. The present discussion is brief; more details about the portions of the investigation considered here can be found in refs. 8-13.
Joe H. Scott; Robert E. Burgan
2005-01-01
This report describes a new set of standard fire behavior fuel models for use with Rothermel's surface fire spread model and the relationship of the new set to the original set of 13 fire behavior fuel models. To assist with transition to using the new fuel models, a fuel model selection guide, fuel model crosswalk, and set of fuel model photos are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zili; Nordhaus, William
2009-03-19
Over the duration of this project, we completed the main tasks set out in the initial proposal. These tasks include: setting up the basic platform in the GAMS language for the new RICE 2007 model; testing various model structures of RICE 2007; incorporating the PPP data set in the new RICE model; and developing a gridded data set for IA modeling.
Improving a Lecture-Size Molecular Model Set by Repurposing Used Whiteboard Markers
ERIC Educational Resources Information Center
Dragojlovic, Veljko
2015-01-01
Preparation of an inexpensive model set from whiteboard markers and either an HGS molecular model set or atoms made of wood is described. The model set is relatively easy to prepare and is sufficiently large to be suitable as an instructor set for use in lectures.
Large eddy simulation of soot evolution in an aircraft combustor
NASA Astrophysics Data System (ADS)
Mueller, Michael E.; Pitsch, Heinz
2013-11-01
An integrated kinetics-based Large Eddy Simulation (LES) approach for soot evolution in turbulent reacting flows is applied to the simulation of a Pratt & Whitney aircraft gas turbine combustor, and the results are analyzed to provide insights into the complex interactions of the hydrodynamics, mixing, chemistry, and soot. The integrated approach includes detailed models for soot, combustion, and the unresolved interactions between soot, chemistry, and turbulence. The soot model is based on the Hybrid Method of Moments and detailed descriptions of soot aggregates and the various physical and chemical processes governing their evolution. The detailed kinetics of jet fuel oxidation and soot precursor formation is described with the Radiation Flamelet/Progress Variable model, which has been modified to account for the removal of soot precursors from the gas-phase. The unclosed filtered quantities in the soot and combustion models, such as source terms, are closed with a novel presumed subfilter PDF approach that accounts for the high subfilter spatial intermittency of soot. For the combustor simulation, the integrated approach is combined with a Lagrangian parcel method for the liquid spray and state-of-the-art unstructured LES technology for complex geometries. Two overall fuel-to-air ratios are simulated to evaluate the ability of the model to make not only absolute predictions but also quantitative predictions of trends. The Pratt & Whitney combustor is a Rich-Quench-Lean combustor in which combustion first occurs in a fuel-rich primary zone characterized by a large recirculation zone. Dilution air is then added downstream of the recirculation zone, and combustion continues in a fuel-lean secondary zone. The simulations show that large quantities of soot are formed in the fuel-rich recirculation zone, and, furthermore, the overall fuel-to-air ratio dictates both the dominant soot growth process and the location of maximum soot volume fraction. At the higher fuel-to-air ratio, the maximum soot volume fraction is found inside the recirculation zone; at the lower fuel-to-air ratio, turbulent fluctuations in the mixture fraction promote the oxidation of soot inside the recirculation zone and suppress the accumulation of a large soot volume fraction. Downstream, soot exits the combustor in intermittent fuel-rich pockets that are not mixed during the injection of dilution air and subsequent secondary fuel-lean combustion. At the higher fuel-to-air ratio, the frequency of these fuel-rich pockets is increased, leading to higher soot emissions from the combustor. Quantitatively, the soot emissions from the combustor are overpredicted by about 50%, which is a substantial improvement over previous works utilizing RANS to predict such emissions. In addition, the ratio of soot emissions between the two fuel-to-air ratios predicted by LES compares very favorably with the experimental measurements. Furthermore, soot growth is dominated by an acetylene-based pathway rather than an aromatic-based pathway, which is usually the dominant mechanism in nonpremixed flames. This finding is the result of the interactions between the hydrodynamics, mixing, chemistry, and soot in the recirculation zone and the resulting residence times of soot at various mixture fractions (compositions), which are not the same in this complex recirculating flow as in nonpremixed jet flames.
A new region-edge based level set model with applications to image segmentation
NASA Astrophysics Data System (ADS)
Zhi, Xuhao; Shen, Hong-Bin
2018-04-01
The level set model has advantages in handling complex shapes and topological changes, and is widely used in image processing tasks. Image segmentation oriented level set models can be grouped into region-based models and edge-based models, both of which have merits and drawbacks. Region-based level set models rely on fitting to the color intensity of separated regions but are not sensitive to edge information. Edge-based level set models evolve by fitting to local gradient information but are easily affected by noise. We propose a region-edge based level set model, which incorporates saliency information into the energy function and fuses color intensity with local gradient information. The evolution of the proposed model is implemented by a hierarchical two-stage protocol, and the experimental results show flexible initialization, robust evolution and precise segmentation.
Modelling uncertainty with generalized credal sets: application to conjunction and decision
NASA Astrophysics Data System (ADS)
Bronevich, Andrey G.; Rozenberg, Igor N.
2018-01-01
To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule for contradictory sources of information, introduce constructions similar to natural extension in the theory of imprecise probabilities, and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We describe ways in which the introduced model can be applied to decision problems.
Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model
NASA Astrophysics Data System (ADS)
Kuznetsov, A. V.; Makaryants, G. M.
2018-01-01
There are many studies on gas turbine engine identification using dynamic neural network models. The identification process should minimize the errors between the model and the real object. However, questions about how the training data sets for such neural networks are constructed are usually overlooked. This article presents a study of the influence of the data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signals were used to create the training and testing data sets of the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created, one for each type of training data set, and each neural network was tested against all four types of test data sets. As a result, 16 transition processes obtained from the four neural networks and the four test data sets were compared with the corresponding solutions of the thermodynamic model. The errors of all neural networks were compared for each test data set, yielding the error value ranges for each test data set. It is shown that these error ranges are small; therefore, the influence of the data set type on identification accuracy is low.
A computational approach to compare regression modelling strategies in prediction research.
Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H
2016-08-25
It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9 to 94.9 %, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
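As an illustration of the kind of a priori strategy comparison described above, the sketch below scores two candidate logistic-regression strategies (effectively unpenalized maximum likelihood versus ridge shrinkage) by their cross-validated Brier scores. The synthetic data, the two strategies, and the scikit-learn calls are illustrative assumptions, not the five strategies or clinical data sets used in the study.

```python
# Illustrative sketch: compare two logistic-regression modelling strategies
# by cross-validated Brier score on a synthetic binary-outcome data set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

strategies = {
    "maximum likelihood": LogisticRegression(C=1e6, max_iter=1000),  # effectively unpenalized
    "ridge shrinkage":    LogisticRegression(C=0.1, max_iter=1000),  # shrunken coefficients
}

for name, model in strategies.items():
    # Out-of-fold predicted probabilities, so the Brier score is not optimistic.
    p = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    brier = np.mean((p - y) ** 2)
    print(f"{name}: Brier score = {brier:.3f}")
```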
ERIC Educational Resources Information Center
Huitzing, Hiddo A.
2004-01-01
This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…
Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick
2018-03-01
The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size while retaining the range, or 'chemical space', of the key descriptors, in order to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel resulted in the best models for the majority of data sets, and these exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models, and models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
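A minimal sketch of hyperparameter optimisation for a Gaussian Process model is given below, assuming scikit-learn; it grid-searches the initial RBF length-scale and noise settings on synthetic data. The RBF/WhiteKernel combination and the data are stand-ins, not the Hyper-prior Smoothbox kernel or the permeability data sets referred to above.

```python
# Illustrative sketch: grid search over fixed Gaussian Process kernel settings,
# scored by cross-validated R^2 on synthetic regression data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

param_grid = {
    "kernel": [RBF(length_scale=l) + WhiteKernel(noise_level=0.1)
               for l in (0.1, 1.0, 10.0)],
    "alpha": [1e-10, 1e-2],
}
# optimizer=None keeps each candidate kernel fixed, so the grid search alone
# controls the hyperparameters being compared.
gp = GaussianProcessRegressor(normalize_y=True, optimizer=None)
search = GridSearchCV(gp, param_grid, cv=5, scoring="r2").fit(X, y)
print("best kernel:", search.best_params_["kernel"])
print("best CV R^2:", round(search.best_score_, 3))
```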
Density-based cluster algorithms for the identification of core sets
NASA Astrophysics Data System (ADS)
Lemke, Oliver; Keller, Bettina G.
2016-10-01
The core-set approach is a discretization method for Markov state models of complex molecular dynamics. Core sets are disjoint metastable regions in the conformational space, which need to be known prior to the construction of the core-set model. We propose to use density-based cluster algorithms to identify the cores. We compare three different density-based cluster algorithms: the CNN, the DBSCAN, and the Jarvis-Patrick algorithm. While the core-set models based on the CNN and DBSCAN clustering are well-converged, constructing core-set models based on the Jarvis-Patrick clustering cannot be recommended. In a well-converged core-set model, the number of core sets is up to an order of magnitude smaller than the number of states in a conventional Markov state model with comparable approximation error. Moreover, using the density-based clustering one can extend the core-set method to systems which are not strongly metastable. This is important for the practical application of the core-set method because most biologically interesting systems are only marginally metastable. The key point is to perform a hierarchical density-based clustering while monitoring the structure of the metric matrix which appears in the core-set method. We test this approach on a molecular-dynamics simulation of a highly flexible 14-residue peptide. The resulting core-set models have a high spatial resolution and can distinguish between conformationally similar yet chemically different structures, such as register-shifted hairpin structures.
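The sketch below illustrates the general idea of density-based identification of core sets, assuming scikit-learn's DBSCAN on a toy two-dimensional projection with two dense basins and sparse transition points. The toy data and parameter values are assumptions; they do not reproduce the CNN-based hierarchical procedure or the peptide simulation analysed above.

```python
# Illustrative sketch: density-based clustering (DBSCAN) to identify dense
# "core" regions in a toy 2-D projection of a conformational space.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two metastable basins plus sparse transition-region points.
basin_a = rng.normal(loc=(0.0, 0.0), scale=0.15, size=(500, 2))
basin_b = rng.normal(loc=(2.0, 1.0), scale=0.15, size=(500, 2))
transition = rng.uniform(low=(0.0, 0.0), high=(2.0, 1.0), size=(50, 2))
points = np.vstack([basin_a, basin_b, transition])

labels = DBSCAN(eps=0.1, min_samples=20).fit_predict(points)
for core_id in sorted(set(labels) - {-1}):
    print(f"core set {core_id}: {np.sum(labels == core_id)} frames")
print(f"unassigned (transition-region) frames: {np.sum(labels == -1)}")
```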
Raymond, G M; Bassingthwaighte, J B
This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low dose aspirin for cardiovascular health. Three models were tested: (1) a first order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave reliable estimates of the Michaelis constant only for the medium level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.) Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model improves confidence in the results. This modeling effort is also an example of reproducible science, available at html://www.physiome.org/jsim/models/webmodel/NSR/SalicylicAcidClearance.
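A hedged sketch of the central fitting idea, simultaneous Michaelis-Menten fitting of pooled data spanning low, medium and high concentration ranges, is shown below using scipy. The synthetic concentrations, rates and parameter values are illustrative only and are not the salicylate data or the JSim models referenced above.

```python
# Illustrative sketch: fit one Michaelis-Menten elimination model to pooled
# (concentration, elimination-rate) data from several hypothetical studies.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(c, vmax, km):
    """Elimination rate as a function of plasma concentration (mg/L)."""
    return vmax * c / (km + c)

rng = np.random.default_rng(1)
true_vmax, true_km = 50.0, 15.0  # hypothetical "true" values for the demo
concentrations = np.concatenate([
    rng.uniform(0.5, 5.0, 20),      # low-dose study
    rng.uniform(10.0, 60.0, 20),    # medium-concentration study
    rng.uniform(100.0, 400.0, 20),  # overdose study
])
rates = michaelis_menten(concentrations, true_vmax, true_km)
rates += rng.normal(scale=0.05 * rates)  # proportional measurement noise

# Fitting the pooled data constrains Km far better than any single range alone.
(vmax_hat, km_hat), cov = curve_fit(michaelis_menten, concentrations, rates,
                                    p0=(30.0, 10.0))
km_se = np.sqrt(cov[1, 1])
print(f"Vmax = {vmax_hat:.1f}, Km = {km_hat:.1f} +/- {km_se:.1f} mg/L")
```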
Consensus Modeling of Oral Rat Acute Toxicity
An acute toxicity dataset (oral rat LD50) with about 7400 compounds was compiled from the ChemIDplus database. This dataset was divided into a modeling set and a prediction set. The compounds in the prediction set were selected so that they were present in the modeling set used...
NASA Astrophysics Data System (ADS)
Wang, G.; Mayes, M. A.
2017-12-01
Microbially-explicit soil organic matter (SOM) decomposition models are thought to be more biologically realistic than conventional models. Current testing or evaluation of microbial models mainly uses steady-state analysis with time-invariant forces (i.e., soil temperature, moisture and litter input). The findings from such simplified analyses are assumed to be capable of representing the model responses in field soil conditions with seasonal driving forces. Here we show that simulations with seasonal forcing may lead to findings distinct from steady-state analyses with time-invariant forcing data. We evaluate the response of soil organic C (SOC) to litter addition (L+) in a subtropical pine forest using the calibrated Microbial-ENzyme Decomposition (MEND) model. We implemented two sets of modeling analyses, each including two scenarios, i.e., control (CR) vs. litter-addition (L+). The first set (Set1) uses fixed soil temperature and moisture, with constant litter input under Scenario CR and increased constant litter input under Scenario L+. The second set (Set2) employs hourly soil temperature and moisture and monthly litter input under Scenario CR. Under Scenario L+ of Set2, a logistic function with an upper plateau represents the increasing trend of litter input to SOM. We conduct long-term simulations to ensure that the models reach steady states for Set1 or dynamic equilibrium for Set2. Litter addition in Set2 causes an increase of SOC by 29%. However, the steady-state SOC pool sizes of Set1 would not respond to L+ as long as the chemical composition of litter remained the same. Our results indicate the necessity of implementing dynamic model simulations with seasonal forcing data, which can lead to modeling results qualitatively different from steady-state analysis with time-invariant forcing data.
Votano, Joseph R; Parham, Marc; Hall, L Mark; Hall, Lowell H; Kier, Lemont B; Oloff, Scott; Tropsha, Alexander
2006-11-30
Four modeling techniques, using topological descriptors to represent molecular structure, were employed to produce models of human serum protein binding (% bound) on a data set of 1008 experimental values, carefully screened from publicly available sources. To our knowledge, this is the largest data set on human serum protein binding reported for QSAR modeling. The data was partitioned into a training set of 808 compounds and an external validation test set of 200 compounds. Partitioning was accomplished by clustering the compounds in a structure descriptor space, so that random sampling of 20% of the whole data set produced an external test set that is representative of the training set with respect to both structure and protein binding values. The four modeling techniques include multiple linear regression (MLR), artificial neural networks (ANN), k-nearest neighbors (kNN), and support vector machines (SVM). With the exception of the MLR model, the ANN, kNN, and SVM QSARs were ensemble models. Training set correlation coefficients and mean absolute error ranged from r2=0.90 and MAE=7.6 for ANN to r2=0.61 and MAE=16.2 for MLR. Prediction results from the validation set yielded correlation coefficients and mean absolute errors which ranged from r2=0.70 and MAE=14.1 for ANN to a low of r2=0.59 and MAE=18.3 for the SVM model. Structure descriptors that contribute significantly to the models are discussed and compared with those found in other published models. For the ANN model, structure descriptor trends with respect to their effects on predicted protein binding can assist the chemist in structure modification during the drug design process.
Economic communication model set
NASA Astrophysics Data System (ADS)
Zvereva, Olga M.; Berg, Dmitry B.
2017-06-01
This paper details findings from research targeted at the investigation of economic communications using agent-based models. The agent-based model set was engineered to simulate economic communications. Money in the form of internal and external currencies was introduced into the models to support exchanges in communications. Every model, while based on the general concept, has its own peculiarities in algorithm and input data set, since each was engineered to solve a specific problem. Several data sets of different origins were used in the experiments: theoretical sets were estimated on the basis of the static Leontief equilibrium equation, and the real set was constructed from statistical data. During the simulation experiments, the communication process was observed dynamically and system macroparameters were estimated. This research confirmed that the combination of an agent-based model and a mathematical model can produce a synergetic effect.
Complex fuzzy soft expert sets
NASA Astrophysics Data System (ADS)
Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak
2017-04-01
Complex fuzzy sets and their accompanying theory, although still in their infancy, have proven to be superior to classical type-1 fuzzy sets, due to their ability to represent time-periodic problem parameters and to capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real-world problems. However, two major problems are inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they have no mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, with the added advantage of allowing users to know the opinion of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, while offering a higher level of computational efficiency compared to similar models in the literature.
Protein Models Docking Benchmark 2
Anishchenko, Ivan; Kundrotas, Petras J.; Tuzikov, Alexander V.; Vakser, Ilya A.
2015-01-01
Structural characterization of protein-protein interactions is essential for our ability to understand life processes. However, only a fraction of known proteins have experimentally determined structures. Such structures provide templates for modeling of a large part of the proteome, where individual proteins can be docked by template-free or template-based techniques. Still, the sensitivity of the docking methods to the inherent inaccuracies of protein models, as opposed to the experimentally determined high-resolution structures, remains largely untested, primarily due to the absence of appropriate benchmark set(s). Structures in such a set should have pre-defined inaccuracy levels and, at the same time, resemble actual protein models in terms of structural motifs/packing. The set should also be large enough to ensure statistical reliability of the benchmarking results. We present a major update of the previously developed benchmark set of protein models. For each interactor, six models were generated with the model-to-native Cα RMSD in the 1 to 6 Å range. The models in the set were generated by a new approach, which corresponds to the actual modeling of new protein structures in the “real case scenario,” as opposed to the previous set, where a significant number of structures were model-like only. In addition, the larger number of complexes (165 vs. 63 in the previous set) increases the statistical reliability of the benchmarking. We estimated the highest accuracy of the predicted complexes (according to CAPRI criteria), which can be attained using the benchmark structures. The set is available at http://dockground.bioinformatics.ku.edu. PMID:25712716
2013-01-01
Background While a large body of work exists on comparing and benchmarking descriptors of molecular structures, a similar comparison of protein descriptor sets is lacking. Hence, in the current work a total of 13 amino acid descriptor sets have been benchmarked with respect to their ability to establish bioactivity models. The descriptor sets included in the study are Z-scales (3 variants), VHSE, T-scales, ST-scales, MS-WHIM, FASGAI, BLOSUM, a novel protein descriptor set (termed ProtFP (4 variants)), and in addition we created and benchmarked three pairs of descriptor combinations. Prediction performance was evaluated in seven structure-activity benchmarks which comprise Angiotensin Converting Enzyme (ACE) dipeptidic inhibitor data, and three proteochemometric data sets, namely (1) GPCR ligands modeled against a GPCR panel, (2) enzyme inhibitors (NNRTIs) with associated bioactivities against a set of HIV enzyme mutants, and (3) enzyme inhibitors (PIs) with associated bioactivities on a large set of HIV enzyme mutants. Results The amino acid descriptor sets compared here show similar performance (<0.1 log units RMSE difference and <0.1 difference in MCC), while errors for individual proteins were in some cases found to be larger than those resulting from descriptor set differences (>0.3 log units RMSE difference and >0.7 difference in MCC). Combining different descriptor sets generally leads to better modeling performance than utilizing individual sets. The best performers were Z-scales (3) combined with ProtFP (Feature), or Z-Scales (3) combined with an average Z-Scale value for each target, while ProtFP (PCA8), ST-Scales, and ProtFP (Feature) rank last. Conclusions While amino acid descriptor sets capture different aspects of amino acids, their ability to be used for bioactivity modeling is still – on average – surprisingly similar. Still, combining sets describing complementary information consistently leads to a small improvement in modeling performance (average MCC 0.01 better, average RMSE 0.01 log units lower). Finally, performance differences exist between the targets compared, thereby underlining that choosing an appropriate descriptor set is fundamental to bioactivity modeling, both from the ligand as well as the protein side. PMID:24059743
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
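The sketch below imitates the error-simulation step under stated assumptions: a fraction of modeling-set activities is randomized and the deterioration of cross-validated performance is recorded. The random forest classifier and synthetic endpoint are stand-ins for the curated data sets and the 1800 QSAR models described above.

```python
# Illustrative sketch: simulate "experimental errors" by randomizing the
# activities of part of the modeling set, then record how cross-validated
# performance deteriorates as the error ratio grows.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)
rng = np.random.default_rng(0)

for error_ratio in (0.0, 0.1, 0.2, 0.4):
    y_noisy = y.copy()
    flip = rng.choice(len(y), size=int(error_ratio * len(y)), replace=False)
    y_noisy[flip] = rng.integers(0, 2, size=len(flip))  # randomized "activities"
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    score = cross_val_score(model, X, y_noisy, cv=5,
                            scoring="balanced_accuracy").mean()
    print(f"simulated error ratio {error_ratio:.0%}: "
          f"CV balanced accuracy = {score:.3f}")
```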
Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.
Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin
2015-02-01
To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
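A minimal sketch of the Pareto frontier idea is given below: an input set is retained only if no other set fits every calibration target at least as well and at least one target strictly better. The random error matrix stands in for the per-target goodness-of-fit values of the Markov and TAVR models; it is not the authors' calibration code.

```python
# Illustrative sketch: flag calibration input sets on the Pareto frontier.
# Lower error means a better fit to the corresponding calibration target.
import numpy as np

def pareto_frontier(errors):
    """errors: (n_input_sets, n_targets) array of per-target fit errors."""
    n = errors.shape[0]
    on_frontier = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(errors[j] <= errors[i]) \
                      and np.any(errors[j] < errors[i]):
                on_frontier[i] = False  # input set i is dominated by set j
                break
    return on_frontier

rng = np.random.default_rng(0)
errors = rng.random((200, 3))   # 200 candidate input sets, 3 calibration targets
frontier = pareto_frontier(errors)
print(f"{frontier.sum()} of {len(errors)} input sets are Pareto-optimal")
```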
Kuswandi, Bambang; Putri, Fitra Karima; Gani, Agus Abdul; Ahmad, Musa
2015-12-01
The use of chemometrics to analyse infrared spectra to predict pork adulteration in beef jerky (dendeng) was explored. In the first step, pork was analysed in the beef jerky formulation by blending the beef jerky with pork at 5-80 % levels; the samples were then powdered and classified into a training set and a test set. In the second step, the spectra of the two sets were recorded by Fourier Transform Infrared (FTIR) spectroscopy using an attenuated total reflection (ATR) cell, on the basis of spectral data in the frequency region 4000-700 cm(-1). The spectra were categorised into four data sets, i.e. (a) spectra in the whole region as data set 1; (b) spectra in the fingerprint region (1500-600 cm(-1)) as data set 2; (c) spectra in the whole region with treatment as data set 3; and (d) spectra in the fingerprint region with treatment as data set 4. In the third step, chemometric analyses were carried out on the data sets using three class-modelling techniques (i.e. LDA, SIMCA, and SVM). Finally, the best-performing model for adulteration analysis of the samples was selected and compared with the ELISA method. From the chemometric results, the LDA model on data set 1 was found to be the best model, since it could classify and predict the samples tested with 100 % accuracy. The LDA model was applied to real samples of beef jerky marketed in Jember, and the results showed that the LDA model developed was in good agreement with the ELISA method.
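A small sketch of the LDA class-modelling step is shown below, assuming scikit-learn and synthetic spectra-like vectors in which one band shifts with adulteration. It is not the authors' FTIR workflow; real ATR-FTIR spectra, the fingerprint-region selection and the ELISA comparison are outside the sketch.

```python
# Illustrative sketch: LDA class-modelling of adulterated vs. pure samples
# from spectra-like feature vectors (synthetic stand-ins for FTIR spectra).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_vars = 50                                    # pretend spectral variables
pure = rng.normal(size=(60, n_vars))
adulterated = rng.normal(size=(60, n_vars))
adulterated[:, 20:30] += 0.8                   # a band that shifts with adulteration

X = np.vstack([pure, adulterated])
y = np.array([0] * 60 + [1] * 60)              # 0 = pure, 1 = adulterated
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("test-set accuracy:", round(lda.score(X_test, y_test), 3))
```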
Archaeological predictive model set.
DOT National Transportation Integrated Search
2015-03-01
This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...
Quantitative structure-activity relationship modeling of rat acute toxicity by oral exposure.
Zhu, Hao; Martin, Todd M; Ye, Lin; Sedykh, Alexander; Young, Douglas M; Tropsha, Alexander
2009-12-01
Few quantitative structure-activity relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity end points. In this study, a comprehensive data set of 7385 compounds with their most conservative lethal dose (LD(50)) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire data set was selected that included all 3472 compounds used in TOPKAT's training set. The remaining 3913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R(2) of linear regression between actual and predicted LD(50) values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R(2) ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD(50) for every compound using all five models. The consensus models afforded higher prediction accuracy for the external validation data set with the higher coverage as compared to individual constituent models. The validated consensus LD(50) models developed in this study can be used as reliable computational predictors of in vivo acute toxicity.
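The sketch below illustrates the consensus idea under stated assumptions: predictions from several model types are averaged, and a crude distance-based applicability domain masks external compounds far from the modeling set. The regressors, the synthetic endpoint and the 90th-percentile threshold are illustrative, not the five QSAR model types or the LD50 data used in the study.

```python
# Illustrative sketch: consensus prediction of a continuous endpoint by
# averaging several model types, with a crude distance-based applicability
# domain that masks out-of-domain external compounds.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor, NearestNeighbors

X, y = make_regression(n_samples=1000, n_features=40, noise=20.0, random_state=0)
X_model, X_ext, y_model, y_ext = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

models = [RandomForestRegressor(n_estimators=200, random_state=0),
          KNeighborsRegressor(n_neighbors=5),
          Ridge(alpha=1.0)]
preds = np.column_stack([m.fit(X_model, y_model).predict(X_ext) for m in models])
consensus = preds.mean(axis=1)   # average the individual model predictions

# Applicability domain: flag external compounds whose mean distance to the
# 5 nearest modeling-set neighbours is unusually large.
dist, _ = NearestNeighbors(n_neighbors=5).fit(X_model).kneighbors(X_ext)
in_domain = dist.mean(axis=1) < np.percentile(dist.mean(axis=1), 90)

print("consensus R^2, all external compounds:   ",
      round(r2_score(y_ext, consensus), 3))
print("consensus R^2, in-domain compounds only: ",
      round(r2_score(y_ext[in_domain], consensus[in_domain]), 3))
```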
Rank Order Entropy: why one metric is not enough
McLellan, Margaret R.; Ryan, M. Dominic; Breneman, Curt M.
2011-01-01
The use of Quantitative Structure-Activity Relationship models to address problems in drug discovery has a mixed history, generally resulting from the mis-application of QSAR models that were either poorly constructed or used outside of their domains of applicability. This situation has motivated the development of a variety of model performance metrics (r2, PRESS r2, F-tests, etc) designed to increase user confidence in the validity of QSAR predictions. In a typical workflow scenario, QSAR models are created and validated on training sets of molecules using metrics such as Leave-One-Out or many-fold cross-validation methods that attempt to assess their internal consistency. However, few current validation methods are designed to directly address the stability of QSAR predictions in response to changes in the information content of the training set. Since the main purpose of QSAR is to quickly and accurately estimate a property of interest for an untested set of molecules, it makes sense to have a means at hand to correctly set user expectations of model performance. In fact, the numerical value of a molecular prediction is often less important to the end user than knowing the rank order of that set of molecules according to their predicted endpoint values. Consequently, a means for characterizing the stability of predicted rank order is an important component of predictive QSAR. Unfortunately, none of the many validation metrics currently available directly measure the stability of rank order prediction, making the development of an additional metric that can quantify model stability a high priority. To address this need, this work examines the stabilities of QSAR rank order models created from representative data sets, descriptor sets, and modeling methods that were then assessed using Kendall Tau as a rank order metric, upon which the Shannon Entropy was evaluated as a means of quantifying rank-order stability. Random removal of data from the training set, also known as Data Truncation Analysis (DTA), was used as a means for systematically reducing the information content of each training set while examining both rank order performance and rank order stability in the face of training set data loss. The premise for DTA ROE model evaluation is that the response of a model to incremental loss of training information will be indicative of the quality and sufficiency of its training set, learning method, and descriptor types to cover a particular domain of applicability. This process is termed a “rank order entropy” evaluation, or ROE. By analogy with information theory, an unstable rank order model displays a high level of implicit entropy, while a QSAR rank order model which remains nearly unchanged during training set reductions would show low entropy. In this work, the ROE metric was applied to 71 data sets of different sizes, and was found to reveal more information about the behavior of the models than traditional metrics alone. Stable, or consistently performing models, did not necessarily predict rank order well. Models that performed well in rank order did not necessarily perform well in traditional metrics. In the end, it was shown that ROE metrics suggested that some QSAR models that are typically used should be discarded. ROE evaluation helps to discern which combinations of data set, descriptor set, and modeling methods lead to usable models in prioritization schemes, and provides confidence in the use of a particular model within a specific domain of applicability. PMID:21875058
Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang
2014-12-01
The choice of algorithm for calibration set selection is one of the key factors in building a good NIR quantitative model. Different algorithms exist for calibration set selection, such as the Random Sampling (RS) algorithm, the Conventional Selection (CS) algorithm, the Kennard-Stone (KS) algorithm and the Sample set Partitioning based on joint X-Y distances (SPXY) algorithm. However, systematic comparisons among these algorithms are lacking. In the present paper, NIR quantitative models for determining the asiaticoside content in Centella total glucosides were established, seven model indexes were classified and selected, and the effects of the CS, KS and SPXY algorithms for calibration set selection on the accuracy and robustness of the NIR quantitative models were investigated. The accuracy indexes of models whose calibration set was selected by the SPXY algorithm were significantly different from those of models whose calibration set was selected by the CS or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, were not significantly different. Therefore, the SPXY algorithm for calibration set selection can improve the predictive accuracy of NIR quantitative models for determining asiaticoside content in Centella total glucosides without significantly affecting the robustness of the models, which provides a reference for choosing the appropriate calibration set selection algorithm when NIR quantitative models are established for solid systems of traditional Chinese medicine.
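For reference, a plain-NumPy sketch of the Kennard-Stone selection used as one of the compared algorithms is given below; SPXY follows the same max-min logic but augments the spectral distance with a distance in the y-variable. The random "spectra" are an assumption, not the Centella NIR data.

```python
# Illustrative sketch: Kennard-Stone selection of a calibration set.
# SPXY uses the same max-min logic but adds a distance in the y-variable.
import numpy as np

def kennard_stone(X, n_select):
    """Return indices of n_select samples chosen by the Kennard-Stone algorithm."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Start with the two most distant samples.
    selected = list(np.unravel_index(np.argmax(dist), dist.shape))
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_select:
        # Pick the remaining sample whose distance to its nearest already
        # selected sample is largest.
        d_nearest = dist[np.ix_(remaining, selected)].min(axis=1)
        nxt = remaining[int(np.argmax(d_nearest))]
        selected.append(nxt)
        remaining.remove(nxt)
    return np.array(selected)

rng = np.random.default_rng(0)
spectra = rng.random((100, 20))            # 100 samples, 20 spectral variables
calibration_idx = kennard_stone(spectra, 70)
print("calibration set size:", len(calibration_idx))
```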
van der Ploeg, Tjeerd; Nieboer, Daan; Steyerberg, Ewout W
2016-10-01
Prediction of medical outcomes may potentially benefit from using modern statistical modeling techniques. We aimed to externally validate modeling strategies for prediction of 6-month mortality of patients suffering from traumatic brain injury (TBI) with predictor sets of increasing complexity. We analyzed individual patient data from 15 different studies including 11,026 TBI patients. We consecutively considered a core set of predictors (age, motor score, and pupillary reactivity), an extended set with computed tomography scan characteristics, and a further extension with two laboratory measurements (glucose and hemoglobin). With each of these sets, we predicted 6-month mortality using default settings with five statistical modeling techniques: logistic regression (LR), classification and regression trees, random forests (RFs), support vector machines (SVM) and neural nets. For external validation, a model developed on one of the 15 data sets was applied to each of the 14 remaining sets. This process was repeated 15 times for a total of 630 validations. The area under the receiver operating characteristic curve (AUC) was used to assess the discriminative ability of the models. For the most complex predictor set, the LR models performed best (median validated AUC value, 0.757), followed by RF and support vector machine models (median validated AUC value, 0.735 and 0.732, respectively). With each predictor set, the classification and regression trees models showed poor performance (median validated AUC value, <0.7). The variability in performance across the studies was smallest for the RF- and LR-based models (inter quartile range for validated AUC values from 0.07 to 0.10). In the area of predicting mortality from TBI, nonlinear and nonadditive effects are not pronounced enough to make modern prediction methods beneficial. Copyright © 2016 Elsevier Inc. All rights reserved.
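A minimal sketch of the cross-study external validation loop is shown below, assuming scikit-learn: a model is developed on each "study" and validated on every other one, and the external AUCs are collected. The synthetic data, the five-study grouping and the logistic regression model are illustrative stand-ins for the 15 TBI studies and the five modeling techniques compared above.

```python
# Illustrative sketch: develop a model on each "study" and validate it on every
# other study, collecting the external AUCs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1500, n_features=8, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)
study = rng.integers(0, 5, size=len(y))   # pretend the data come from 5 studies

aucs = []
for dev in np.unique(study):
    model = LogisticRegression(max_iter=1000).fit(X[study == dev], y[study == dev])
    for val in np.unique(study):
        if val == dev:
            continue
        p = model.predict_proba(X[study == val])[:, 1]
        aucs.append(roc_auc_score(y[study == val], p))

print(f"{len(aucs)} external validations, median AUC = {np.median(aucs):.3f}")
```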
Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki
2011-04-01
In this study we sought to evaluate and emphasize the importance of radiobiological parameter selection and implementation in normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, minimum and maximum radiobiological parameter sets were selected from the published sets applied in the literature, and a theoretical mean parameter set was computed. In order to investigate potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and LKB models, estimating radiation-induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was radiation pneumonitis. Each model was represented by a certain dose-response range when the selected parameter sets were applied. Comparing the models over these ranges revealed a large area of coincidence. If the parameter uncertainties (standard deviations) are included in the models, their area of coincidence may be enlarged, constraining their predictive ability even further. The selection of the proper radiobiological parameter set for a given clinical endpoint is crucial. Published parameter values are not definitive but should be accompanied by uncertainties, and one should be very careful when applying them to NTCP models. Correct selection and proper implementation of published parameters provide a reasonably accurate fit of the NTCP models to the considered endpoint.
Rough set classification based on quantum logic
NASA Astrophysics Data System (ADS)
Hassan, Yasser F.
2017-11-01
By combining the advantages of quantum computing and soft computing, this paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as a quantum logic theory. Rough approximations are essential elements of rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a quantum similarity relation, which is presented here. Theoretical analyses demonstrate that the new quantum rough set model has a new type of decision rule with less redundancy, which can be used to give accurate classifications using the principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than in terms of logic or sets. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.
Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E
2013-01-01
Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the prediction power of a QSAR model. Early QSAR models selected training sets in a subjective way and solutes in the training set were relatively homogenous. More recently, statistical methods such as D-optimal design or space-filling design have been applied but such methods are not always ideal. This paper describes a comprehensive procedure to select training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', which is a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].
Kalderstam, Jonas; Edén, Patrik; Bendahl, Pär-Ola; Strand, Carina; Fernö, Mårten; Ohlsson, Mattias
2013-06-01
The concordance index (c-index) is the standard way of evaluating the performance of prognostic models in the presence of censored data. Constructing prognostic models using artificial neural networks (ANNs) is commonly done by training on error functions which are modified versions of the c-index. Our objective was to demonstrate the capability of training directly on the c-index and to evaluate our approach compared to the Cox proportional hazards model. We constructed a prognostic model using an ensemble of ANNs which were trained using a genetic algorithm. The individual networks were trained on a non-linear artificial data set divided into a training and test set both of size 2000, where 50% of the data was censored. The ANNs were also trained on a data set consisting of 4042 patients treated for breast cancer spread over five different medical studies, 2/3 used for training and 1/3 used as a test set. A Cox model was also constructed on the same data in both cases. The two models' c-indices on the test sets were then compared. The ranking performance of the models is additionally presented visually using modified scatter plots. Cross validation on the cancer training set did not indicate any non-linear effects between the covariates. An ensemble of 30 ANNs with one hidden neuron was therefore used. The ANN model had almost the same c-index score as the Cox model (c-index=0.70 and 0.71, respectively) on the cancer test set. Both models identified similarly sized low risk groups with at most 10% false positives, 49 for the ANN model and 60 for the Cox model, but repeated bootstrap runs indicate that the difference was not significant. A significant difference could however be seen when applied on the non-linear synthetic data set. In that case the ANN ensemble managed to achieve a c-index score of 0.90 whereas the Cox model failed to distinguish itself from the random case (c-index=0.49). We have found empirical evidence that ensembles of ANN models can be optimized directly on the c-index. Comparison with a Cox model indicates that near identical performance is achieved on a real cancer data set while on a non-linear data set the ANN model is clearly superior. Copyright © 2013 Elsevier B.V. All rights reserved.
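A small sketch of the evaluation metric itself, the concordance index for right-censored data, is given below in plain Python; it counts comparable pairs and the fraction that are concordant with the predicted risk. It is not the authors' genetic-algorithm training code, and the simulated times, events and risks are assumptions.

```python
# Illustrative sketch: concordance index (c-index) for right-censored data.
# A pair (i, j) is comparable if the subject with the shorter time had an event;
# it is concordant if that subject also received the higher predicted risk.
import numpy as np

def concordance_index(time, event, risk):
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:   # comparable pair
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5                  # ties count half
    return concordant / comparable

rng = np.random.default_rng(0)
time = rng.exponential(scale=5.0, size=300)
event = rng.integers(0, 2, size=300)                   # 1 = event, 0 = censored
risk = -time + rng.normal(scale=2.0, size=300)         # higher risk, shorter time
print("c-index:", round(concordance_index(time, event, risk), 3))
```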
A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks
NASA Astrophysics Data System (ADS)
Haijun, Xiong; Qi, Zhang
2016-08-01
The workload of relay protection setting calculation in multi-loop networks can be reduced effectively by optimizing the setting calculation sequence. A new method for determining the setting calculation sequence of directional distance relay protection in multi-loop networks, based on a minimum broken nodes cost vector (MBNCV), is proposed to address the shortcomings of current methods. Existing methods based on the minimum breakpoint set (MBPS) produce more broken edges when untying the loops in the dependency relationships of relays, which can lead to larger iterative setting calculation workloads. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and the dependency relationships in the multi-loop network are modeled. The model is translated into communicating sequential process (CSP) models, and an optimized setting calculation sequence for the multi-loop network is finally computed by tools. A five-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several further examples were calculated, with results indicating that the method effectively reduces the number of forced broken edges for protection setting calculation in multi-loop networks.
Application for managing model-based material properties for simulation-based engineering
Hoffman, Edward L [Alameda, CA
2009-03-03
An application for generating a property set associated with a constitutive model of a material includes a first program module adapted to receive test data associated with the material and to extract loading conditions from the test data. A material model driver is adapted to receive the loading conditions and a property set and operable in response to the loading conditions and the property set to generate a model response for the material. A numerical optimization module is adapted to receive the test data and the model response and operable in response to the test data and the model response to generate the property set.
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Multiple organ definition in CT using a Bayesian approach for 3D model fitting
NASA Astrophysics Data System (ADS)
Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.
1995-08-01
Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney-- that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.
A Model for Semantic Equivalence Discovery for Harmonizing Master Data
NASA Astrophysics Data System (ADS)
Piprani, Baba
IT projects often face the challenge of harmonizing metadata and data so as to have a "single" version of the truth. Determining equivalency of multiple data instances against the given type, or set of types, is mandatory in establishing master data legitimacy in a data set that contains multiple incarnations of instances belonging to the same semantic data record. The results of a real-life application define how measuring criteria and equivalence path determination were established via a set of "probes" in conjunction with a score-card approach. There is a need for a suite of supporting models to help determine master data equivalency towards entity resolution—including mapping models, transform models, selection models, match models, an audit and control model, a scorecard model, a rating model. An ORM schema defines the set of supporting models along with their incarnation into an attribute-based model as implemented in an RDBMS.
Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner
NASA Astrophysics Data System (ADS)
Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.
2015-02-01
Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
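The sketch below illustrates one simple form of backward elimination for a random forest, assuming scikit-learn: the least important 20% of predictors are dropped each round while the out-of-bag accuracy is tracked. The synthetic data, the 20% step and the stopping point are assumptions, not the StreamCat variables or the exact procedure evaluated above.

```python
# Illustrative sketch: backward elimination for a random forest classifier,
# dropping the least important 20% of predictors each round and tracking the
# out-of-bag (OOB) accuracy estimate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=100, n_informative=10,
                           random_state=0)
keep = np.arange(X.shape[1])               # indices of predictors still in play

while len(keep) >= 5:
    rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                                random_state=0).fit(X[:, keep], y)
    print(f"{len(keep):3d} predictors: OOB accuracy = {rf.oob_score_:.3f}")
    order = np.argsort(rf.feature_importances_)       # ascending importance
    keep = keep[order[int(0.2 * len(keep)):]]         # drop least important 20%
```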
ERIC Educational Resources Information Center
Stone, Gregory Ethan; Koskey, Kristin L. K.; Sondergeld, Toni A.
2011-01-01
Typical validation studies on standard setting models, most notably the Angoff and modified Angoff models, have ignored construct development, a critical aspect associated with all conceptualizations of measurement processes. Stone compared the Angoff and objective standard setting (OSS) models and found that Angoff failed to define a legitimate…
Answer Sets in a Fuzzy Equilibrium Logic
NASA Astrophysics Data System (ADS)
Schockaert, Steven; Janssen, Jeroen; Vermeir, Dirk; de Cock, Martine
Since its introduction, answer set programming has been generalized in many directions, to cater to the needs of real-world applications. As one of the most general “classical” approaches, answer sets of arbitrary propositional theories can be defined as models in the equilibrium logic of Pearce. Fuzzy answer set programming, on the other hand, extends answer set programming with the capability of modeling continuous systems. In this paper, we combine the expressiveness of both approaches, and define answer sets of arbitrary fuzzy propositional theories as models in a fuzzification of equilibrium logic. We show that the resulting notion of answer set is compatible with existing definitions, when the syntactic restrictions of the corresponding approaches are met. We furthermore locate the complexity of the main reasoning tasks at the second level of the polynomial hierarchy. Finally, as an illustration of its modeling power, we show how fuzzy equilibrium logic can be used to find strong Nash equilibria.
Issues and Methods for Standard-Setting.
ERIC Educational Resources Information Center
Hambleton, Ronald K.; And Others
Issues involved in standard setting along with methods for standard setting are reviewed, with specific reference to their relevance for criterion referenced testing. Definitions are given of continuum and state models, and traditional and normative standard setting procedures. Since continuum models are considered more appropriate for criterion…
In this study, we investigated how different meteorology data sets impact nitrogen fate and transport responses in the Soil and Water Assessment Tool (SWAT) model. We used two meteorology data sets: National Climatic Data Center (observed) and Mesoscale Model 5/Weather Research ...
Romañach, Stephanie; Watling, James I.; Fletcher, Robert J.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.
2014-01-01
Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on the role of uncertainty in future conditions on model predictions, models also may be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.
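Where the study uses Cohen's kappa to flag models whose presence/absence predictions disagree across climate data sets, a small hedged sketch of the kappa computation (scikit-learn assumed; the occurrence and suitability values below are invented):

```python
# Hedged sketch: Cohen's kappa for thresholded presence/absence predictions
# from a climate envelope model against observed occurrences (made-up data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

observed = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])          # field occurrences
suitability = np.array([0.9, 0.7, 0.4, 0.2, 0.6, 0.3, 0.5, 0.8, 0.1, 0.35])

predicted = (suitability >= 0.5).astype(int)                  # fixed threshold
print("Cohen's kappa:", cohen_kappa_score(observed, predicted))
```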
Finite State Models of Manned Systems: Validation, Simplification, and Extension.
1979-11-01
model a time set is needed. A time set is some set T together with a binary relation defined on T which linearly orders the set. If "model time" is ... discrete, so is T; continuous time is represented by a set corresponding to a subset of the non-negative real numbers. In the following discussion time ... defined as sequences, over time, of input and output values. The notion of sequences or trajectories is formalized as: A^T = {x | x : T → A}, B^T = {y | y : T → B}
Neiger, Brad L; Thackeray, Rosemary; Fagen, Michael C
2011-03-01
Priority setting is an important component of systematic planning in health promotion and also factors into the development of a comprehensive evaluation plan. The basic priority rating (BPR) model was introduced more than 50 years ago and includes criteria that should be considered in any priority setting approach (i.e., use of predetermined criteria, standardized comparisons, and a rubric that controls bias). Although the BPR model has provided basic direction in priority setting, it does not represent the broad array of data currently available to decision makers. Elements in the model also give more weight to the impact of communicable diseases compared with chronic diseases. For these reasons, several modifications are recommended to improve the BPR model and to better assist health promotion practitioners in the priority setting process. The authors also suggest a new name, BPR 2.0, to represent this revised model.
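As one hedged illustration, the classic Hanlon-style BPR combines problem size, seriousness, intervention effectiveness, and a PEARL feasibility factor on a 0-100 scale; the component scales below are assumptions from the traditional formulation, and the authors' BPR 2.0 revision modifies this scheme.

```python
# Hedged sketch of the classic basic priority rating; the exact form of the
# authors' BPR 2.0 revision may differ from this traditional statement.
def basic_priority_rating(size, seriousness, effectiveness, pearl=1):
    """size: 0-10, seriousness: 0-20, effectiveness: 0-10, pearl: 0 or 1."""
    return (size + seriousness) * effectiveness / 3.0 * pearl

# Rank two hypothetical health problems on the 0-100 BPR scale.
problems = {"problem A": (6, 14, 7, 1), "problem B": (9, 8, 5, 1)}
for name, args in sorted(problems.items(),
                         key=lambda kv: -basic_priority_rating(*kv[1])):
    print(name, round(basic_priority_rating(*args), 1))
```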
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
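A minimal sketch of the max-min estimation idea described above, assuming SciPy and an invented one-parameter exponential model with made-up data sets and error limits:

```python
# Hedged sketch: choose model parameters that maximize the smallest margin of
# requirement compliance across several validation data sets.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 5.0, 50)
datasets = [np.exp(-0.8 * t) + 0.02 * np.random.default_rng(i).normal(size=t.size)
            for i in range(3)]                     # three "experiments"
limits = [0.08, 0.08, 0.10]                        # admissible max abs errors

def margins(theta):
    a, = theta
    pred = np.exp(-a * t)                          # invented empirical model
    return np.array([lim - np.max(np.abs(pred - y))
                     for y, lim in zip(datasets, limits)])

# Maximize the smallest margin (equivalently, minimize its negative).
res = differential_evolution(lambda th: -margins(th).min(),
                             bounds=[(0.1, 3.0)], seed=0)
print("estimate:", res.x, "worst-case margin:", margins(res.x).min())
```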
Laminar Diffusion Flame Studies (Ground- and Space-Based Studies)
NASA Technical Reports Server (NTRS)
Dai, Z.; El-Leathy, A. M.; Lin, K.-C.; Sunderland, P. B.; Xu, F.; Faeth, G. M.; Urban, D. L. (Technical Monitor); Yuan, Z.-G. (Technical Monitor)
2000-01-01
Laminar diffusion flames are of interest because they provide model flame systems that are far more tractable for analysis and experiments than more practical turbulent diffusion flames. Certainly, understanding flame processes within laminar diffusion flames must precede understanding these processes in more complex turbulent diffusion flames. In addition, many properties of laminar diffusion flames are directly relevant to turbulent diffusion flames using laminar flamelet concepts. Laminar jet diffusion flame shapes (luminous flame boundaries) have been of particular interest since the classical study of Burke and Schumann because they are a simple nonintrusive measurement that is convenient for evaluating flame structure predictions. Thus, consideration of laminar flame shapes is undertaken in the following, emphasizing conditions where effects of gravity are small, due to the importance of such conditions to practical applications. Another class of interesting properties of laminar diffusion flames are their laminar soot and smoke point properties (i.e., the flame length, fuel flow rate, characteristic residence time, etc., at the onset of soot appearance in the flame (the soot point) and the onset of soot emissions from the flame (the smoke point)). These are useful observable soot properties of nonpremixed flames because they provide a convenient means to rate several aspects of flame sooting properties: the relative propensity of various fuels to produce soot in flames; the relative effects of fuel structure, fuel dilution, flame temperature and ambient pressure on the soot appearance and emission properties of flames; the relative levels of continuum radiation from soot in flames; and effects of the intrusion of gravity (or buoyant motion) on emissions of soot from flames. An important motivation to define conditions for soot emissions is that observations of laminar jet diffusion flames in critical environments, e.g., space shuttle and space station facilities, cannot involve soot emitting flames in order to ensure that test chamber windows used for experimental observations are not blocked by soot deposits, thereby compromising unusually valuable experimental results. Another important motivation to define conditions where soot is present in diffusion flames is that flame chemistry, transport and radiation properties are vastly simplified when soot is absent, making such flames far more tractable for detailed numerical simulations than corresponding soot-containing flames. Motivated by these observations, the objectives of this phase of the investigation were as follows: (1) Observe flame-sheet shapes (the location of the reaction zone near phi=1) of nonluminous (soot free) laminar jet diffusion flames in both still and coflowing air and use these results to develop simplified models of flame-sheet shapes for these conditions; (2) Observe luminous flame boundaries of luminous (soot-containing) laminar jet diffusion flames in both still and coflowing air and use these results to develop simplified models of luminous flame boundaries for these conditions. In order to fix ideas here, maximum luminous flame boundaries at the laminar smoke point conditions were sought, i.e., luminous flame boundaries at the laminar smoke point; (3) Observe effects of coflow on laminar soot- and smoke-point conditions because coflow has been proposed as a means to control soot emissions and minimize the presence of soot in diffusion flames.
Harrison, Luke B; Larsson, Hans C E
2015-03-01
Likelihood-based methods are commonplace in phylogenetic systematics. Although much effort has been directed toward likelihood-based models for molecular data, comparatively less work has addressed models for discrete morphological character (DMC) data. Among-character rate variation (ACRV) may confound phylogenetic analysis, but there have been few analyses of the magnitude and distribution of rate heterogeneity among DMCs. Using 76 data sets covering a range of plants, invertebrate, and vertebrate animals, we used a modified version of MrBayes to test equal, gamma-distributed and lognormally distributed models of ACRV, integrating across phylogenetic uncertainty using Bayesian model selection. We found that in approximately 80% of data sets, unequal-rates models outperformed equal-rates models, especially among larger data sets. Moreover, although most data sets were equivocal, more data sets favored the lognormal rate distribution relative to the gamma rate distribution, lending some support for more complex character correlations than in molecular data. Parsimony estimation of the underlying rate distributions in several data sets suggests that the lognormal distribution is preferred when there are many slowly evolving characters and fewer quickly evolving characters. The commonly adopted four rate category discrete approximation used for molecular data was found to be sufficient to approximate a gamma rate distribution with discrete characters. However, among the two data sets tested that favored a lognormal rate distribution, the continuous distribution was better approximated with at least eight discrete rate categories. Although the effect of rate model on the estimation of topology was difficult to assess across all data sets, it appeared relatively minor between the unequal-rates models for the one data set examined carefully. As in molecular analyses, we argue that researchers should test and adopt the most appropriate model of rate variation for the data set in question. As discrete characters are increasingly used in more sophisticated likelihood-based phylogenetic analyses, it is important that these studies be built on the most appropriate and carefully selected underlying models of evolution. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
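A hedged sketch of the mean-of-category discrete approximation to a mean-one gamma distribution of among-character rates (SciPy assumed), the four-category scheme the study found adequate for gamma-distributed ACRV:

```python
# Hedged sketch: discrete gamma rate categories for a mean-one rate distribution.
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha, k=4):
    # Equal-probability category boundaries of Gamma(shape=alpha, scale=1/alpha)
    bounds = gamma.ppf(np.linspace(0.0, 1.0, k + 1), a=alpha, scale=1.0 / alpha)
    # Mean rate within each category, via the Gamma(alpha + 1) CDF identity
    cdf_up = gamma.cdf(bounds, a=alpha + 1.0, scale=1.0 / alpha)
    return k * np.diff(cdf_up)

rates = discrete_gamma_rates(alpha=0.5, k=4)
print(rates, rates.mean())   # category rates; the mean stays at 1
```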
Fully Characterizing Axially Symmetric Szekeres Models with Three Data Sets
NASA Astrophysics Data System (ADS)
Célérier, Marie-Noëlle; Mishra, Priti; Singh, Tejinder P.
2015-01-01
Inhomogeneous exact solutions of General Relativity with zero cosmological constant have been used in the literature to challenge the ΛCDM model. From one-patch Lemaître-Tolman-Bondi (LTB) models to axially symmetric quasi-spherical Szekeres (QSS) Swiss-cheese models, some of these solutions are able to reproduce the cosmological data to good accuracy. It has been shown in the literature that a zero Λ LTB model with a central observer can be fully determined by two data sets. We demonstrate that an axially symmetric zero Λ QSS model with an observer located at the origin can be fully reconstructed from three data sets: number counts, luminosity distance and redshift drift. This is a first step towards a future demonstration involving five data sets and the most general Szekeres model.
Impacts of uncertainties in European gridded precipitation observations on regional climate analysis
Gobiet, Andreas
2016-01-01
ABSTRACT Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio‐temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan‐European data sets and a set that combines eight very high‐resolution station‐based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post‐processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small‐scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate‐mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments. PMID:28111497
Prein, Andreas F; Gobiet, Andreas
2017-01-01
Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments.
Upper canine inclination influences the aesthetics of a smile.
Bothung, C; Fischer, K; Schiffer, H; Springer, I; Wolfart, S
2015-02-01
This study investigated which angle of canine inclination (angle between canine tooth axis (CA-line) and the line between the lateral canthus and the ipsilateral labial angle (EM-line)) is perceived to be most attractive in a smile. The second objective was to determine whether laymen and dental experts share the same opinion. A Q-sort assessment was performed with 48 posed smile photographs to obtain two models of neutral facial attractiveness. Two sets of images (1 male model set, 1 female model set), each containing seven images with incrementally altered canine and posterior teeth inclinations, were generated. The images were ranked for attractiveness by three groups (61 laymen, 59 orthodontists, 60 dentists). The images with 0° inclination, that is CA-line (maxillary canine axis) parallel to EM-line (the line formed by the lateral canthus and the ipsilateral corner of the mouth) (male model set: 54·4%; female model set: 38·9%), or -5° (inward) inclination (male model set: 20%; female model set: 29·4%) were perceived to be most attractive within each set. Images showing inward canine inclinations were regarded as more attractive than those with outward inclinations. Dental experts and laymen agreed in their aesthetic judgements. Smiles were perceived to be most attractive when the upper canine tooth axis was parallel to the EM-line. In reconstructive or orthodontic therapy, it is thus important to incline canines more inwardly than outwardly. © 2014 John Wiley & Sons Ltd.
Evaluating Gene Set Enrichment Analysis Via a Hybrid Data Model
Hua, Jianping; Bittner, Michael L.; Dougherty, Edward R.
2014-01-01
Gene set enrichment analysis (GSA) methods have been widely adopted by biological labs to analyze data and generate hypotheses for validation. Most of the existing comparison studies focus on whether the existing GSA methods can produce accurate P-values; however, practitioners are often more concerned with the correct gene-set ranking generated by the methods. The ranking performance is closely related to two critical goals associated with GSA methods: the ability to reveal biological themes and ensuring reproducibility, especially for small-sample studies. We have conducted a comprehensive simulation study focusing on the ranking performance of seven representative GSA methods. We overcome the limitation on the availability of real data sets by creating hybrid data models from existing large data sets. To build the data model, we pick a master gene from the data set to form the ground truth and artificially generate the phenotype labels. Multiple hybrid data models can be constructed from one data set and multiple data sets of smaller sizes can be generated by resampling the original data set. This approach enables us to generate a large batch of data sets to check the ranking performance of GSA methods. Our simulation study reveals that for the proposed data model, the Q2 type GSA methods have in general better performance than other GSA methods and the global test has the most robust results. The properties of a data set play a critical role in the performance. For the data sets with highly connected genes, all GSA methods suffer significantly in performance. PMID:24558298
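A hedged sketch of the hybrid data model construction described above, with random stand-in expression values and one simple choice of labeling rule (a median split on the master gene plus label noise):

```python
# Hedged sketch: pick a master gene, derive synthetic phenotype labels from it,
# and resample small data sets for repeated ranking experiments.
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(5000, 200))        # genes x samples (synthetic stand-in)
master = 42                                # index of the chosen master gene

# Phenotype label: above/below the master gene's median, plus 10% label noise.
labels = (expr[master] > np.median(expr[master])).astype(int)
flip = rng.random(labels.size) < 0.1
labels[flip] = 1 - labels[flip]

def resample_small_study(n_per_class=10):
    idx = np.concatenate([rng.choice(np.where(labels == c)[0], n_per_class,
                                     replace=False) for c in (0, 1)])
    return expr[:, idx], labels[idx]

X_small, y_small = resample_small_study()
print(X_small.shape, y_small.mean())
```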
Fiori, Simone
2007-01-01
Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641
Influence of polygonal wear of railway wheels on the wheel set axle stress
NASA Astrophysics Data System (ADS)
Wu, Xingwen; Chi, Maoru; Wu, Pingbo
2015-11-01
A coupled vehicle/track dynamic model with a flexible wheel set was developed to investigate the effects of polygonal wear on the dynamic stresses of the wheel set axle. In the model, the railway vehicle was modelled by rigid multibody dynamics. The wheel set was established by the finite element method to analyse the high-frequency oscillation and dynamic stress of the wheel set axle induced by the polygonal wear, based on the modal stress recovery method. The slab track model was taken into account, in which the rail was described by the Timoshenko beam and three-dimensional solid finite elements were employed to establish the concrete slab. Furthermore, the modal superposition method was adopted to calculate the dynamic response of the track. The wheel/rail normal forces and the tangent forces were, respectively, determined by the Hertz nonlinear contact theory and the Shen-Hedrick-Elkins model. Using the coupled vehicle/track dynamic model, the dynamic stresses of the wheel set axle with consideration of the ideal polygonal wear and measured polygonal wear were investigated. The results show that the amplitude of the wheel/rail normal forces and the dynamic stress of the wheel set axle increase as the vehicle speed rises. Moreover, the impact loads induced by the polygonal wear could excite resonance of the wheel set axle. In the resonance region, the amplitude of the dynamic stress for the wheel set axle would increase considerably compared with normal conditions.
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
NASA Astrophysics Data System (ADS)
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
Toropov, A A; Toropova, A P; Raska, I
2008-04-01
Simplified molecular input line entry system (SMILES) has been utilized in constructing quantitative structure-property relationships (QSPR) for octanol/water partition coefficient of vitamins and organic compounds of different classes by optimal descriptors. Statistical characteristics of the best model (vitamins) are the following: n=17, R(2)=0.9841, s=0.634, F=931 (training set); n=7, R(2)=0.9928, s=0.773, F=690 (test set). Using this approach for modeling octanol/water partition coefficient for a set of organic compounds gives a model that is statistically characterized by n=69, R(2)=0.9872, s=0.156, F=5184 (training set) and n=70, R(2)=0.9841, s=0.179, F=4195 (test set).
Halpern, Yoni; Jernite, Yacine; Shapiro, Nathan I.; Nathanson, Larry A.
2017-01-01
Objective To demonstrate the incremental benefit of using free text data in addition to vital sign and demographic data to identify patients with suspected infection in the emergency department. Methods This was a retrospective, observational cohort study performed at a tertiary academic teaching hospital. All consecutive ED patient visits between 12/17/08 and 2/17/13 were included. No patients were excluded. The primary outcome measure was infection diagnosed in the emergency department defined as a patient having an infection related ED ICD-9-CM discharge diagnosis. Patients were randomly allocated to train (64%), validate (20%), and test (16%) data sets. After preprocessing the free text using bigram and negation detection, we built four models to predict infection, incrementally adding vital signs, chief complaint, and free text nursing assessment. We used two different methods to represent free text: a bag of words model and a topic model. We then used a support vector machine to build the prediction model. We calculated the area under the receiver operating characteristic curve to compare the discriminatory power of each model. Results A total of 230,936 patient visits were included in the study. Approximately 14% of patients had the primary outcome of diagnosed infection. The area under the ROC curve (AUC) for the vitals model, which used only vital signs and demographic data, was 0.67 for the training data set, 0.67 for the validation data set, and 0.67 (95% CI 0.65–0.69) for the test data set. The AUC for the chief complaint model which also included demographic and vital sign data was 0.84 for the training data set, 0.83 for the validation data set, and 0.83 (95% CI 0.81–0.84) for the test data set. The best performing methods made use of all of the free text. In particular, the AUC for the bag-of-words model was 0.89 for training data set, 0.86 for the validation data set, and 0.86 (95% CI 0.85–0.87) for the test data set. The AUC for the topic model was 0.86 for the training data set, 0.86 for the validation data set, and 0.85 (95% CI 0.84–0.86) for the test data set. Conclusion Compared to previous work that only used structured data such as vital signs and demographic information, utilizing free text drastically improves the discriminatory ability (increase in AUC from 0.67 to 0.86) of identifying infection. PMID:28384212
Horng, Steven; Sontag, David A; Halpern, Yoni; Jernite, Yacine; Shapiro, Nathan I; Nathanson, Larry A
2017-01-01
To demonstrate the incremental benefit of using free text data in addition to vital sign and demographic data to identify patients with suspected infection in the emergency department. This was a retrospective, observational cohort study performed at a tertiary academic teaching hospital. All consecutive ED patient visits between 12/17/08 and 2/17/13 were included. No patients were excluded. The primary outcome measure was infection diagnosed in the emergency department defined as a patient having an infection related ED ICD-9-CM discharge diagnosis. Patients were randomly allocated to train (64%), validate (20%), and test (16%) data sets. After preprocessing the free text using bigram and negation detection, we built four models to predict infection, incrementally adding vital signs, chief complaint, and free text nursing assessment. We used two different methods to represent free text: a bag of words model and a topic model. We then used a support vector machine to build the prediction model. We calculated the area under the receiver operating characteristic curve to compare the discriminatory power of each model. A total of 230,936 patient visits were included in the study. Approximately 14% of patients had the primary outcome of diagnosed infection. The area under the ROC curve (AUC) for the vitals model, which used only vital signs and demographic data, was 0.67 for the training data set, 0.67 for the validation data set, and 0.67 (95% CI 0.65-0.69) for the test data set. The AUC for the chief complaint model which also included demographic and vital sign data was 0.84 for the training data set, 0.83 for the validation data set, and 0.83 (95% CI 0.81-0.84) for the test data set. The best performing methods made use of all of the free text. In particular, the AUC for the bag-of-words model was 0.89 for training data set, 0.86 for the validation data set, and 0.86 (95% CI 0.85-0.87) for the test data set. The AUC for the topic model was 0.86 for the training data set, 0.86 for the validation data set, and 0.85 (95% CI 0.84-0.86) for the test data set. Compared to previous work that only used structured data such as vital signs and demographic information, utilizing free text drastically improves the discriminatory ability (increase in AUC from 0.67 to 0.86) of identifying infection.
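A minimal sketch of the kind of pipeline described above, assuming scikit-learn and pandas, with a handful of invented records: bigram bag-of-words features from the free text combined with vital signs and fed to a linear SVM, scored by ROC AUC. Real negation detection and the topic-model variant are omitted.

```python
# Hedged sketch: vitals + bigram bag-of-words text features -> linear SVM -> AUC.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

df = pd.DataFrame({
    "heart_rate": [110, 72, 95, 60, 120, 80],
    "temp_c":     [38.9, 36.6, 38.2, 36.5, 39.1, 36.8],
    "note": ["fever and productive cough", "ankle sprain playing soccer",
             "dysuria suspect uti", "laceration left hand",
             "cough fever chills", "knee pain no trauma"],
    "infection": [1, 0, 1, 0, 1, 0],
})

features = ColumnTransformer([
    ("vitals", "passthrough", ["heart_rate", "temp_c"]),
    ("text", TfidfVectorizer(ngram_range=(1, 2)), "note"),
])
model = Pipeline([("features", features), ("svm", LinearSVC())])
model.fit(df, df["infection"])
scores = model.decision_function(df)
print("in-sample AUC:", roc_auc_score(df["infection"], scores))
```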
Target modelling for SAR image simulation
NASA Astrophysics Data System (ADS)
Willis, Chris J.
2014-10-01
This paper examines target models that might be used in simulations of Synthetic Aperture Radar imagery. We examine the basis for scattering phenomena in SAR, and briefly review the Swerling target model set, before considering extensions to this set discussed in the literature. Methods for simulating and extracting parameters for the extended Swerling models are presented. It is shown that in many cases the more elaborate extended Swerling models can be represented, to a high degree of fidelity, by simpler members of the model set. Further, it is shown that it is quite unlikely that these extended models would be selected when fitting models to typical data samples.
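A hedged sketch of sampling radar cross-section values from the classic Swerling fluctuation families (cases 1/2 exponential, cases 3/4 chi-squared with four degrees of freedom); the extended models examined in the paper generalize these distributions and are not reproduced here.

```python
# Hedged sketch: draw RCS samples for the classic Swerling fluctuation models.
import numpy as np

rng = np.random.default_rng(0)
mean_rcs = 5.0   # mean RCS in m^2 (arbitrary)

swerling_1_2 = rng.exponential(mean_rcs, size=10000)                  # chi-sq, 2 DOF
swerling_3_4 = rng.gamma(shape=2.0, scale=mean_rcs / 2.0, size=10000)  # chi-sq, 4 DOF

for name, x in [("Swerling 1/2", swerling_1_2), ("Swerling 3/4", swerling_3_4)]:
    print(name, "mean:", x.mean().round(2), "std:", x.std().round(2))
```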
Improving predicted protein loop structure ranking using a Pareto-optimality consensus method.
Li, Yaohang; Rata, Ionel; Chiu, See-wing; Jakobsson, Eric
2010-07-20
Accurate protein loop structure models are important to understand functions of many proteins. Identifying the native or near-native models by distinguishing them from the misfolded ones is a critical step in protein loop structure prediction. We have developed a Pareto Optimal Consensus (POC) method, which is a consensus model ranking approach to integrate multiple knowledge- or physics-based scoring functions. The procedure of identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on the fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops 4 to 12 residues in length using a functional space composed of several carefully-selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of approximately 20% or less of the overall decoys in a set, have a good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% fewer false positives in distinguishing the native conformation, identifying a near-native model (RMSD < 0.5A from the native) as top-ranked, and selecting at least one near-native model in the top-5-ranked models, respectively. Similar effectiveness of the POC method is also found in the decoy sets from membrane protein loops. Furthermore, the POC method outperforms the other popularly-used consensus strategies in model ranking, such as rank-by-number, rank-by-rank, rank-by-vote, and regression-based methods. By integrating multiple knowledge- and physics-based scoring functions based on Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the other ones within a loop model set.
Improving predicted protein loop structure ranking using a Pareto-optimality consensus method
2010-01-01
Background Accurate protein loop structure models are important to understand functions of many proteins. Identifying the native or near-native models by distinguishing them from the misfolded ones is a critical step in protein loop structure prediction. Results We have developed a Pareto Optimal Consensus (POC) method, which is a consensus model ranking approach to integrate multiple knowledge- or physics-based scoring functions. The procedure of identifying the models of best quality in a model set includes: 1) identifying the models at the Pareto optimal front with respect to a set of scoring functions, and 2) ranking them based on the fuzzy dominance relationship to the rest of the models. We apply the POC method to a large number of decoy sets for loops 4 to 12 residues in length using a functional space composed of several carefully-selected scoring functions: Rosetta, DOPE, DDFIRE, OPLS-AA, and a triplet backbone dihedral potential developed in our lab. Our computational results show that the sets of Pareto-optimal decoys, which are typically composed of ~20% or less of the overall decoys in a set, have a good coverage of the best or near-best decoys in more than 99% of the loop targets. Compared to the individual scoring function yielding best selection accuracy in the decoy sets, the POC method yields 23%, 37%, and 64% fewer false positives in distinguishing the native conformation, identifying a near-native model (RMSD < 0.5A from the native) as top-ranked, and selecting at least one near-native model in the top-5-ranked models, respectively. Similar effectiveness of the POC method is also found in the decoy sets from membrane protein loops. Furthermore, the POC method outperforms the other popularly-used consensus strategies in model ranking, such as rank-by-number, rank-by-rank, rank-by-vote, and regression-based methods. Conclusions By integrating multiple knowledge- and physics-based scoring functions based on Pareto optimality and fuzzy dominance, the POC method is effective in distinguishing the best loop models from the other ones within a loop model set. PMID:20642859
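A minimal sketch of step 1 of the POC procedure, identifying the Pareto-optimal decoys under a set of scoring functions (lower scores assumed better; the scores here are random placeholders, and the fuzzy-dominance ranking of step 2 is omitted):

```python
# Hedged sketch: keep the decoys that are not dominated under several scores.
import numpy as np

def pareto_front(scores):
    """scores: (n_models, n_scoring_functions); returns boolean mask."""
    n = scores.shape[0]
    on_front = np.ones(n, dtype=bool)
    for i in range(n):
        # j dominates i if j is no worse everywhere and strictly better somewhere
        dominated_by = np.all(scores <= scores[i], axis=1) & \
                       np.any(scores < scores[i], axis=1)
        if dominated_by.any():
            on_front[i] = False
    return on_front

rng = np.random.default_rng(1)
decoy_scores = rng.normal(size=(200, 5))   # e.g. Rosetta, DOPE, DDFIRE, ...
mask = pareto_front(decoy_scores)
print("Pareto-optimal decoys:", mask.sum(), "of", len(mask))
```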
Comparison of eigenvectors for coupled seismo-electromagnetic layered-Earth modelling
NASA Astrophysics Data System (ADS)
Grobbe, N.; Slob, E. C.; Thorbecke, J. W.
2016-07-01
We study the accuracy and numerical stability of three eigenvector sets for modelling the coupled poroelastic and electromagnetic layered-Earth response. We use a known eigenvector set, its flux-normalized version and a newly derived flux-normalized set. The new set is chosen such that the system is properly uncoupled when the coupling between the poroelastic and electromagnetic fields vanishes. We carry out two different numerical stability tests: the first test focuses on the internal system, eigenvector and eigenvalue consistency; the second test investigates the stability and preciseness of the flux-normalized systems by looking at identity relations. We find that the known set shows the largest deviation for both tests, whereas the new set performs best. In two additional numerical modelling experiments, these numerical inaccuracies are shown to generate numerical noise levels comparable to small signals, such as signals coming from the important interface conversion responses, especially when the coupling coefficient is small. When coupling vanishes completely, the known set does not produce proper results. The new set produces numerically stable and accurate results in all situations. We therefore strongly recommend to use this newly derived set for future layered-Earth seismo-electromagnetic modelling experiments.
NASA Technical Reports Server (NTRS)
Gore, Brian Francis; Hooey, Becky Lee; Haan, Nancy; Socash, Connie; Mahlstedt, Eric; Foyle, David C.
2013-01-01
The Closely Spaced Parallel Operations (CSPO) scenario is a complex, human performance model scenario that tested alternate operator roles and responsibilities in response to a series of off-nominal operations on approach and landing (see Gore, Hooey, Mahlstedt, Foyle, 2013). The model links together the procedures, equipment, crewstation, and external environment to produce predictions of operator performance in response to Next Generation system designs, like those expected in the National Airspace's NextGen concepts. The task analysis that is contained in the present report comes from the task analysis window in the MIDAS software. These tasks link definitions and states for equipment components and environmental features as well as operational contexts. The current task analysis culminated in 3300 tasks that included over 1000 Subject Matter Expert (SME)-vetted, re-usable procedural sets for three critical phases of flight: the Descent, Approach, and Land procedural sets (see Gore et al., 2011 for a description of the development of the tasks included in the model; Gore, Hooey, Mahlstedt, Foyle, 2013 for a description of the model and its results; Hooey, Gore, Mahlstedt, Foyle, 2013 for a description of the guidelines that were generated from the model's results; Gore, Hooey, Foyle, 2012 for a description of the model's implementation and its settings). The rollout, after-landing checks, taxi-to-gate, and arrive-at-gate networks illustrated in Figure 1 were not used in the approach and divert scenarios exercised. The other networks in Figure 1 set up appropriate context settings for the flight deck. The current report presents the model's task decomposition from the top (highest) level and decomposes it to finer-grained levels. The first task that is completed by the model is to set all of the initial settings for the scenario runs included in the model (network 75 in Figure 1). This initialization process also resets the CAD graphic files contained within MIDAS, as well as the embedded operator models that comprise MIDAS. Following the initial settings, the model progresses to begin the first tasks required of the two flight deck operators, the Captain (CA) and the First Officer (FO). The task sets initialize operator-specific settings prior to loading all of the alerts, probes, and other events that occur in the scenario. As a note, the CA and FO were terms used in developing this model, but the CA can also be thought of as the Pilot Flying (PF), while the FO can be considered the Pilot-Not-Flying (PNF) or Pilot Monitoring (PM). As such, the document refers to the operators as PF/CA and PNF/FO respectively.
Fatemi, Mohammad Hossein; Ghorbanzad'e, Mehdi
2009-11-01
Quantitative structure-property relationship models for the prediction of the nematic transition temperature (T(N)) were developed by using multilinear regression analysis and a feedforward artificial neural network (ANN). A collection of 42 thermotropic liquid crystals was chosen as the data set. The data set was divided into three sets: a training set, an internal test set, and an external test set. The training and internal test sets were used for ANN model development, and the external test set was used for evaluation of the predictive power of the model. In order to build the models, a set of six descriptors was selected by the best multilinear regression procedure of the CODESSA program. These descriptors were: atomic charge weighted partial negatively charged surface area, relative negative charged surface area, polarity parameter/square distance, minimum most negative atomic partial charge, molecular volume, and the A component of moment of inertia, which encode geometrical and electronic characteristics of molecules. These descriptors were used as inputs to the ANN. The optimized ANN model had a 6:6:1 topology. The standard errors in the calculation of T(N) for the training, internal, and external test sets using the ANN model were 1.012, 4.910, and 4.070, respectively. To further evaluate the ANN model, a cross-validation test was performed, which produced the statistic Q(2) = 0.9796 and a standard deviation of 2.67 based on the predicted residual sum of squares. Also, a diversity test was performed to ensure the model's stability and prove its predictive capability. The obtained results reveal the suitability of ANN for the prediction of T(N) for liquid crystals using molecular structural descriptors.
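A hedged sketch of a 6:6:1 feed-forward network in scikit-learn; the six descriptor columns and transition temperatures below are random placeholders rather than the CODESSA descriptors used in the study.

```python
# Hedged sketch: a 6-input, 6-hidden-node, 1-output regression network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(42, 6))                     # 42 liquid crystals, 6 descriptors
y = 300 + 40 * X[:, 0] - 25 * X[:, 3] + rng.normal(scale=5, size=42)

train, test = slice(0, 30), slice(30, 42)        # crude external test split
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000,
                                   random_state=0))
model.fit(X[train], y[train])
resid = model.predict(X[test]) - y[test]
print("external test standard error:", resid.std(ddof=1).round(2))
```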
Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model
2017-03-01
set of sectors. The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction...We propose two optimization models. The first, the Trade Sanction Inoperability Input-output Model (TS-IIM), selects the sector or set of sectors that...Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection . Unpublished doctoral dissertation
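As context, models in this family build on the inoperability input-output relation q = (I − A*)⁻¹ c*; below is a hedged sketch with an invented interdependency matrix and perturbation (the thesis's TS-IIM sector-selection optimization is not shown):

```python
# Hedged sketch: base inoperability input-output calculation, q = (I - A*)^-1 c*.
import numpy as np

A_star = np.array([[0.05, 0.20, 0.10],     # normalized interdependency matrix (invented)
                   [0.10, 0.05, 0.30],
                   [0.20, 0.10, 0.05]])
c_star = np.array([0.10, 0.00, 0.00])      # direct perturbation to sector 1 (invented)

q = np.linalg.solve(np.eye(3) - A_star, c_star)
print("sector inoperabilities:", q.round(4))
```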
Rosello, Alicia; Horner, Carolyne; Hopkins, Susan; Hayward, Andrew C; Deeny, Sarah R
2017-02-01
OBJECTIVES (1) To systematically search for all dynamic mathematical models of infectious disease transmission in long-term care facilities (LTCFs); (2) to critically evaluate models of interventions against antimicrobial resistance (AMR) in this setting; and (3) to develop a checklist for hospital epidemiologists and policy makers by which to distinguish good quality models of AMR in LTCFs. METHODS The CINAHL, EMBASE, Global Health, MEDLINE, and Scopus databases were systematically searched for studies of dynamic mathematical models set in LTCFs. Models of interventions targeting methicillin-resistant Staphylococcus aureus in LTCFs were critically assessed. Using this analysis, we developed a checklist for good quality mathematical models of AMR in LTCFs. RESULTS AND DISCUSSION Overall, 18 papers described mathematical models that characterized the spread of infectious diseases in LTCFs, but no models of AMR in gram-negative bacteria in this setting were described. Future models of AMR in LTCFs require a more robust methodology (ie, formal model fitting to data and validation), greater transparency regarding model assumptions, setting-specific data, realistic and current setting-specific parameters, and inclusion of movement dynamics between LTCFs and hospitals. CONCLUSIONS Mathematical models of AMR in gram-negative bacteria in the LTCF setting, where these bacteria are increasingly becoming prevalent, are needed to help guide infection prevention and control. Improvements are required to develop outputs of sufficient quality to help guide interventions and policy in the future. We suggest a checklist of criteria to be used as a practical guide to determine whether a model is robust enough to test policy. Infect Control Hosp Epidemiol 2017;38:216-225.
Analyzing ROC curves using the effective set-size model
NASA Astrophysics Data System (ADS)
Samuelson, Frank W.; Abbey, Craig K.; He, Xin
2018-03-01
The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards images as if they were an effective number (M*) of searchable locations, where the observer treats each location as a location-known-exactly detection task with signals having average detectability d'. The model assumes a rational observer behaves as if he searches an effective number of independent locations and follows signal detection theory at each location. Thus the location-known-exactly detectability (d') and the effective number of independent locations M* fully characterize search performance. In this model the image rating in a single-response task is assumed to be the maximum response that the observer would assign to these many locations. The model has been used by a number of other researchers, and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform. Tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets. The data sets include radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated. We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical imaging tasks.
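A minimal simulation sketch of the effective set-size reading model, assuming arbitrary values of M* and d': each image rating is the maximum of M* location responses, with one location shifted by d' on signal-present images.

```python
# Hedged sketch: simulate max-of-M* ratings and the resulting empirical AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
m_star, d_prime, n_images = 8, 2.0, 5000

noise = rng.normal(size=(n_images, m_star)).max(axis=1)   # signal-absent ratings
sig = rng.normal(size=(n_images, m_star))
sig[:, 0] += d_prime                                       # the signal location
signal = sig.max(axis=1)                                   # signal-present ratings

ratings = np.concatenate([noise, signal])
truth = np.concatenate([np.zeros(n_images), np.ones(n_images)])
print("empirical AUC for M*=8, d'=2:", roc_auc_score(truth, ratings).round(3))
```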
Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.
Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David
2018-07-01
To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
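A hedged sketch of some ingredients of the automatic pipeline (scikit-learn assumed, random stand-in data): PCA reduction of the dose-volume histogram, a logistic fit with clinical factors, and a BIC score. The VIF screening, genetic-algorithm search, bootstrap repeats, and the ordinal (rather than binary) regression of the paper are omitted.

```python
# Hedged sketch: DVH -> principal components, + clinical factors -> logistic fit -> BIC.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
n = 345
dvh = np.cumsum(rng.random((n, 100)), axis=1)   # stand-in dose-volume histograms
clinical = rng.normal(size=(n, 3))              # stand-in clinical factors
y = rng.integers(0, 2, size=n)                  # stand-in cystitis outcome

pcs = PCA(n_components=8).fit_transform(dvh)    # 8 components, as in the paper
X = np.hstack([clinical, pcs])

model = LogisticRegression(max_iter=1000).fit(X, y)
loglik = -log_loss(y, model.predict_proba(X)[:, 1], normalize=False)
k = X.shape[1] + 1                              # coefficients + intercept
bic = k * np.log(n) - 2.0 * loglik
print("BIC:", round(bic, 1))
```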
Gene set analysis using variance component tests.
Huang, Yen-Tsung; Lin, Xihong
2013-06-28
Gene set analyses have become increasingly important in genomic research, as many complex diseases are contributed jointly by alterations of numerous genes. Genes often coordinate together as a functional repertoire, e.g., a biological pathway/network and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects by assuming a common distribution for regression coefficients in multivariate linear regression models, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analyses method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and global test in both simulation and a diabetes microarray data.
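A hedged sketch of a permutation-based gene set test in this spirit, using the sum of squared per-gene association statistics as a simplified stand-in for the TEGS variance component statistic and its scaled chi-square approximation:

```python
# Hedged sketch: permutation test of an exposure effect on a correlated gene set.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes = 60, 25
x = rng.integers(0, 2, size=n_samples)              # exposure status (yes/no)
expr = rng.normal(size=(n_samples, n_genes))
expr[x == 1, :5] += 0.8                             # 5 genes truly affected

def set_statistic(x, expr):
    xc = x - x.mean()
    num = xc @ (expr - expr.mean(axis=0))           # per-gene score numerators
    return float(np.sum(num ** 2))

obs = set_statistic(x, expr)
perm = np.array([set_statistic(rng.permutation(x), expr) for _ in range(2000)])
print("permutation p-value:", (1 + np.sum(perm >= obs)) / (1 + perm.size))
```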
Should I use that model? Assessing the transferability of ecological models to new settings
Analysts and scientists frequently apply existing models that estimate ecological endpoints or simulate ecological processes to settings where the models have not been used previously, and where data to parameterize and validate the model may be sparse. Prior to transferring an ...
Spatial frequency dependence of target signature for infrared performance modeling
NASA Astrophysics Data System (ADS)
Du Bosq, Todd; Olson, Jeffrey
2011-05-01
The standard model used to describe the performance of infrared imagers is the U.S. Army imaging system target acquisition model, based on the targeting task performance metric. The model is characterized by the resolution and sensitivity of the sensor as well as the contrast and task difficulty of the target set. The contrast of the target is defined as a spatial average contrast. The model treats the contrast of the target set as spatially white, or constant, over the bandlimit of the sensor. Previous experiments have shown that this assumption is valid under normal conditions and typical target sets. However, outside of these conditions, the treatment of target signature can become the limiting factor affecting model performance accuracy. This paper examines target signature more carefully. The spatial frequency dependence of the standard U.S. Army RDECOM CERDEC Night Vision 12 and 8 tracked vehicle target sets is described. The results of human perception experiments are modeled and evaluated using both frequency dependent and independent target signature definitions. Finally the function of task difficulty and its relationship to a target set is discussed.
Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments
NASA Astrophysics Data System (ADS)
Lane, Peter C. R.; Gobet, Fernand
2013-03-01
Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the `speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
Toropov, Andrey A; Toropova, Alla P; Raska, Ivan; Benfenati, Emilio
2010-04-01
Three different splits into the subtraining set (n = 22), the set of calibration (n = 21), and the test set (n = 12) of 55 antineoplastic agents have been examined. By the correlation balance of SMILES-based optimal descriptors quite satisfactory models for the octanol/water partition coefficient have been obtained on all three splits. The correlation balance is the optimization of a one-variable model with a target function that provides both the maximal values of the correlation coefficient for the subtraining and calibration set and the minimum of the difference between the above-mentioned correlation coefficients. Thus, the calibration set is a preliminary test set. Copyright (c) 2009 Elsevier Masson SAS. All rights reserved.
Hamann, Hendrik F.; Hwang, Youngdeok; van Kessel, Theodore G.; Khabibrakhmanov, Ildar K.; Muralidhar, Ramachandran
2016-10-18
A method and a system to perform multi-model blending are described. The method includes obtaining one or more sets of predictions of historical conditions, the historical conditions corresponding with a time T that is historical in reference to current time, and the one or more sets of predictions of the historical conditions being output by one or more models. The method also includes obtaining actual historical conditions, the actual historical conditions being measured conditions at the time T, assembling a training data set including designating the two or more sets of predictions of historical conditions as predictor variables and the actual historical conditions as response variables, and training a machine learning algorithm based on the training data set. The method further includes obtaining a blended model based on the machine learning algorithm.
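A minimal sketch of the blending step, assuming scikit-learn: historical model predictions as predictor variables, measured conditions as the response, and a regressor trained to blend new forecasts. The data and the choice of gradient boosting are illustrative only.

```python
# Hedged sketch: train a blender on historical model predictions vs. observations.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
truth = rng.normal(20.0, 5.0, size=500)                        # observed conditions at time T
model_preds = np.column_stack([truth + rng.normal(0, s, 500)   # two imperfect models
                               for s in (1.5, 3.0)])

blender = GradientBoostingRegressor().fit(model_preds, truth)

new_forecasts = np.array([[21.3, 19.0]])                       # the models' next predictions
print("blended forecast:", blender.predict(new_forecasts))
```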
Modeling radium and radon transport through soil and vegetation
Kozak, J.A.; Reeves, H.W.; Lewis, B.A.
2003-01-01
A one-dimensional flow and transport model was developed to describe the movement of two fluid phases, gas and water, within a porous medium and the transport of 226Ra and 222Rn within and between these two phases. Included in this model is the vegetative uptake of water and aqueous 226Ra and 222Rn that can be extracted from the soil via the transpiration stream. The mathematical model is formulated through a set of phase balance equations and a set of species balance equations. Mass exchange, sink terms and the dependence of physical properties upon phase composition couple the two sets of equations. Numerical solution of each set, with iteration between the sets, is carried out leading to a set-iterative compositional model. The Petrov-Galerkin finite element approach is used to allow for upstream weighting if required for a given simulation. Mass lumping improves solution convergence and stability behavior. The resulting numerical model was applied to four problems and was found to produce accurate, mass conservative solutions when compared to published experimental and numerical results and theoretical column experiments. Preliminary results suggest that the model can be used as an investigative tool to determine the feasibility of phytoremediating radium and radon-contaminated soil. © 2003 Elsevier Science B.V. All rights reserved.
ERIC Educational Resources Information Center
Terry, Laura Robin
2012-01-01
The implementation of the American School Counselor Association (ASCA) national model has not been studied in nontraditional settings such as in virtual schools. The purpose of this quantitative research study was to examine the implementation of the career domain of the ASCA national model into the virtual high school setting. Social cognitive…
ERIC Educational Resources Information Center
Beheshti, Behzad; Desmarais, Michel C.
2015-01-01
This study investigates the issue of the goodness of fit of different skills assessment models using both synthetic and real data. Synthetic data is generated from the different skills assessment models. The results show wide differences of performances between the skills assessment models over synthetic data sets. The set of relative performances…
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2012 CFR
2012-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2014 CFR
2014-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2013 CFR
2013-07-01
... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...
ERIC Educational Resources Information Center
Haverland, Edgar M.
The report describes a project designed to facilitate the transfer and utilization of training technology by developing a model for evaluating training approaches or innovations in relation to the requirements, resources, and constraints of specific training settings. The model consists of two parallel sets of open-ended questions--one set…
Improving the process of process modelling by the use of domain process patterns
NASA Astrophysics Data System (ADS)
Koschmider, Agnes; Reijers, Hajo A.
2015-01-01
The use of business process models has become prevalent in a wide area of enterprise applications. But while their popularity is expanding, concerns are growing with respect to their proper creation and maintenance. An obvious way to boost the efficiency of creating high-quality business process models would be to reuse relevant parts of existing models. At this point, however, limited support exists to guide process modellers towards the usage of appropriate model content. In this paper, a set of content-oriented patterns is presented, which is extracted from a large set of process models from the order management and manufacturing production domains. The patterns are derived using a newly proposed set of algorithms, which are being discussed in this paper. The authors demonstrate how such Domain Process Patterns, in combination with information on their historic usage, can support process modellers in generating new models. To support the wider dissemination and development of Domain Process Patterns within and beyond the studied domains, an accompanying website has been set up.
Improvement of Predictive Ability by Uniform Coverage of the Target Genetic Space
Bustos-Korts, Daniela; Malosetti, Marcos; Chapman, Scott; Biddulph, Ben; van Eeuwijk, Fred
2016-01-01
Genome-enabled prediction provides breeders with the means to increase the number of genotypes that can be evaluated for selection. One of the major challenges in genome-enabled prediction is how to construct a training set of genotypes from a calibration set that represents the target population of genotypes, where the calibration set is composed of a training and validation set. A random sampling protocol of genotypes from the calibration set will lead to low quality coverage of the total genetic space by the training set when the calibration set contains population structure. As a consequence, predictive ability will be affected negatively, because some parts of the genotypic diversity in the target population will be under-represented in the training set, whereas other parts will be over-represented. Therefore, we propose a training set construction method that uniformly samples the genetic space spanned by the target population of genotypes, thereby increasing predictive ability. To evaluate our method, we constructed training sets alongside the identification of corresponding genomic prediction models for four genotype panels that differed in the amount of population structure they contained (maize Flint, maize Dent, wheat, and rice). Training sets were constructed using uniform sampling, stratified-uniform sampling, stratified sampling and random sampling. We compared these methods with a method that maximizes the generalized coefficient of determination (CD). Several training set sizes were considered. We investigated four genomic prediction models: multi-locus QTL models, GBLUP models, combinations of QTL and GBLUPs, and Reproducing Kernel Hilbert Space (RKHS) models. For the maize and wheat panels, construction of the training set under uniform sampling led to a larger predictive ability than under stratified and random sampling. The results of our methods were similar to those of the CD method. For the rice panel, all training set construction methods led to similar predictive ability, a reflection of the very strong population structure in this panel. PMID:27672112
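A rough sketch of uniform-style training-set construction, assuming the genetic space is summarised by principal components and coverage is approximated by taking one genotype per k-means cluster; the marker data, component count, and cluster count are illustrative and this is not the authors' exact sampling protocol.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical marker matrix: 500 candidate genotypes x 1000 biallelic markers coded 0/1/2.
markers = rng.integers(0, 3, size=(500, 1000)).astype(float)

# Summarise the genetic space with a few principal components.
scores = PCA(n_components=5).fit_transform(markers)

# Uniform-style coverage: cluster the space and take the genotype closest to each centroid.
n_train = 100
km = KMeans(n_clusters=n_train, n_init=10, random_state=0).fit(scores)
training_idx = [int(np.argmin(np.linalg.norm(scores - centre, axis=1)))
                for centre in km.cluster_centers_]

print(sorted(set(training_idx))[:10])
```

A random draw from a structured calibration set would instead over-sample the dense clusters, which is the coverage problem the abstract describes.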
Modeling Epidemics Spreading on Social Contact Networks.
Zhang, Zhaoyang; Wang, Honggang; Wang, Chonggang; Fang, Hua
2015-09-01
Social contact networks and the way people interact with each other are the key factors that impact on epidemics spreading. However, it is challenging to model the behavior of epidemics based on social contact networks due to their high dynamics. Traditional models such as the susceptible-infected-recovered (SIR) model ignore the crowding or protection effect and thus make some unrealistic assumptions. In this paper, we consider the crowding or protection effect and develop a novel model called the improved SIR model. Then, we use both deterministic and stochastic models to characterize the dynamics of epidemics on social contact networks. The results from both simulations and a real data set show that epidemics are more likely to break out on social contact networks with higher average degree. We also present some potential immunization strategies, such as random set immunization, dominating set immunization, and high degree set immunization to further prove the conclusion.
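The exact form of the improved SIR model is not given in the abstract; the sketch below uses a saturated incidence term as one plausible way to represent a crowding/protection effect, purely for illustration.

```python
import numpy as np
from scipy.integrate import odeint

def improved_sir(y, t, beta, gamma, alpha, N):
    # Saturated incidence beta*S*I/(N*(1 + alpha*I)) stands in for the crowding/
    # protection effect; the form used by the authors is an assumption here.
    S, I, R = y
    new_infections = beta * S * I / (N * (1.0 + alpha * I))
    return [-new_infections, new_infections - gamma * I, gamma * I]

N = 10_000
t = np.linspace(0, 160, 400)
for alpha in (0.0, 0.002):   # alpha = 0 recovers the classical SIR model
    S, I, R = odeint(improved_sir, [N - 10, 10, 0], t, args=(0.4, 0.1, alpha, N)).T
    print(f"alpha={alpha}: peak infected = {I.max():.0f}")
```

Larger crowding parameters flatten and delay the epidemic peak relative to the classical SIR curve.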
Deeb, Omar; Shaik, Basheerulla; Agrawal, Vijay K
2014-10-01
Quantitative Structure-Activity Relationship (QSAR) models for binding affinity constants (log Ki) of 78 flavonoid ligands towards the benzodiazepine site of GABA (A) receptor complex were calculated using the machine learning methods: artificial neural network (ANN) and support vector machine (SVM) techniques. The models obtained were compared with those obtained using multiple linear regression (MLR) analysis. The descriptor selection and model building were performed with 10-fold cross-validation using the training data set. The SVM and MLR coefficient of determination values are 0.944 and 0.879, respectively, for the training set and are higher than those of ANN models. Though the SVM model shows improvement of training set fitting, the ANN model was superior to SVM and MLR in predicting the test set. Randomization test is employed to check the suitability of the models.
Analysis of precision and accuracy in a simple model of machine learning
NASA Astrophysics Data System (ADS)
Lee, Julian
2017-12-01
Machine learning is a procedure where a model of the world is constructed from a training set of examples. It is important that the model capture relevant features of the training set and, at the same time, make correct predictions for examples not included in the training set. I consider polynomial regression, one of the simplest learning methods, and analyze the accuracy and precision for different levels of model complexity.
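A small illustration of the trade-off described: fitting polynomials of increasing degree to a noisy training set and checking prediction error on held-out examples. The generating function, noise level, and degrees are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Training and test examples drawn from the same noisy "world" (a cubic plus noise).
def world(x):
    return 0.5 * x**3 - x + rng.normal(0.0, 1.0, size=x.shape)

x_train, x_test = rng.uniform(-2, 2, 30), rng.uniform(-2, 2, 200)
y_train, y_test = world(x_train), world(x_test)

for degree in (1, 3, 9, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:6.3f}, test MSE {test_err:6.3f}")
```

Training error falls monotonically with model complexity while test error eventually rises, which is the precision/accuracy tension the abstract analyses.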
Rational selection of training and test sets for the development of validated QSAR models
NASA Astrophysics Data System (ADS)
Golbraikh, Alexander; Shen, Min; Xiao, Zhiyan; Xiao, Yun-De; Lee, Kuo-Hsiung; Tropsha, Alexander
2003-02-01
Quantitative Structure-Activity Relationship (QSAR) models are used increasingly to screen chemical databases and/or virtual chemical libraries for potentially bioactive molecules. These developments emphasize the importance of rigorous model validation to ensure that the models have acceptable predictive power. Using k nearest neighbors (kNN) variable selection QSAR method for the analysis of several datasets, we have demonstrated recently that the widely accepted leave-one-out (LOO) cross-validated R2 (q2) is an inadequate characteristic to assess the predictive ability of the models [Golbraikh, A., Tropsha, A. Beware of q2! J. Mol. Graphics Mod. 20, 269-276, (2002)]. Herein, we provide additional evidence that there exists no correlation between the values of q2 for the training set and accuracy of prediction (R2) for the test set and argue that this observation is a general property of any QSAR model developed with LOO cross-validation. We suggest that external validation using rationally selected training and test sets provides a means to establish a reliable QSAR model. We propose several approaches to the division of experimental datasets into training and test sets and apply them in QSAR studies of 48 functionalized amino acid anticonvulsants and a series of 157 epipodophyllotoxin derivatives with antitumor activity. We formulate a set of general criteria for the evaluation of predictive power of QSAR models.
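A compact illustration of the q2-versus-external-R2 distinction, using a plain linear model and synthetic descriptors rather than the kNN variable-selection QSAR method of the paper; the interleaved split is only a stand-in for a rational training/test selection.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)

# Hypothetical descriptor matrix and activities for a small QSAR-like data set.
X = rng.normal(size=(60, 8))
y = X @ rng.normal(size=8) + rng.normal(0.0, 0.5, size=60)

# Simple interleaved split standing in for a rational training/test set selection.
idx = np.arange(60)
train, test = idx % 4 != 0, idx % 4 == 0

model = LinearRegression()
loo_pred = cross_val_predict(model, X[train], y[train], cv=LeaveOneOut())
q2 = r2_score(y[train], loo_pred)                       # internal LOO q2

model.fit(X[train], y[train])
r2_ext = r2_score(y[test], model.predict(X[test]))      # external test-set R2
print(f"LOO q2 = {q2:.3f}, external R2 = {r2_ext:.3f}")
```

The two numbers are computed on disjoint chemicals, which is why a high q2 alone does not guarantee external predictive power.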
Comparisons of thermospheric density data sets and models
NASA Astrophysics Data System (ADS)
Doornbos, Eelco; van Helleputte, Tom; Emmert, John; Drob, Douglas; Bowman, Bruce R.; Pilinski, Marcin
During the past decade, continuous long-term data sets of thermospheric density have become available to researchers. These data sets have been derived from accelerometer measurements made by the CHAMP and GRACE satellites and from Space Surveillance Network (SSN) tracking data and related Two-Line Element (TLE) sets. These data have already resulted in a large number of publications on physical interpretation and improvement of empirical density modelling. This study compares four different density data sets and two empirical density models, for the period 2002-2009. These data sources are the CHAMP (1) and GRACE (2) accelerometer measurements, the long-term database of densities derived from TLE data (3), the High Accuracy Satellite Drag Model (4) run by Air Force Space Command, calibrated using SSN data, and the NRLMSISE-00 (5) and Jacchia-Bowman 2008 (6) empirical models. In describing these data sets and models, specific attention is given to differences in the geometrical and aerodynamic satellite modelling, applied in the conversion from drag to density measurements, which are main sources of density biases. The differences in temporal and spatial resolution of the density data sources are also described and taken into account. With these aspects in mind, statistics of density comparisons have been computed, both as a function of solar and geomagnetic activity levels, and as a function of latitude and local solar time. These statistics give a detailed view of the relative accuracy of the different data sets and of the biases between them. The differences are analysed with the aim of providing rough error bars on the data and models and pinpointing issues which could receive attention in future iterations of data processing algorithms and in future model development.
Convex set and linear mixing model
NASA Technical Reports Server (NTRS)
Xu, P.; Greeley, R.
1993-01-01
A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
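A minimal unmixing sketch consistent with the convex-set view: a pixel spectrum is modelled as a non-negative, sum-to-one combination of endmember spectra. The endmembers and noise level are made up, and the soft sum-to-one weight is an assumption.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

# Hypothetical endmember spectra (columns) for 3 surface materials over 10 bands.
E = np.abs(rng.normal(1.0, 0.3, size=(10, 3)))

# A pixel spectrum that is a convex combination of the endmembers plus noise.
true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund + rng.normal(0.0, 0.01, size=10)

# Non-negative least squares with a weighted sum-to-one row softly enforces a convex mixture,
# i.e. the recovered pixel lies in the convex hull of the endmembers (its extreme points).
weight = 10.0
A = np.vstack([E, weight * np.ones((1, 3))])
b = np.concatenate([pixel, [weight]])
abundances, _ = nnls(A, b)
print(abundances)
```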
Cunningham, Albert R; Carrasquer, C Alex; Qamar, Shahid; Maguire, Jon M; Cunningham, Suzanne L; Trent, John O
2012-10-01
Structure-activity relationship (SAR) models are powerful tools to investigate the mechanisms of action of chemical carcinogens and to predict the potential carcinogenicity of untested compounds. We describe the use of a traditional fragment-based SAR approach along with a new virtual ligand-protein interaction-based approach for modeling of nonmutagenic carcinogens. The ligand-based SAR models used descriptors derived from computationally calculated ligand-binding affinities for learning set agents to 5495 proteins. Two learning sets were developed. One set was from the Carcinogenic Potency Database, where chemicals tested for rat carcinogenesis along with Salmonella mutagenicity data were provided. The second was from Malacarne et al. who developed a learning set of nonalerting compounds based on rodent cancer bioassay data and Ashby's structural alerts. When the rat cancer models were categorized based on mutagenicity, the traditional fragment model outperformed the ligand-based model. However, when the learning sets were composed solely of nonmutagenic or nonalerting carcinogens and noncarcinogens, the fragment model demonstrated a concordance of near 50%, whereas the ligand-based models demonstrated a concordance of 71% for nonmutagenic carcinogens and 74% for nonalerting carcinogens. Overall, these findings suggest that expert system analysis of virtual chemical protein interactions may be useful for developing predictive SAR models for nonmutagenic carcinogens. Moreover, a more practical approach for developing SAR models for carcinogenesis may include fragment-based models for chemicals testing positive for mutagenicity and ligand-based models for chemicals devoid of DNA reactivity.
Parameter Estimation for Thurstone Choice Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vojnovic, Milan; Yun, Seyoung
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
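For the pair-comparison (Bradley-Terry) special case mentioned above, maximum likelihood estimation of the strength parameters can be sketched as follows; the data are simulated and the optimiser settings are library defaults, so this is not the estimator analysis of the paper itself.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Hypothetical strengths for 5 items; comparison sets of size 2 (Bradley-Terry case).
theta_true = np.array([0.0, 0.5, 1.0, -0.5, 0.2])
pairs = [(i, j) for i in range(5) for j in range(5) if i != j]
data = []
for _ in range(2000):
    i, j = pairs[rng.integers(len(pairs))]
    p_i = np.exp(theta_true[i]) / (np.exp(theta_true[i]) + np.exp(theta_true[j]))
    data.append((i, j) if rng.random() < p_i else (j, i))     # stored as (winner, loser)

def neg_log_lik(free):
    theta = np.concatenate([[0.0], free])       # fix item 0 at 0; strengths are relative
    return -sum(theta[w] - np.logaddexp(theta[w], theta[l]) for w, l in data)

res = minimize(neg_log_lik, np.zeros(4))
theta_hat = np.concatenate([[0.0], res.x])
print(theta_hat - theta_hat.mean())
print(theta_true - theta_true.mean())
```

The mean squared error of such estimates is exactly the quantity whose dependence on comparison-set cardinality the paper characterises.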
A Comparison of Graded Response and Rasch Partial Credit Models with Subjective Well-Being.
ERIC Educational Resources Information Center
Baker, John G.; Rounds, James B.; Zevon, Michael A.
2000-01-01
Compared two multiple category item response theory models using a data set of 52 mood terms with 713 undergraduate psychology students. Comparative model fit for the Samejima (F. Samejima, 1966) logistic model for graded responses and the Masters (G. Masters, 1982) partial credit model favored the former model for this data set. (SLD)
NASA Technical Reports Server (NTRS)
Crutcher, H. L.; Falls, L. W.
1976-01-01
Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
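A small example of the chi-square goodness-of-fit idea applied to the univariate normal model, comparing binned counts with expectations under a fitted normal; the sample and bin choice are illustrative, not the tables and sample sizes developed in the report.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample = rng.normal(10.0, 2.0, size=200)             # stand-in for an observed data set

# Bin the data and compare observed counts with those expected under a fitted normal.
mu, sigma = sample.mean(), sample.std(ddof=1)
edges = np.quantile(sample, np.linspace(0, 1, 11))   # 10 roughly equal-count bins
edges[0], edges[-1] = -np.inf, np.inf
observed, _ = np.histogram(sample, bins=edges)
expected = len(sample) * np.diff(stats.norm.cdf(edges, mu, sigma))

# Degrees of freedom reduced by the two estimated parameters (mean and standard deviation).
chi2, p = stats.chisquare(observed, expected, ddof=2)
print(f"chi-square = {chi2:.2f}, p-value = {p:.3f}")
```

A large p-value means the normal model cannot be rejected for this sample, the prerequisite the abstract gives for drawing valid inferences from it.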
ERIC Educational Resources Information Center
Kaliski, Pamela; Wind, Stefanie A.; Engelhard, George, Jr.; Morgan, Deanna; Plake, Barbara; Reshetar, Rosemary
2012-01-01
The Many-Facet Rasch (MFR) Model is traditionally used to evaluate the quality of ratings on constructed response assessments; however, it can also be used to evaluate the quality of judgments from panel-based standard setting procedures. The current study illustrates the use of the MFR Model by examining the quality of ratings obtained from a…
A Logical Difficulty of the Parameter Setting Model.
ERIC Educational Resources Information Center
Sasaki, Yoshinori
1990-01-01
Seeks to prove that the parameter setting model (PSM) of Chomsky's Universal Grammar theory contains an internal contradiction when it is seriously taken to model the internal state of language learners. (six references) (JL)
NASA Technical Reports Server (NTRS)
Kubat, Greg; Vandrei, Don
2006-01-01
Project Objectives include: a) CNS Model Development; b) Design/Integration of a baseline set of CNS Models into ACES; c) Implement Enhanced Simulation Capabilities in ACES; d) Design and Integration of Enhanced (2nd set) CNS Models; and e) Continue with CNS Model Integration/Concept evaluations.
NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Tsuha, Walter S.
1993-01-01
A two-stage model reduction methodology, combining the classical Component Mode Synthesis (CMS) method and the newly developed Enhanced Projection and Assembly (EP&A) method, is proposed in this research. The first stage of this methodology, called the COmponent Modes Projection and Assembly model REduction (COMPARE) method, involves the generation of CMS mode sets, such as the MacNeal-Rubin mode sets. These mode sets are then used to reduce the order of each component model in the Rayleigh-Ritz sense. The resultant component models are then combined to generate reduced-order system models at various system configurations. A composite mode set which retains important system modes at all system configurations is then selected from these reduced-order system models. In the second stage, the EP&A model reduction method is employed to reduce further the order of the system model generated in the first stage. The effectiveness of the COMPARE methodology has been successfully demonstrated on a high-order, finite-element model of the cruise-configured Galileo spacecraft.
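A bare-bones Rayleigh-Ritz reduction on a toy spring-mass chain illustrates the projection step that such methods share; it is not the CMS/MacNeal-Rubin mode-set generation or the EP&A stage described above, and the system matrices are invented.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical full-order component: a 1-D chain of unit masses and unit springs.
n = 50
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness (tridiagonal)
M = np.eye(n)                                             # mass

# Keep the lowest component modes as the Rayleigh-Ritz basis Phi.
n_keep = 6
vals, vecs = eigh(K, M)
Phi = vecs[:, :n_keep]

# Project stiffness and mass onto the retained modes to obtain the reduced-order model.
K_r = Phi.T @ K @ Phi
M_r = Phi.T @ M @ Phi

# The reduced model reproduces the lowest natural frequencies of the full model.
freqs_full = np.sqrt(vals[:n_keep])
freqs_red = np.sqrt(eigh(K_r, M_r)[0])
print(np.allclose(freqs_full, freqs_red))
```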
NASA Astrophysics Data System (ADS)
Gampe, D.; Ludwig, R.
2017-12-01
Regional Climate Models (RCMs) that downscale General Circulation Models (GCMs) are the primary tool to project future climate and serve as input to many impact models to assess the related changes and impacts under such climate conditions. Such RCMs are made available through the Coordinated Regional climate Downscaling Experiment (CORDEX). The ensemble of models provides a range of possible future climate changes around the ensemble mean climate change signal. The model outputs, however, are prone to biases compared to regional observations. A bias correction of these deviations is a crucial step in the impact modelling chain to allow the reproduction of historic conditions of, e.g., river discharge. However, the detection and quantification of model biases are highly dependent on the selected regional reference data set. Additionally, in practice due to computational constraints it is usually not feasible to consider the entire ensembles of climate simulations with all members as input for impact models which provide information to support decision-making. Although more and more studies focus on model selection based on the preservation of the climate model spread, a selection based on validity, i.e. the representation of the historic conditions, is still a widely applied approach. In this study, several available reference data sets for precipitation are selected to detect the model bias for the reference period 1989 - 2008 over the alpine catchment of the Adige River located in Northern Italy. The reference data sets originate from various sources, such as station data or reanalysis. These data sets are remapped to the common RCM grid at 0.11° resolution and several indicators, such as dry and wet spells, extreme precipitation and general climatology, are calculated to evaluate the capability of the RCMs to reproduce the historical conditions. The resulting RCM spread is compared against the spread of the reference data set to determine the related uncertainties and detect potential model biases with respect to each reference data set. The RCMs are then ranked based on various statistical measures for each indicator and a score matrix is derived to select a subset of RCMs. We show the impact and importance of the reference data set with respect to the resulting climate change signal on the catchment scale.
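A toy version of the score-matrix ranking step, with made-up per-indicator errors for a handful of RCMs against a single reference data set:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
rcms = [f"RCM_{i}" for i in range(6)]
indicators = ["mean_precip", "wet_spell", "dry_spell", "p95"]

# Hypothetical absolute errors of each RCM against one reference data set, per indicator.
errors = pd.DataFrame(rng.uniform(0.1, 2.0, size=(6, 4)), index=rcms, columns=indicators)

# Score matrix: rank the RCMs per indicator (1 = smallest error), then sum the ranks.
score_matrix = errors.rank(axis=0)
total_score = score_matrix.sum(axis=1).sort_values()
print(total_score.head(3))   # candidate subset of best-performing RCMs
```

Repeating this against each reference data set would expose how strongly the final subset depends on the reference choice, which is the point the abstract makes.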
A functional model for characterizing long-distance movement behaviour
Buderman, Frances E.; Hooten, Mevin B.; Ivan, Jacob S.; Shenk, Tanya M.
2016-01-01
Advancements in wildlife telemetry techniques have made it possible to collect large data sets of highly accurate animal locations at a fine temporal resolution. These data sets have prompted the development of a number of statistical methodologies for modelling animal movement. Telemetry data sets are often collected for purposes other than fine-scale movement analysis. These data sets may differ substantially from those that are collected with technologies suitable for fine-scale movement modelling and may consist of locations that are irregular in time, are temporally coarse or have large measurement error. These data sets are time-consuming and costly to collect but may still provide valuable information about movement behaviour. We developed a Bayesian movement model that accounts for error from multiple data sources as well as movement behaviour at different temporal scales. The Bayesian framework allows us to calculate derived quantities that describe temporally varying movement behaviour, such as residence time, speed and persistence in direction. The model is flexible, easy to implement and computationally efficient. We apply this model to data from Colorado Canada lynx (Lynx canadensis) and use derived quantities to identify changes in movement behaviour.
Spatiotemporal patterns of terrestrial gross primary production: A review
NASA Astrophysics Data System (ADS)
Anav, Alessandro; Friedlingstein, Pierre; Beer, Christian; Ciais, Philippe; Harper, Anna; Jones, Chris; Murray-Tortarolo, Guillermo; Papale, Dario; Parazoo, Nicholas C.; Peylin, Philippe; Piao, Shilong; Sitch, Stephen; Viovy, Nicolas; Wiltshire, Andy; Zhao, Maosheng
2015-09-01
Great advances have been made in the last decade in quantifying and understanding the spatiotemporal patterns of terrestrial gross primary production (GPP) with ground, atmospheric, and space observations. However, although global GPP estimates exist, each data set relies upon assumptions and none of the available data are based only on measurements. Consequently, there is no consensus on the global total GPP and large uncertainties exist in its benchmarking. The objective of this review is to assess how the different available data sets predict the spatiotemporal patterns of GPP, identify the differences among data sets, and highlight the main advantages/disadvantages of each data set. We compare GPP estimates for the historical period (1990-2009) from two observation-based data sets (Model Tree Ensemble and Moderate Resolution Imaging Spectroradiometer) to coupled carbon-climate models and terrestrial carbon cycle models from the Fifth Climate Model Intercomparison Project and TRENDY projects and to a new hybrid data set (CARBONES). Results show a large range in the mean global GPP estimates. The different data sets broadly agree on GPP seasonal cycle in terms of phasing, while there is still discrepancy on the amplitude. For interannual variability (IAV) and trends, there is a clear separation between the observation-based data that show little IAV and trend, while the process-based models have large GPP variability and significant trends. These results suggest that there is an urgent need to improve observation-based data sets and develop carbon cycle modeling with processes that are currently treated either very simplistically to correctly estimate present GPP and better quantify the future uptake of carbon dioxide by the world's vegetation.
NASA Astrophysics Data System (ADS)
Bennett, D. L.; Brene, N.; Nielsen, H. B.
1987-01-01
The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model.
Zarr, Robert R; Heckert, N Alan; Leigh, Stefan D
2014-01-01
Thermal conductivity data acquired previously for the establishment of Standard Reference Material (SRM) 1450, Fibrous Glass Board, as well as subsequent renewals 1450a, 1450b, 1450c, and 1450d, are re-analyzed collectively and as individual data sets. Additional data sets for proto-1450 material lots are also included in the analysis. The data cover 36 years of activity by the National Institute of Standards and Technology (NIST) in developing and providing thermal insulation SRMs, specifically high-density molded fibrous-glass board, to the public. Collectively, the data sets cover two nominal thicknesses of 13 mm and 25 mm, bulk densities from 60 kg·m⁻³ to 180 kg·m⁻³, and mean temperatures from 100 K to 340 K. The analysis repetitively fits six models to the individual data sets. The most general form of the nested set of multilinear models used is given in the following equation: [Formula: see text] where λ(ρ,T) is the predicted thermal conductivity (W·m⁻¹·K⁻¹), ρ is the bulk density (kg·m⁻³), T is the mean temperature (K) and ai (for i = 1, 2, … 6) are the regression coefficients. The least squares fit results for each model across all data sets are analyzed using both graphical and analytic techniques. The prevailing generic model for the majority of data sets is the bilinear model in ρ and T. [Formula: see text] One data set supports the inclusion of a cubic temperature term and two data sets with low-temperature data support the inclusion of an exponential term in T to improve the model predictions. Physical interpretations of the model function terms are described. Recommendations for future renewals of SRM 1450 are provided. An Addendum provides historical background on the origin of this SRM and the influence of the SRM on external measurement programs.
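The bilinear model named above can be fitted by ordinary least squares on the design matrix [1, ρ, T, ρT]; the sketch below uses synthetic data with made-up coefficients, not the SRM 1450 measurements.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic thermal-conductivity data spanning the SRM 1450 density and temperature ranges.
rho = rng.uniform(60, 180, size=120)              # bulk density, kg·m^-3
T = rng.uniform(100, 340, size=120)               # mean temperature, K
lam = (0.010 + 1.0e-4 * rho + 8.0e-5 * T + 2.0e-7 * rho * T
       + rng.normal(0.0, 5e-4, size=120))         # thermal conductivity, W·m^-1·K^-1 (made-up coefficients)

# Bilinear model in rho and T: lambda = a1 + a2*rho + a3*T + a4*rho*T.
X = np.column_stack([np.ones_like(rho), rho, T, rho * T])
coeffs, *_ = np.linalg.lstsq(X, lam, rcond=None)
print(coeffs)
```

Adding a T³ column or an exponential term in T would give the extended variants the report describes for the low-temperature data sets.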
Integration of RAM-SCB into the Space Weather Modeling Framework
Welling, Daniel; Toth, Gabor; Jordanova, Vania Koleva; ...
2018-02-07
Numerical simulations of the ring current are a challenging endeavor. They require a large set of inputs, including electric and magnetic fields and plasma sheet fluxes. Because the ring current broadly affects the magnetosphere-ionosphere system, the input set is dependent on the ring current region itself. This makes obtaining a set of inputs that are self-consistent with the ring current difficult. To overcome this challenge, researchers have begun coupling ring current models to global models of the magnetosphere-ionosphere system. This paper describes the coupling between the Ring current Atmosphere interaction Model with Self-Consistent Magnetic field (RAM-SCB) to the models within the Space Weather Modeling Framework. Full details on both previously introduced and new coupling mechanisms are defined. Finally, the impact of self-consistently including the ring current on the magnetosphere-ionosphere system is illustrated via a set of example simulations.
Developing a Suitable Model for Water Uptake for Biodegradable Polymers Using Small Training Sets.
Valenzuela, Loreto M; Knight, Doyle D; Kohn, Joachim
2016-01-01
Prediction of the dynamic properties of water uptake across polymer libraries can accelerate polymer selection for a specific application. We first built semiempirical models using Artificial Neural Networks and all water uptake data, as individual input. These models give very good correlations (R2 > 0.78 for the test set) but very low accuracy on cross-validation sets (less than 19% of experimental points within experimental error). Instead, using consolidated parameters like equilibrium water uptake, a good model is obtained (R2 = 0.78 for the test set), with accurate predictions for 50% of tested polymers. The semiempirical model was applied to the 56-polymer library of L-tyrosine-derived polyarylates, identifying groups of polymers that are likely to satisfy design criteria for water uptake. This research demonstrates that a surrogate modeling effort can reduce the number of polymers that must be synthesized and characterized to identify an appropriate polymer that meets certain performance criteria.
Learning Instance-Specific Predictive Models
Visweswaran, Shyam; Cooper, Gregory F.
2013-01-01
This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average on all performance measures against all the comparison algorithms. PMID:25045325
2016-01-01
Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644
Daryasafar, Amin; Ahadi, Arash; Kharrat, Riyaz
2014-01-01
Steam distillation, as one of the important mechanisms, has a great role in oil recovery in thermal methods, and so it is important to simulate this process experimentally and theoretically. In this work, the simulation of steam distillation is performed on sixteen sets of crude oil data found in the literature. Artificial intelligence (AI) tools such as artificial neural network (ANN) and also adaptive neuro-fuzzy inference system (ANFIS) are used in this study as effective methods to simulate the distillate recoveries of these sets of data. Thirteen sets of data were used to train the models and three sets were used to test the models. The developed models are highly compatible with respect to input oil properties and can predict the distillate yield with minimum entry. For showing the performance of the proposed models, simulation of steam distillation is also done using modified Peng-Robinson equation of state. Comparison between the calculated distillates by ANFIS and neural network models and also equation of state-based method indicates that the errors of the ANFIS model for training data and test data sets are lower than those of other methods.
ERIC Educational Resources Information Center
Prayekti
2017-01-01
This research was aimed at developing printed teaching materials for the Atomic Physics PEFI4421 Course using the Research and Development (R & D) model, which consisted of three major sets of activities. The first set consisted of seven stages, the second set consisted of one stage, and the third set consisted of seven stages. This research study was…
CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.
Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola
2011-03-14
Quantitative structure-property relationship (QSPR) studies of melting point (MP) and boiling point (BP) for per- and polyfluorinated chemicals (PFCs) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: a) random selection on response value, and b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual models based on linear and non-linear approaches, developed by different CADASTER partners using 0D-2D Dragon descriptors, E-state descriptors and fragment-based descriptors, as well as a consensus model and their predictions, are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database on 15 MP and 25 BP data, respectively. This database contains only long-chain perfluoro-alkylated chemicals, particularly monitored by regulatory agencies like US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, and a study of the applicability domain highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs were predicted, for which experimental measurements are unknown. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both type of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between loops in the graphs. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860) available from COSMIC are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc. 
DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
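A simplified recursive sketch of the top-down cut-set combination described above (OR gates union their children's cut sets, AND gates take cross-products, and supersets are pruned to keep only minimal sets); the small fault tree is invented and this is not the CUTSETS code itself.

```python
from itertools import product

# A small fault tree: each gate is ("AND"|"OR", [children]); leaves are basic events (strings).
tree = {
    "TOP": ("OR", ["G1", "G2"]),
    "G1": ("AND", ["pump_fails", "valve_stuck"]),
    "G2": ("AND", ["G3", "power_loss"]),
    "G3": ("OR", ["pump_fails", "sensor_fault"]),
}

def prune_non_minimal(sets_of_events):
    # Drop any cut set that is a proper superset of another, keeping only minimal ones.
    return [c for c in sets_of_events if not any(o < c for o in sets_of_events)]

def cut_sets(node):
    if node not in tree:                       # basic failure event
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                           # OR: union of the children's cut sets
        combined = [s for sets in child_sets for s in sets]
    else:                                      # AND: every combination of one set per child
        combined = [frozenset().union(*combo) for combo in product(*child_sets)]
    return prune_non_minimal(list(set(combined)))

for cs in cut_sets("TOP"):
    print(sorted(cs))
```

Each printed set is a minimal combination of basic failures that would cause the TOP event, the same quantity the CUTSETS programs compute for full-scale fault trees and digraphs.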
Boxwala, Aziz A; Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila
2011-01-01
To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs.
SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Wang, J
2016-06-15
Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as the objective functions simultaneously. A Pareto solution set with many feasible solutions will result from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with clonal selection algorithm was used to optimize model parameters. In this study, PET, CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for five folds were 80.00%, 69.23%, 84.00%, 84.00%, 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
NASA Astrophysics Data System (ADS)
Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten
2017-07-01
Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
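A schematic of the hierarchical filtering idea, with invented metrics and thresholds standing in for the regional, observational, and expert-knowledge constraints used in the study:

```python
import numpy as np

rng = np.random.default_rng(10)
n_sets = 10_000

# Hypothetical summary metrics for each candidate parameter set of a distributed model.
metrics = {
    "runoff_ratio": rng.uniform(0.1, 0.9, n_sets),       # regional signature
    "nse_streamflow": rng.uniform(-1.0, 1.0, n_sets),    # fit to the observed hydrograph
    "swe_rmse": rng.uniform(0.0, 200.0, n_sets),         # fit to snow water equivalent (mm)
    "wt_pattern_ok": rng.random(n_sets) < 0.3,           # expert-judged water-table pattern
}

# Hierarchical constraints: each stage keeps only the sets that pass; thresholds are illustrative.
behavioral = np.ones(n_sets, dtype=bool)
for stage, keep in [
    ("regional signature", (metrics["runoff_ratio"] > 0.3) & (metrics["runoff_ratio"] < 0.6)),
    ("hydrograph fit", metrics["nse_streamflow"] > 0.7),
    ("SWE observations", metrics["swe_rmse"] < 60.0),
    ("expert knowledge", metrics["wt_pattern_ok"]),
]:
    behavioral &= keep
    print(f"after {stage}: {behavioral.sum()} parameter sets remain")
```

The shrinking count at each stage mirrors how the successive constraint classes in the study narrowed the behavioral set.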
Pareto-Optimal Multi-objective Inversion of Geophysical Data
NASA Astrophysics Data System (ADS)
Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham
2018-01-01
In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
Lim, Cherry; Wannapinij, Prapass; White, Lisa; Day, Nicholas P J; Cooper, Ben S; Peacock, Sharon J; Limmathurotsakul, Direk
2013-01-01
Estimates of the sensitivity and specificity for new diagnostic tests based on evaluation against a known gold standard are imprecise when the accuracy of the gold standard is imperfect. Bayesian latent class models (LCMs) can be helpful under these circumstances, but the necessary analysis requires expertise in computational programming. Here, we describe open-access web-based applications that allow non-experts to apply Bayesian LCMs to their own data sets via a user-friendly interface. Applications for Bayesian LCMs were constructed on a web server using R and WinBUGS programs. The models provided (http://mice.tropmedres.ac) include two Bayesian LCMs: the two-tests in two-population model (Hui and Walter model) and the three-tests in one-population model (Walter and Irwig model). Both models are available with simplified and advanced interfaces. In the former, all settings for Bayesian statistics are fixed as defaults. Users input their data set into a table provided on the webpage. Disease prevalence and accuracy of diagnostic tests are then estimated using the Bayesian LCM, and provided on the web page within a few minutes. With the advanced interfaces, experienced researchers can modify all settings in the models as needed. These settings include correlation among diagnostic test results and prior distributions for all unknown parameters. The web pages provide worked examples with both models using the original data sets presented by Hui and Walter in 1980, and by Walter and Irwig in 1988. We also illustrate the utility of the advanced interface using the Walter and Irwig model on a data set from a recent melioidosis study. The results obtained from the web-based applications were comparable to those published previously. The newly developed web-based applications are open-access and provide an important new resource for researchers worldwide to evaluate new diagnostic tests.
Assessing modelled spatial distributions of ice water path using satellite data
NASA Astrophysics Data System (ADS)
Eliasson, S.; Buehler, S. A.; Milz, M.; Eriksson, P.; John, V. O.
2010-05-01
The climate models used in the IPCC AR4 show large differences in monthly mean cloud ice. The most valuable source of information that can be used to potentially constrain the models is global satellite data. For this, the data sets must be long enough to capture the inter-annual variability of Ice Water Path (IWP). PATMOS-x was used together with ISCCP for the annual cycle evaluation in Fig. 7 while ECHAM-5 was used for the correlation with other models in Table 3. A clear distinction between ice categories in satellite retrievals, as desired from a model point of view, is currently impossible. However, long-term satellite data sets may still be used to indicate the climatology of IWP spatial distribution. We evaluated satellite data sets from CloudSat, PATMOS-x, ISCCP, MODIS and MSPPS in terms of monthly mean IWP, to determine which data sets can be used to evaluate the climate models. IWP data from CloudSat cloud profiling radar provides the most advanced data set on clouds. As CloudSat data are too short to evaluate the model data directly, it was mainly used here to evaluate IWP from the other satellite data sets. ISCCP and MSPPS were shown to have comparatively low IWP values. ISCCP shows particularly low values in the tropics, while MSPPS has particularly low values outside the tropics. MODIS and PATMOS-x were in closest agreement with CloudSat in terms of magnitude and spatial distribution, with MODIS being the best of the two. As PATMOS-x extends over more than 25 years and is in fairly close agreement with CloudSat, it was chosen as the reference data set for the model evaluation. In general there are large discrepancies between the individual climate models, and all of the models show problems in reproducing the observed spatial distribution of cloud-ice. Comparisons consistently showed that ECHAM-5 is the GCM from IPCC AR4 closest to satellite observations.
QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.
Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V
2015-07-27
Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases to QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level.
Cederstrand, J.R.; Rea, A.H.
1995-01-01
This document provides a general description of the procedures used to develop the data sets included on this compact disc. This compact disc contains watershed boundaries for Oklahoma, a digital elevation model, and other data sets derived from the digital elevation model. The digital elevation model was produced using the ANUDEM software package, written by Michael Hutchinson and licensed from the Centre for Resource and Environmental Studies at The Australian National University. Elevation data (hypsography) and streams (hydrography) from digital versions of the U.S. Geological Survey 1:100,000-scale topographic maps were used by the ANUDEM package to produce a hydrologically conditioned digital elevation model with a 60-meter cell size. This digital elevation model is well suited for drainage-basin delineation using automated techniques. Additional data sets include flow-direction, flow-accumulation, and shaded-relief grids, all derived from the digital elevation model, and the hydrography data set used in producing the digital elevation model. The watershed boundaries derived from the digital elevation model have been edited to be consistent with contours and streams from the U.S. Geological Survey 1:100,000-scale topographic maps. The watershed data set includes boundaries for 11-digit Hydrologic Unit Codes (watersheds) within Oklahoma, and 8-digit Hydrologic Unit Codes (cataloging units) outside Oklahoma. Cataloging-unit boundaries based on 1:250,000-scale maps outside Oklahoma for the Arkansas, Red, and White River basins are included. The other data sets cover Oklahoma, and where available, portions of 1:100,000-scale quadrangles adjoining Oklahoma.
ERIC Educational Resources Information Center
Burton, Grace M.; Knifong, J. Dan
1983-01-01
Models for division are discussed: counting, repeated subtraction, inverse of multiplication, sets, number line, balance beam, arrays, and cross product of sets. Expressing the remainder using various models is then presented, followed by comments on why all the models should be taught. (MNS)
Global precipitation measurements for validating climate models
NASA Astrophysics Data System (ADS)
Tapiador, F. J.; Navarro, A.; Levizzani, V.; García-Ortega, E.; Huffman, G. J.; Kidd, C.; Kucera, P. A.; Kummerow, C. D.; Masunaga, H.; Petersen, W. A.; Roca, R.; Sánchez, J.-L.; Tao, W.-K.; Turk, F. J.
2017-11-01
The advent of global precipitation data sets with increasing temporal span has made it possible to use them for validating climate models. In order to fulfill the requirement of global coverage, existing products integrate satellite-derived retrievals from many sensors with direct ground observations (gauges, disdrometers, radars), which are used as reference for the satellites. While the resulting product can be deemed the best available source of quality validation data, awareness of the limitations of such data sets is important to avoid extracting wrong or unsubstantiated conclusions when assessing climate model abilities. This paper provides guidance on the use of precipitation data sets for climate research, including model validation and verification for improving physical parameterizations. The strengths and limitations of the data sets for climate modeling applications are presented, and a protocol for quality assurance of both observational databases and models is discussed. The paper helps elaborate on the recent IPCC AR5 acknowledgment of large observational uncertainties in precipitation observations for climate model validation.
Block rotations, fault domains and crustal deformation in the western US
NASA Technical Reports Server (NTRS)
Nur, Amos
1990-01-01
The aim of the project was to develop a 3D model of crustal deformation by distributed fault sets and to test the model results in the field. In the first part of the project, Nur's 2D model (1986) was generalized to 3D. In Nur's model the frictional strength of rocks and faults of a domain provides a tight constraint on the amount of rotation that a fault set can undergo during block rotation. Domains of fault sets are commonly found in regions where the deformation is distributed across a region. The interaction of each fault set causes the fault bounded blocks to rotate. The work that has been done towards quantifying the rotation of fault sets in a 3D stress field is briefly summarized. In the second part of the project, field studies were carried out in Israel, Nevada and China. These studies combined both paleomagnetic and structural information necessary to test the block rotation model results. In accordance with the model, field studies demonstrate that faults and attending fault bounded blocks slip and rotate away from the direction of maximum compression when deformation is distributed across fault sets. Slip and rotation of fault sets may continue as long as the earth's crustal strength is not exceeded. More optimally oriented faults must form, for subsequent deformation to occur. Eventually the block rotation mechanism may create a complex pattern of intersecting generations of faults.
Neural modeling and functional neuroimaging.
Horwitz, B; Sporns, O
1994-01-01
Two research areas that so far have had little interaction with one another are functional neuroimaging and computational neuroscience. The application of computational models and techniques to the inherently rich data sets generated by "standard" neurophysiological methods has proven useful for interpreting these data sets and for providing predictions and hypotheses for further experiments. We suggest that both theory- and data-driven computational modeling of neuronal systems can help to interpret data generated by functional neuroimaging methods, especially those used with human subjects. In this article, we point out four sets of questions, addressable by computational neuroscientists, whose answers would be of value and interest to those who perform functional neuroimaging. The first set consists of determining the neurobiological substrate of the signals measured by functional neuroimaging. The second set concerns developing systems-level models of functional neuroimaging data. The third set of questions involves integrating functional neuroimaging data across modalities, with a particular emphasis on relating electromagnetic with hemodynamic data. The last set asks how one can relate systems-level models to those at the neuronal and neural ensemble levels. We feel that there are ample reasons to link functional neuroimaging and neural modeling, and that combining the results from the two disciplines will further our understanding of the central nervous system. © 1994 Wiley-Liss, Inc. This article is a US Government work and, as such, is in the public domain in the United States of America.
NASA Astrophysics Data System (ADS)
Nasonova, O. N.; Gusev, Ye. M.; Kovalev, Ye. E.
2009-04-01
Global estimates of the components of terrestrial water balance depend on the estimation technique and on the global observational data sets used for this purpose. Land surface modelling is an up-to-date and powerful tool for such estimates. However, the results of modelling are affected by the quality of both the model and the input information (including meteorological forcing data and model parameters). The latter is based on available global data sets containing meteorological data, land-use information, and soil and vegetation characteristics. There are now many global data sets, which differ in spatial and temporal resolution, as well as in accuracy and reliability. Evidently, uncertainties in global data sets will influence the results of model simulations, but to what extent? The present work is an attempt to investigate this issue. The work is based on the land surface model SWAP (Soil Water - Atmosphere - Plants) and global 1-degree data sets on meteorological forcing data and the land surface parameters, provided within the framework of the Second Global Soil Wetness Project (GSWP-2). The 3-hourly near-surface meteorological data (for the period from 1 July 1982 to 31 December 1995) are based on reanalyses and gridded observational data used in the International Satellite Land-Surface Climatology Project (ISLSCP) Initiative II. Following the GSWP-2 strategy, we used a number of alternative global forcing data sets to perform different sensitivity experiments (with six alternative versions of precipitation, four versions of radiation, two pure reanalysis products and two fully hybridized products of meteorological data). To reveal the influence of model parameters on simulations, in addition to the GSWP-2 parameter data sets, we produced two alternative global data sets with soil parameters on the basis of their relationships with the content of clay and sand in the soil. After this, the sensitivity experiments with three different sets of parameters were performed. As a result, 16 variants of global annual estimates of water balance components were obtained. Application of alternative data sets on radiation, precipitation, and soil parameters allowed us to reveal the influence of uncertainties in input data on global estimates of water balance components.
Multi-model ensemble hydrologic prediction using Bayesian model averaging
NASA Astrophysics Data System (ADS)
Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh
2007-05-01
A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models. BMA is a statistical procedure that infers consensus predictions by weighing individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using the Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions. Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.
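The BMA combination itself can be summarized in a few lines: the consensus prediction is the weight-averaged member prediction, and the predictive variance adds the between-model spread to the weighted within-model variance. The sketch below assumes the weights and member variances have already been estimated (in the study this is done via likelihood measures on Box-Cox transformed flows); all numbers are illustrative.

```python
import numpy as np

def bma_prediction(member_preds, weights, member_vars):
    """BMA mean and variance of the consensus prediction at each time step.

    member_preds: (n_models, n_times) ensemble member predictions
    weights:      (n_models,) BMA weights summing to one
    member_vars:  (n_models,) predictive variances of the members
    """
    weights = np.asarray(weights)
    mean = weights @ member_preds
    between = weights @ (member_preds - mean) ** 2   # between-model spread
    within = weights @ np.asarray(member_vars)       # weighted member variance
    return mean, between + within

preds = np.array([[10.0, 12.0], [11.0, 13.0], [9.0, 14.0]])  # 3 models, 2 days
mean, var = bma_prediction(preds, weights=[0.5, 0.3, 0.2],
                           member_vars=[1.0, 1.5, 2.0])
```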
Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA
NASA Astrophysics Data System (ADS)
Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz
2018-04-01
External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation in estimating a more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, i.e. respectively for model construction and for model testing. On the other hand, iterative random sampling (IRS) has not been the favored choice, though it is theoretically more likely to produce reliable estimation. The aim of this preliminary work is to compare the performances of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (APF,error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). The error rate of each model was estimated repeatedly via: (a) AP on the full data (APF,error); (b) AP on the training set (APS,error); and (c) ET on the respective test set (ETS,error). A good PLS2-DA model is expected to produce APS,error and ETS,error values that are similar to the APF,error. Bearing that in mind, the similarities between (a) APS,error vs. APF,error; (b) ETS,error vs. APF,error; and (c) APS,error vs. ETS,error were evaluated using correlation tests (i.e. Pearson and Spearman's rank test), using series of PLS2-DA models computed from the KS-set and IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarities between the internal and external error rates than the respective KS-set, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS in sampling a representative training set.
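For reference, the Kennard-Stone rule selects the two most distant samples first and then repeatedly adds the sample whose nearest already-selected neighbour is farthest away. A minimal sketch, with synthetic spectra standing in for the ATR-FTIR data, is given below.

```python
import numpy as np

def kennard_stone(X, n_train):
    """Select n_train representative rows of X by the Kennard-Stone rule."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]
    while len(selected) < n_train:
        remaining = [k for k in range(len(X)) if k not in selected]
        # Add the sample whose nearest already-selected neighbour is farthest.
        nearest = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(nearest))])
    return np.array(selected)

rng = np.random.default_rng(3)
spectra = rng.random((50, 200))                 # 50 spectra, 200 wavenumbers
train_idx = kennard_stone(spectra, n_train=35)  # roughly a 7:3 split
```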
For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that defines the similarity between the training set molecules and the test set compound ...
A developmental model of recreation choice behavior
Daniel R. Williams
1985-01-01
Recreation choices are viewed as including, at least implicitly, a selection of an activity, a setting, and a set of companions. With development these three elements become increasingly differentiated from one another. The model is tested by examining the perceived similarities among a set of 15 recreation choices depicted in color slides.
ERIC Educational Resources Information Center
Lin, Shinyi; Chen, Yu-Chuan
2013-01-01
In integrating theoretical perspectives of self-determination and goal-setting, this study proposes a conceptual model with moderating and mediating effects exploring gender issue in autonomy-supportive learning in higher education as research context. In the proposed model, goal-setting attributes, i.e., individual determinants, social…
Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F
2015-01-01
Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variations in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
Ozone reference models for the middle atmosphere (new CIRA)
NASA Technical Reports Server (NTRS)
Keating, G. M.; Pitts, M. C.; Young, D. F.
1989-01-01
Models of ozone vertical structure were generated based on multiple satellite data sets. The very good absolute accuracy of the individual data sets allowed the data to be directly combined to generate these models. The data used for generation of these models are from some of the most recent satellite measurements over the period 1978 to 1983. A discussion is provided of validation and error analyses of these data sets. Also, inconsistencies in data sets brought about by temporal variations or other factors are indicated. The models cover the pressure range from 20 to 0.003 mb (25 to 90 km). The models for pressures less than 0.5 mb represent only the day side and are only provisional since there was limited longitudinal coverage at these levels. The models start near 25 km in accord with previous COSPAR International Reference Atmosphere (CIRA) models. Models are also provided of ozone mixing ratio as a function of height. The monthly standard deviation and interannual variations relative to zonal means are also provided. In addition to the models of monthly latitudinal variations in vertical structure based on satellite measurements, monthly models of total column ozone and its characteristic variability as a function of latitude based on four years of Nimbus 7 measurements, models of the relationship between vertical structure and total column ozone, and a midlatitude annual mean model are incorporated in this set of ozone reference atmospheres. Various systematic variations are discussed, including the annual, semiannual, and quasi-biennial oscillations, and diurnal, longitudinal, and solar-activity response variations.
A Comparison of Three Approaches to Model Human Behavior
NASA Astrophysics Data System (ADS)
Palmius, Joel; Persson-Slumpi, Thomas
2010-11-01
One way of studying social processes is through the use of simulations. The use of simulations for this purpose has been established as its own field, social simulations, and has been used for studying a variety of phenomena. A simulation of a social setting can serve as an aid for thinking about that social setting, and for experimenting with different parameters and studying the outcomes caused by them. When using the simulation as an aid for thinking and experimenting, the chosen simulation approach will implicitly steer the simulationist towards thinking in a certain fashion in order to fit the model. To study the implications of model choice on the understanding of a setting where human anticipation comes into play, a simulation scenario of a coffee room was constructed using three different simulation approaches: Cellular Automata, Systems Dynamics and Agent-based modeling. The practical implementations of the models were done in three different simulation packages: Stella for Systems Dynamics, CaFun for Cellular Automata and SesAM for Agent-based modeling. The models were evaluated both using Randers' criteria for model evaluation, and through introspection where the authors reflected upon how their understanding of the scenario was steered through the model choice. Further, the software used for implementing the simulation models was evaluated, and practical considerations for the choice of software package are listed. It is concluded that the models have very different strengths. The Agent-based modeling approach offers the most intuitive support for thinking about and modeling a social setting where the behavior of the individual is in focus. The Systems Dynamics model would be preferable in situations where populations and large groups are studied as wholes, but where individual behavior is of less concern. The Cellular Automata models would be preferable where processes need to be studied from the basis of a small set of very simple rules. It is further concluded that in most social simulation settings the Agent-based modeling approach would be the probable choice, since the other models do not offer much in the way of supporting the modeling of the anticipatory behavior of humans acting in an organization.
VizieR Online Data Catalog: A catalog of exoplanet physical parameters (Foreman-Mackey+, 2014)
NASA Astrophysics Data System (ADS)
Foreman-Mackey, D.; Hogg, D. W.; Morton, T. D.
2017-05-01
The first ingredient for any probabilistic inference is a likelihood function, a description of the probability of observing a specific data set given a set of model parameters. In this particular project, the data set is a catalog of exoplanet measurements and the model parameters are the values that set the shape and normalization of the occurrence rate density. (2 data files).
Learning Setting-Generalized Activity Models for Smart Spaces
Cook, Diane J.
2011-01-01
The data mining and pervasive computing technologies found in smart homes offer unprecedented opportunities for providing context-aware services, including health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to provide these services, smart environment algorithms need to recognize and track activities that people normally perform as part of their daily routines. However, activity recognition has typically involved gathering and labeling large amounts of data in each setting to learn a model for activities in that setting. We hypothesize that generalized models can be learned for common activities that span multiple environment settings and resident types. We describe our approach to learning these models and demonstrate the approach using eleven CASAS datasets collected in seven environments. PMID:21461133
Nowakowska, Marzena
2017-04-01
The development of the Bayesian logistic regression model classifying the road accident severity is discussed. The already exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with the original idea of a Boot prior proposal, are investigated when no expert opinion has been available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained logistic Bayesian models are assessed on the basis of a deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. The verification of the model accuracy has been based on sensitivity, specificity and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have a better classification quality than the ones obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method of moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.
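The verification measures mentioned above reduce to simple counts on the test set. The sketch below computes sensitivity, specificity, and their harmonic mean from binary labels; the example labels are invented.

```python
import numpy as np

def accuracy_measures(y_true, y_pred):
    """Sensitivity, specificity and their harmonic mean from binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, 2 * sens * spec / (sens + spec)

# e.g. severity predictions on a held-out test set (1 = severe accident)
print(accuracy_measures([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
```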
Precise determination of time to reach viral load set point after acute HIV-1 infection.
Huang, Xiaojie; Chen, Hui; Li, Wei; Li, Haiying; Jin, Xia; Perelson, Alan S; Fox, Zoe; Zhang, Tong; Xu, Xiaoning; Wu, Hao
2012-12-01
The HIV viral load set point has long been used as a prognostic marker of disease progression and more recently as an end-point parameter in HIV vaccine clinical trials. The definition of set point, however, is variable. Moreover, the earliest time at which the set point is reached after the onset of infection has never been clearly defined. In this study, we obtained sequential plasma viral load data from 60 acutely HIV-infected Chinese patients among a cohort of men who have sex with men, mathematically determined viral load set point levels, and estimated time to attain set point after infection. We also compared the results derived from our models and that obtained from an empirical method. With novel uncomplicated mathematic model, we discovered that set points may vary from 21 to 119 days dependent on the patients' initial viral load trajectory. The viral load set points were 4.28 ± 0.86 and 4.25 ± 0.87 log10 copies per milliliter (P = 0.08), respectively, as determined by our model and an empirical method, suggesting an excellent agreement between the old and new methods. We provide a novel method to estimate viral load set point at the very early stage of HIV infection. Application of this model can accurately and reliably determine the set point, thus providing a new tool for physicians to better monitor early intervention strategies in acutely infected patients and scientists to rationally design preventative vaccine studies.
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall, the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the most recent one. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
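The leave-one-out protocol used for the comparison is straightforward to reproduce. The sketch below uses scikit-learn's LeaveOneOut splitter with a plain logistic regression standing in for a simple linear classifier (it is not the published Bnet model), on invented ion-channel features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Invented ion-channel features (e.g. block margins for three channels)
rng = np.random.default_rng(4)
X = rng.random((60, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.standard_normal(60) > 0.3).astype(int)

# Leave-one-out: each compound is predicted by a model trained on all others.
preds = cross_val_predict(LogisticRegression(), X, y, cv=LeaveOneOut())
print("LOO accuracy:", (preds == y).mean())
```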
Rácz, A; Bajusz, D; Héberger, K
2015-01-01
Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
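A sum of ranking differences compares how each performance parameter ranks the models against the ranking given by a reference (here the row average, a common choice). A small sketch with invented scores:

```python
import numpy as np
from scipy.stats import rankdata

def srd(values, reference):
    """Sum of ranking differences of one column against a reference column."""
    return np.abs(rankdata(values) - rankdata(reference)).sum()

# Rows: models; columns: performance parameters (invented values).
scores = np.array([[0.81, 0.78, 0.70],
                   [0.79, 0.80, 0.72],
                   [0.60, 0.65, 0.55],
                   [0.83, 0.77, 0.74]])
reference = scores.mean(axis=1)        # consensus reference: row average
for j in range(scores.shape[1]):
    print(f"parameter {j}: SRD = {srd(scores[:, j], reference):.1f}")
```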
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
NASA Astrophysics Data System (ADS)
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of the parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude some of the parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity for a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter sets that minimize the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
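The abstract does not give the exact form of the closeness measure, so the sketch below shows one plausible reading, assumed for illustration only: parameter differences to neighbouring basins are scaled by the a priori values, combined with similarity weights, and the Pareto member with the smallest weighted distance is selected.

```python
import numpy as np

def closeness(candidate, neighbour_sets, apriori, weights):
    """Weighted distance of one Pareto parameter set from neighbouring basins.

    candidate:      (n_params,) one Pareto solution for the basin at hand
    neighbour_sets: (n_basins, n_params) parameter sets of nearby basins
    apriori:        (n_params,) a priori parameter values used for scaling
    weights:        (n_params,) larger for parameters assumed similar across basins
    """
    scaled = np.abs(candidate - neighbour_sets) / apriori
    return float(np.sum(weights * scaled.mean(axis=0)))

rng = np.random.default_rng(5)
pareto = rng.random((40, 6))        # 40 non-inferior parameter sets
neighbours = rng.random((3, 6))     # sets already adopted in nearby basins
apriori = np.full(6, 0.5)
w = np.array([1.0, 1.0, 0.8, 0.5, 0.3, 0.3])
best = min(range(len(pareto)),
           key=lambda i: closeness(pareto[i], neighbours, apriori, w))
```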
Chemical structure-based predictive model for methanogenic anaerobic biodegradation potential.
Meylan, William; Boethling, Robert; Aronson, Dallas; Howard, Philip; Tunkel, Jay
2007-09-01
Many screening-level models exist for predicting aerobic biodegradation potential from chemical structure, but anaerobic biodegradation generally has been ignored by modelers. We used a fragment contribution approach to develop a model for predicting biodegradation potential under methanogenic anaerobic conditions. The new model has 37 fragments (substructures) and classifies a substance as either fast or slow, relative to the potential to be biodegraded in the "serum bottle" anaerobic biodegradation screening test (Organization for Economic Cooperation and Development Guideline 311). The model correctly classified 90, 77, and 91% of the chemicals in the training set (n = 169) and two independent validation sets (n = 35 and 23), respectively. Accuracy of predictions of fast and slow degradation was equal for training-set chemicals, but fast-degradation predictions were less accurate than slow-degradation predictions for the validation sets. Analysis of the signs of the fragment coefficients for this and the other (aerobic) Biowin models suggests that in the context of simple group contribution models, the majority of positive and negative structural influences on ultimate degradation are the same for aerobic and methanogenic anaerobic biodegradation.
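A fragment-contribution classifier of this kind amounts to a linear score over substructure counts, thresholded into fast/slow classes. The sketch below illustrates the mechanics only; the fragments, coefficients, and threshold are placeholders, not the published 37-fragment model.

```python
import numpy as np

def anaerobic_class(fragment_counts, coefficients, intercept):
    """Fragment-contribution score thresholded into fast/slow classes.

    All fragments, coefficients, and the 0.5 threshold below are placeholders.
    """
    score = intercept + float(np.dot(fragment_counts, coefficients))
    return ("fast" if score >= 0.5 else "slow"), score

coeffs = np.array([0.20, -0.35, 0.10])   # e.g. ester, aromatic Cl, linear alkyl
print(anaerobic_class(np.array([1, 0, 2]), coeffs, intercept=0.3))
```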
Qin, Zijian; Wang, Maolin; Yan, Aixia
2017-07-01
In this study, quantitative structure-activity relationship (QSAR) models using various descriptor sets and training/test set selection methods were explored to predict the bioactivity of hepatitis C virus (HCV) NS3/4A protease inhibitors by using a multiple linear regression (MLR) and a support vector machine (SVM) method. 512 HCV NS3/4A protease inhibitors and their IC50 values, which were determined by the same FRET assay, were collected from the reported literature to build a dataset. All the inhibitors were represented with nine selected global and 12 2D property-weighted autocorrelation descriptors calculated from the program CORINA Symphony. The dataset was divided into a training set and a test set by a random and a Kohonen's self-organizing map (SOM) method. The correlation coefficients (r²) of the training sets and test sets were 0.75 and 0.72 for the best MLR model, and 0.87 and 0.85 for the best SVM model, respectively. In addition, a series of sub-dataset models were also developed. The performances of all the best sub-dataset models were better than those of the whole dataset models. We believe that the combination of the best sub- and whole dataset SVM models can be used as reliable lead designing tools for new NS3/4A protease inhibitor scaffolds in a drug discovery pipeline. Copyright © 2017 Elsevier Ltd. All rights reserved.
Reference set design for relational modeling of fuzzy systems
NASA Astrophysics Data System (ADS)
Lapohos, Tibor; Buchal, Ralph O.
1994-10-01
One of the keys to the successful relational modeling of fuzzy systems is the proper design of fuzzy reference sets. This has been discussed throughout the literature. In the framework of modeling a stochastic system, we analyze the problem numerically. First, we briefly describe the relational model and present the performance of the modeling in the most trivial case: the reference sets are triangle-shaped. Next, we present a known fuzzy reference set generator algorithm (FRSGA) which is based on the fuzzy c-means (Fc-M) clustering algorithm. In the second section of this chapter we improve the previous FRSGA by adding a constraint to the Fc-M algorithm (modified Fc-M or MFc-M): two cluster centers are forced to coincide with the domain limits. This is needed to obtain properly shaped extreme linguistic reference values. We apply this algorithm to uniformly discretized domains of the variables involved. The fuzziness of the reference sets produced by both Fc-M and MFc-M is determined by a parameter, which in our experiments is modified iteratively. Each time, a new model is created and its performance analyzed. For certain algorithm parameter values, both of these algorithms have shortcomings. To eliminate the drawbacks of these two approaches, we develop a completely new generator algorithm for reference sets, which we call Polyline. This algorithm and its performance are described in the last section. In all three cases, the modeling is performed for a variety of operators used in the inference engine and two defuzzification methods. Therefore, our results depend neither on the system model order nor on the experimental setup.
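To make the modification concrete, the sketch below runs a one-dimensional fuzzy c-means loop and, at each iteration, clamps the two extreme cluster centres to the domain limits, which is the constraint described for MFc-M. The fuzziness parameter, iteration count, and lack of a convergence check are simplifications.

```python
import numpy as np

def mfcm_1d(x, c, m=2.0, iters=100):
    """1-D fuzzy c-means with the extreme centres clamped to the domain limits."""
    lo, hi = x.min(), x.max()
    centres = np.linspace(lo, hi, c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        u = d ** (-2 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
        um = u ** m
        centres = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        centres.sort()
        centres[0], centres[-1] = lo, hi             # the added constraint
    return centres, u

x = np.linspace(0.0, 1.0, 101)        # uniformly discretised variable domain
centres, memberships = mfcm_1d(x, c=5)
```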
NASA Astrophysics Data System (ADS)
Alzubaidi, Mohammad; Balasubramanian, Vineeth; Patel, Ameet; Panchanathan, Sethuraman; Black, John A., Jr.
2012-03-01
Inductive learning refers to machine learning algorithms that learn a model from a set of training data instances. Any test instance is then classified by comparing it to the learned model. When the set of training instances lend themselves well to modeling, the use of a model substantially reduces the computation cost of classification. However, some training data sets are complex, and do not lend themselves well to modeling. Transductive learning refers to machine learning algorithms that classify test instances by comparing them to all of the training instances, without creating an explicit model. This can produce better classification performance, but at a much higher computational cost. Medical images vary greatly across human populations, constituting a data set that does not lend itself well to modeling. Our previous work showed that the wide variations seen across training sets of "normal" chest radiographs make it difficult to successfully classify test radiographs with an inductive (modeling) approach, and that a transductive approach leads to much better performance in detecting atypical regions. The problem with the transductive approach is its high computational cost. This paper develops and demonstrates a novel semi-transductive framework that can address the unique challenges of atypicality detection in chest radiographs. The proposed framework combines the superior performance of transductive methods with the reduced computational cost of inductive methods. Our results show that the proposed semitransductive approach provides both effective and efficient detection of atypical regions within a set of chest radiographs previously labeled by Mayo Clinic expert thoracic radiologists.
The Dynamics of Phonological Planning
ERIC Educational Resources Information Center
Roon, Kevin D.
2013-01-01
This dissertation proposes a dynamical computational model of the timecourse of phonological parameter setting. In the model, phonological representations embrace phonetic detail, with phonetic parameters represented as activation fields that evolve over time and determine the specific parameter settings of a planned utterance. Existing models of…
Modeling Zone-3 Protection with Generic Relay Models for Dynamic Contingency Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Qiuhua; Vyakaranam, Bharat GNVSR; Diao, Ruisheng
This paper presents a cohesive approach for calculating and coordinating the settings of multiple zone-3 protections for dynamic contingency analysis. The zone-3 protections are represented by generic distance relay models. A two-step approach for determining zone-3 relay settings is proposed. The first step is to calculate the settings, particularly the reach, of each zone-3 relay individually by iteratively running line open-end fault short circuit analysis; the blinder is also employed and properly set to meet the industry standard under extreme loading conditions. The second step is to systematically coordinate the protection settings of the zone-3 relays. The main objective of this coordination step is to address the over-reaching issues. We have developed a tool to automate the proposed approach and generate the settings of all distance relays in a PSS/E dyr format file. The calculated zone-3 settings have been tested on a modified IEEE 300 system using a dynamic contingency analysis tool (DCAT).
LexValueSets: An Approach for Context-Driven Value Sets Extraction
Pathak, Jyotishman; Jiang, Guoqian; Dwarkanath, Sridhar O.; Buntrock, James D.; Chute, Christopher G.
2008-01-01
The ability to model, share and re-use value sets across multiple medical information systems is an important requirement. However, generating value sets semi-automatically from a terminology service is still an unresolved issue, in part due to the lack of linkage to clinical context patterns that provide the constraints in defining a concept domain and invocation of value sets extraction. Towards this goal, we develop and evaluate an approach for context-driven automatic value sets extraction based on a formal terminology model. The crux of the technique is to identify and define the context patterns from various domains of discourse and leverage them for value set extraction using two complementary ideas based on (i) local terms provided by the Subject Matter Experts (extensional) and (ii) semantic definition of the concepts in coding schemes (intensional). A prototype was implemented based on SNOMED CT rendered in the LexGrid terminology model and a preliminary evaluation is presented. PMID:18998955
Evaluating mallard adaptive management models with time series
Conn, P.B.; Kendall, W.L.
2004-01-01
Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models given more weight in the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was 1 of the predictor models as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations, the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when the model is not used to generate data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model sets. All of these suggestions should help lower the probability of erroneous learning in mallard AHM and adaptive management in general.
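The model-weight updating that drives this learning is essentially sequential Bayes: each predictor model's weight is multiplied by the likelihood of the new observation under that model's predictive distribution and then renormalized. A minimal sketch with Gaussian predictive distributions and invented numbers:

```python
import numpy as np
from scipy.stats import norm

def update_weights(weights, predictions, pred_sds, observation):
    """One Bayesian update of predictor-model weights after a new observation.

    weights:     prior model weights (sum to one)
    predictions: each model's predicted population size
    pred_sds:    predictive standard deviations (the uncertainty structure)
    observation: the observed population size
    """
    likelihood = norm.pdf(observation, loc=predictions, scale=pred_sds)
    posterior = weights * likelihood
    return posterior / posterior.sum()

w = np.full(4, 0.25)                      # four hypotheses start equally weighted
preds = np.array([7.9, 8.4, 9.1, 8.8])    # illustrative predictions (millions)
sds = np.full(4, 0.6)
w = update_weights(w, preds, sds, observation=8.6)
```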
Setting Priorities: A Handbook of Alternative Techniques.
ERIC Educational Resources Information Center
Price, Nelson C.
Six models for setting priorities are presented in a workbook format with exercises for evaluating or practicing five techniques. In the San Mateo model one sets priorities, clarifies priority purpose, lists items, determines criteria, lists items and criteria on a rating sheet, studies all information on items, rates each item, tallies results,…
ERIC Educational Resources Information Center
Dalton, William Edward
Described is a project designed to make government lessons and economics more appealing to sixth-grade students by having them set up and run a model city. General preparation procedures and set-up of the project, specific lesson plans, additional activities, and project evaluation are examined. An actual 3-dimensional model city was set up on…
NASA Astrophysics Data System (ADS)
Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.
We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using it to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict response of an existing sensing film to new target analytes.
NASA Astrophysics Data System (ADS)
Sun, Hao; Wang, Cheng; Wang, Boliang
2011-02-01
We present a hybrid generative-discriminative learning method for human action recognition from video sequences. Our model combines a bag-of-words component with supervised latent topic models. A video sequence is represented as a collection of spatiotemporal words by extracting space-time interest points and describing these points using both shape and motion cues. The supervised latent Dirichlet allocation (sLDA) topic model, which employs discriminative learning using labeled data under a generative framework, is introduced to discover the latent topic structure that is most relevant to action categorization. The proposed algorithm retains most of the desirable properties of generative learning while increasing the classification performance through a discriminative setting. It has also been extended to exploit both labeled data and unlabeled data to learn human actions under a unified framework. We test our algorithm on three challenging data sets: the KTH human motion data set, the Weizmann human action data set, and a ballet data set. Our results are either comparable to or significantly better than previously published results on these data sets and reflect the promise of hybrid generative-discriminative learning approaches.
A global data set of soil particle size properties
NASA Technical Reports Server (NTRS)
Webb, Robert S.; Rosenzweig, Cynthia E.; Levine, Elissa R.
1991-01-01
A standardized global data set of soil horizon thicknesses and textures (particle size distributions) was compiled. This data set will be used by the improved ground hydrology parameterization designed for the Goddard Institute for Space Studies General Circulation Model (GISS GCM) Model 3. The data set specifies the top and bottom depths and the percent abundance of sand, silt, and clay of individual soil horizons in each of the 106 soil types cataloged for nine continental divisions. When combined with the World Soil Data File, the result is a global data set of variations in physical properties throughout the soil profile. These properties are important in the determination of water storage in individual soil horizons and exchange of water with the lower atmosphere. The incorporation of this data set into the GISS GCM should improve model performance by including more realistic variability in land-surface properties.
Estimating Single-Event Logic Cross Sections in Advanced Technologies
NASA Astrophysics Data System (ADS)
Harrington, R. C.; Kauppila, J. S.; Warren, K. M.; Chen, Y. P.; Maharrey, J. A.; Haeffner, T. D.; Loveless, T. D.; Bhuva, B. L.; Bounasser, M.; Lilja, K.; Massengill, L. W.
2017-08-01
Reliable estimation of logic single-event upset (SEU) cross section is becoming increasingly important for predicting the overall soft error rate. As technology scales and single-event transient (SET) pulse widths shrink to widths on the order of the setup-and-hold time of flip-flops, the probability of latching an SET as an SEU must be reevaluated. In this paper, previous assumptions about the relationship of SET pulsewidth to the probability of latching an SET are reconsidered and a model for transient latching probability has been developed for advanced technologies. A method using the improved transient latching probability and SET data is used to predict logic SEU cross section. The presented model has been used to estimate combinational logic SEU cross sections in 32-nm partially depleted silicon-on-insulator (SOI) technology given experimental heavy-ion SET data. Experimental SEU data show good agreement with the model presented in this paper.
NASA Astrophysics Data System (ADS)
Tsougos, Ioannis; Mavroidis, Panayiotis; Theodorou, Kyriaki; Rajala, J.; Pitkänen, M. A.; Holli, K.; Ojala, A. T.; Hyödynmaa, S.; Järvenpää, Ritva; Lind, Bengt K.; Kappas, Constantin
2006-02-01
The choice of the appropriate model and parameter set in determining the relation between the incidence of radiation pneumonitis and dose distribution in the lung is of great importance, especially in the case of breast radiotherapy where the observed incidence is fairly low. From our previous study based on 150 breast cancer patients, where the fits of dose-volume models to clinical data were estimated (Tsougos et al 2005 Evaluation of dose-response models and parameters predicting radiation induced pneumonitis using clinical data from breast cancer radiotherapy Phys. Med. Biol. 50 3535-54), one could get the impression that the relative seriality model is significantly better than the LKB NTCP model. However, the estimation of the different NTCP models was based on their goodness-of-fit on clinical data, using various sets of published parameters from other groups, and this fact may provisionally justify the results. Hence, we sought to investigate the LKB model further, by applying different published parameter sets for the very same group of patients, in order to be able to compare the results. It was shown that, depending on the parameter set applied, the LKB model is able to predict the incidence of radiation pneumonitis with acceptable accuracy, especially when implemented on a sub-group of patients (120) receiving an equivalent uniform dose ($\bar{D}_{EUD}$) higher than 8 Gy. In conclusion, the goodness-of-fit of a certain radiobiological model on a given clinical case is closely related to the selection of the proper scoring criteria and parameter set as well as to the compatibility of the clinical case from which the data were derived.
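For reference, the LKB NTCP model referred to above is conventionally written in terms of a generalized equivalent uniform dose and the parameter set (n, m, TD50):

```latex
\mathrm{gEUD} = \Bigl(\sum_i v_i D_i^{1/n}\Bigr)^{n}, \qquad
t = \frac{\mathrm{gEUD} - TD_{50}}{m\,TD_{50}}, \qquad
\mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\,dx
```

where v_i is the fractional organ volume receiving dose D_i; the different published parameter sets discussed above enter only through n, m, and TD50.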
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.
2007-12-01
We recently introduced a method to rigorously test the statistical compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database (Khokhlov et al., 2001, 2006). Applying this method to test (TAF+PSV) models against synthetic data produced from those shows that the method is very efficient at discriminating models, and very sensitive, provided that data errors are properly taken into account. This prompted us to test a variety of published combined (TAF+PSV) models against a test Brunhes stable polarity data set extracted from the Quidelleur et al. (1994) data base. Not surprisingly, ignoring data errors leads all models to be rejected. But taking data errors into account leads to the stimulating conclusion that at least one (TAF+PSV) model appears to be compatible with the selected data set, this model being purely axisymmetric. This result shows that in practice also, and with the data bases currently available, the method can discriminate various candidate models and decide which actually best fits a given data set. But it also shows that likely non-zonal signatures of non-homogeneous boundary conditions imposed by the mantle are difficult to identify as statistically robust from paleomagnetic directional data sets. In the present paper, we will discuss the possibility that such signatures could eventually be identified as robust with the help of more recent data sets (such as the one put together under the collaborative "TAFI" effort, see e.g. Johnson et al. abstract #GP21A-0013, AGU Fall Meeting, 2005) or by taking additional information into account (such as the possible coincidence of non-zonal time-averaged field patterns with analogous patterns in the modern field).
Are there two processes in reasoning? The dimensionality of inductive and deductive inferences.
Stephens, Rachel G; Dunn, John C; Hayes, Brett K
2018-03-01
Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
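A minimal sketch of the winning model class, one latent argument-strength dimension with separate decision criteria for induction and deduction judgments, is given below; the Gaussian-noise form and all parameter values are illustrative assumptions, not the fitted model.

```python
from scipy.stats import norm

# Hedged sketch: a single argument-strength axis; induction and deduction use
# different criteria on the same axis (the one-dimensional account described above).

def endorsement_prob(strength, criterion, noise_sd=1.0):
    """Probability of endorsing an argument whose noisy strength exceeds the criterion."""
    return norm.cdf((strength - criterion) / noise_sd)

valid_strength, invalid_strength = 1.5, 0.5
c_induction, c_deduction = 0.8, 1.2   # deduction judged against a stricter criterion

print(endorsement_prob(valid_strength, c_induction))    # induction: high endorsement
print(endorsement_prob(valid_strength, c_deduction))    # deduction: lower endorsement
print(endorsement_prob(invalid_strength, c_induction))  # invalid arguments endorsed less
```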
MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
2016-08-03
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
Barycentric parameterizations for isotropic BRDFs.
Stark, Michael M; Arvo, James; Smits, Brian
2005-01-01
A bidirectional reflectance distribution function (BRDF) is often expressed as a function of four real variables: two spherical coordinates in each of the "incoming" and "outgoing" directions. However, many BRDFs reduce to functions of fewer variables. For example, isotropic reflection can be represented by a function of three variables. Some BRDF models can be reduced further. In this paper, we introduce new sets of coordinates which we use to reduce the dimensionality of several well-known analytic BRDFs as well as empirically measured BRDF data. The proposed coordinate systems are barycentric with respect to a triangular support with a direct physical interpretation. One coordinate set is based on the BRDF model proposed by Lafortune. Another set, based on a model of Ward, is associated with the "halfway" vector common in analytical BRDF formulas. Through these coordinate sets we establish lower bounds on the approximation error inherent in the models on which they are based. We present a third set of coordinates, not based on any analytical model, that performs well in approximating measured data. Finally, our proposed variables suggest novel ways of constructing and visualizing BRDFs.
Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.
Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira
2016-01-01
Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, a global sampling model with sampling noise, and a limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
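The three observer models can be illustrated with a short simulation; the set size, noise level, and number of sampled items below are arbitrary choices, not the study's values.

```python
import numpy as np

# Hedged sketch of the three ideal-observer variants applied to mean-size judgments:
# (1) global sampling without noise, (2) global sampling with per-item noise,
# (3) limited sampling of only a few items.

rng = np.random.default_rng(0)
items = rng.normal(1.0, 0.2, size=(10000, 8))            # 8-item displays

true_mean = items.mean(axis=1)
global_noiseless = true_mean                              # samples every item exactly
global_noisy = (items + rng.normal(0, 0.1, items.shape)).mean(axis=1)
limited = items[:, :4].mean(axis=1)                       # samples only 4 of 8 items

for name, est in [("noiseless global", global_noiseless),
                  ("noisy global", global_noisy),
                  ("limited", limited)]:
    print(name, np.mean((est - true_mean) ** 2))          # mean squared estimation error
```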
Heating and dynamics of two flare loop systems observed by AIA and EIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y.; Ding, M. D.; Qiu, J., E-mail: yingli@nju.edu.cn
2014-02-01
We investigate heating and evolution of flare loops in a C4.7 two-ribbon flare on 2011 February 13. From Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) imaging observations, we can identify two sets of loops. Hinode/EUV Imaging Spectrometer (EIS) spectroscopic observations reveal blueshifts at the feet of both sets of loops. The evolution and dynamics of the two sets are quite different. The first set of loops exhibits blueshifts for about 25 minutes followed by redshifts, while the second set shows stronger blueshifts, which are maintained for about one hour. The UV 1600 observation by AIA also shows that the feet of the second set of loops brighten twice. These suggest that continuous heating may be present in the second set of loops. We use spatially resolved UV light curves to infer heating rates in the few tens of individual loops comprising the two loop systems. With these heating rates, we then compute plasma evolution in these loops with the 'enthalpy-based thermal evolution of loops' model. The results show that, for the first set of loops, the synthetic EUV light curves from the model compare favorably with the observed light curves in six AIA channels and eight EIS spectral lines, and the computed mean enthalpy flow velocities also agree with the Doppler shift measurements by EIS. For the second set of loops modeled with twice-heating, there are some discrepancies between modeled and observed EUV light curves in low-temperature bands, and the model does not fully produce the prolonged blueshift signatures as observed. We discuss possible causes for the discrepancies.
A characterization of linearly repetitive cut and project sets
NASA Astrophysics Data System (ADS)
Haynes, Alan; Koivusalo, Henna; Walton, James
2018-02-01
For the development of a mathematical theory which can be used to rigorously investigate physical properties of quasicrystals, it is necessary to understand regularity of patterns in special classes of aperiodic point sets in Euclidean space. In one dimension, prototypical mathematical models for quasicrystals are provided by Sturmian sequences and by point sets generated by substitution rules. Regularity properties of such sets are well understood, thanks mostly to well known results by Morse and Hedlund, and physicists have used this understanding to study one dimensional random Schrödinger operators and lattice gas models. A key fact which plays an important role in these problems is the existence of a subadditive ergodic theorem, which is guaranteed when the corresponding point set is linearly repetitive. In this paper we extend the one-dimensional model to cut and project sets, which generalize Sturmian sequences in higher dimensions, and which are frequently used in mathematical and physical literature as models for higher dimensional quasicrystals. By using a combination of algebraic, geometric, and dynamical techniques, together with input from higher dimensional Diophantine approximation, we give a complete characterization of all linearly repetitive cut and project sets with cubical windows. We also prove that these are precisely the collection of such sets which satisfy subadditive ergodic theorems. The results are explicit enough to allow us to apply them to known classical models, and to construct linearly repetitive cut and project sets in all pairs of dimensions and codimensions in which they exist. Research supported by EPSRC grants EP/L001462, EP/J00149X, EP/M023540. HK also gratefully acknowledges the support of the Osk. Huttunen foundation.
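For readers new to the construction, a standard form of a cut and project set is stated here for orientation (the paper works with this construction restricted to cubical windows):

```latex
Y \;=\; \{\, \pi(n) \;:\; n \in \mathbb{Z}^{k},\ \ \pi_{\mathrm{int}}(n) \in W \,\}
```

where $\pi$ and $\pi_{\mathrm{int}}$ are projections of $\mathbb{R}^{k}$ onto a $d$-dimensional "physical" subspace and its $(k-d)$-dimensional "internal" complement, and $W$ is the window in the internal space.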
NASA Astrophysics Data System (ADS)
Laiti, L.; Mallucci, S.; Piccolroaz, S.; Bellin, A.; Zardi, D.; Fiori, A.; Nikulin, G.; Majone, B.
2018-03-01
Assessing the accuracy of gridded climate data sets is highly relevant to climate change impact studies, since evaluation, bias correction, and statistical downscaling of climate models commonly use these products as reference. Among all impact studies those addressing hydrological fluxes are the most affected by errors and biases plaguing these data. This paper introduces a framework, coined Hydrological Coherence Test (HyCoT), for assessing the hydrological coherence of gridded data sets with hydrological observations. HyCoT provides a framework for excluding meteorological forcing data sets not complying with observations, as a function of the particular goal at hand. The proposed methodology allows falsifying the hypothesis that a given data set is coherent with hydrological observations on the basis of the performance of hydrological modeling measured by a metric selected by the modeler. HyCoT is demonstrated in the Adige catchment (southeastern Alps, Italy) for streamflow analysis, using a distributed hydrological model. The comparison covers the period 1989-2008 and includes five gridded daily meteorological data sets: E-OBS, MSWEP, MESAN, APGD, and ADIGE. The analysis highlights that APGD and ADIGE, the data sets with highest effective resolution, display similar spatiotemporal precipitation patterns and produce the largest hydrological efficiency indices. Lower performances are observed for E-OBS, MESAN, and MSWEP, especially in small catchments. HyCoT reveals deficiencies in the representation of spatiotemporal patterns of gridded climate data sets, which cannot be corrected by simply rescaling the meteorological forcing fields, as often done in bias correction of climate model outputs. We recommend this framework to assess the hydrological coherence of gridded data sets to be used in large-scale hydroclimatic studies.
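A minimal sketch of the HyCoT idea follows: a candidate forcing data set is falsified if the hydrological model it drives cannot reach a modeller-chosen performance threshold. The metric (Nash-Sutcliffe efficiency) and the threshold value used here are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency of a simulated streamflow series against observations."""
    simulated, observed = np.asarray(simulated), np.asarray(observed)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

def hydrologically_coherent(simulated, observed, threshold=0.6):
    """Return False (forcing data set falsified) if the simulation driven by the
    candidate forcing does not reach the chosen efficiency threshold."""
    return nash_sutcliffe(simulated, observed) >= threshold

observed = np.array([1.0, 2.5, 4.0, 3.0, 2.0, 1.5])
simulated = np.array([1.1, 2.2, 3.8, 3.2, 2.1, 1.4])
print(nash_sutcliffe(simulated, observed), hydrologically_coherent(simulated, observed))
```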
A Model Evaluation Data Set for the Tropical ARM Sites
Jakob, Christian
2008-01-15
This data set has been derived from various ARM and external data sources with the main aim of providing modelers easy access to quality controlled data for model evaluation. The data set contains highly aggregated (in time) data from a number of sources at the tropical ARM sites at Manus and Nauru. It spans the years of 1999 and 2000. The data set contains information on downward surface radiation; surface meteorology, including precipitation; atmospheric water vapor and cloud liquid water content; hydrometeor cover as a function of height; and cloud cover, cloud optical thickness and cloud top pressure information provided by the International Satellite Cloud Climatology Project (ISCCP).
Assessing Discriminative Performance at External Validation of Clinical Prediction Models
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.
2016-01-01
Introduction: External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods: We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results: The permutation test indicated that the validation and development set were homogenous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion: The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
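For reference, a small sketch of the c-statistic (area under the ROC curve) that both the permutation test and the benchmark approach rely on; the implementation and toy numbers are illustrative, not taken from the study.

```python
import numpy as np

def c_statistic(risk, outcome):
    """Probability that a randomly chosen case receives a higher predicted risk
    than a randomly chosen control (ties count half)."""
    risk, outcome = np.asarray(risk), np.asarray(outcome)
    cases, controls = risk[outcome == 1], risk[outcome == 0]
    comparisons = cases[:, None] - controls[None, :]
    return np.mean(comparisons > 0) + 0.5 * np.mean(comparisons == 0)

risk = [0.10, 0.40, 0.35, 0.80]
outcome = [0, 0, 1, 1]
print(c_statistic(risk, outcome))   # 0.75 in this toy example
```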
Large Eddy Simulation of Sound Generation by Turbulent Reacting and Nonreacting Shear Flows
NASA Astrophysics Data System (ADS)
Najafi-Yazdi, Alireza
The objective of the present study was to investigate the mechanisms of sound generation by subsonic jets. Large eddy simulations were performed along with bandpass filtering of the flow and sound in order to gain further insight into the role of coherent structures in subsonic jet noise generation. A sixth-order compact scheme was used for spatial discretization of the fully compressible Navier-Stokes equations. Time integration was performed through the use of the standard fourth-order, explicit Runge-Kutta scheme. An implicit low dispersion, low dissipation Runge-Kutta (ILDDRK) method was developed and implemented for simulations involving sources of stiffness such as flows near solid boundaries, or combustion. A surface integral acoustic analogy formulation, called Formulation 1C, was developed for farfield sound pressure calculations. Formulation 1C was derived based on the convective wave equation in order to take into account the presence of a mean flow. The formulation was derived to be easy to implement as a numerical post-processing tool for CFD codes. Sound radiation from an unheated, Mach 0.9 jet at Reynolds number 400,000 was considered. The effect of mesh size on the accuracy of the nearfield flow and farfield sound results was studied. It was observed that insufficient grid resolution in the shear layer results in unphysical laminar vortex pairing, and increased sound pressure levels in the farfield. Careful examination of the bandpass filtered pressure field suggested that there are two mechanisms of sound radiation in unheated subsonic jets that can occur in all scales of turbulence. The first mechanism is the stretching and the distortion of coherent vortical structures, especially close to the termination of the potential core. As eddies are bent or stretched, a portion of their kinetic energy is radiated. This mechanism is quadrupolar in nature, and is responsible for strong sound radiation at aft angles. The second sound generation mechanism appears to be associated with the transverse vibration of the shear-layer interface within the ambient quiescent flow, and has dipolar characteristics. This mechanism is believed to be responsible for sound radiation along the sideline directions. Jet noise suppression through the use of microjets was studied. The microjet injection induced secondary instabilities in the shear layer which triggered the transition to turbulence, and suppressed laminar vortex pairing. This in turn resulted in a reduction of OASPL at almost all observer locations. In all cases, the bandpass filtering of the nearfield flow and the associated sound provides revealing details of the sound radiation process. The results suggest that circumferential modes are significant and need to be included in future wavepacket models for jet noise prediction. Numerical simulations of sound radiation from nonpremixed flames were also performed. The simulations featured the solution of the fully compressible Navier-Stokes equations. Therefore, sound generation and radiation were directly captured in the simulations. A thickened flamelet model was proposed for nonpremixed flames. The model yields artificially thickened flames which can be better resolved on the computational grid, while retaining the physically correct values of the total heat released into the flow. Combustion noise has monopolar characteristics for low frequencies. For high frequencies, the sound field is no longer omni-directional.
Major sources of sound appear to be located in the jet shear layer within one potential core length from the jet nozzle.
Hvitfeldt-Forsberg, Helena; Mazzocato, Pamela; Glaser, Daniel; Keller, Christina; Unbeck, Maria
2017-01-01
Objective: To explore healthcare staffs’ and managers’ perceptions of how and when discrete event simulation modelling can be used as a decision support in improvement efforts. Design: Two focus group discussions were performed. Setting: Two settings were included: a rheumatology department and an orthopaedic section, both situated in Sweden. Participants: Healthcare staff and managers (n=13) from the two settings. Interventions: Two workshops were performed, one at each setting. Workshops were initiated by a short introduction to simulation modelling. Results from the respective simulation model were then presented and discussed in the following focus group discussion. Results: Categories from the content analysis are presented according to the following research questions: how and when can simulation modelling assist healthcare improvement? Regarding how, the participants mentioned that simulation modelling could act as a tool for support and a way to visualise problems, potential solutions and their effects. Regarding when, simulation modelling could be used both locally and by management, as well as a pedagogical tool to develop and test innovative ideas and to involve everyone in the improvement work. Conclusions: Its potential as an information and communication tool and as an instrument for pedagogic work within healthcare improvement render a broader application and value of simulation modelling than previously reported. PMID:28588107
Jung, Ho-Won; El Emam, Khaled
2014-05-29
A linear programming (LP) model was proposed to create de-identified data sets that maximally include spatial detail (e.g., geocodes such as ZIP or postal codes, census blocks, and locations on maps) while complying with the HIPAA Privacy Rule's Expert Determination method, i.e., ensuring that the risk of re-identification is very small. The LP model determines the transition probability from an original location of a patient to a new randomized location. However, it has a limitation for the cases of areas with a small population (e.g., median of 10 people in a ZIP code). We extend the previous LP model to accommodate the cases of a smaller population in some locations, while creating de-identified patient spatial data sets which ensure the risk of re-identification is very small. Our LP model was applied to a data set of 11,740 postal codes in the City of Ottawa, Canada. On this data set we demonstrated the limitations of the previous LP model, in that it produces improbable results, and showed how our extensions to deal with small areas allows the de-identification of the whole data set. The LP model described in this study can be used to de-identify geospatial information for areas with small populations with minimal distortion to postal codes. Our LP model can be extended to include other information, such as age and gender.
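A minimal sketch of the linear-programming idea, assuming scipy is available: transition probabilities from original to published areas are chosen to minimise expected displacement subject to a simplified risk constraint (a minimum expected record count per published area). The constraint form, the toy populations, and the distance cost are illustrative assumptions, not the authors' formulation of the Expert Determination criteria.

```python
import numpy as np
from scipy.optimize import linprog

n = 4                                     # toy number of areas
pop = np.array([50.0, 8.0, 12.0, 30.0])   # records per original area (one small area)
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # displacement cost i -> j

c = (pop[:, None] * dist).ravel()         # minimise expected total displacement

# each row of the transition matrix p[i, j] must sum to 1
A_eq = np.zeros((n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0
b_eq = np.ones(n)

# expected record count in each published area must be at least k (here k = 15)
k = 15.0
A_ub = np.zeros((n, n * n))
for j in range(n):
    for i in range(n):
        A_ub[j, i * n + j] = -pop[i]
b_ub = -k * np.ones(n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
P = res.x.reshape(n, n)                   # fitted transition probabilities
print(np.round(P, 2))
```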
NASA Astrophysics Data System (ADS)
Vannametee, E.; Karssenberg, D.; Hendriks, M. R.; de Jong, S. M.; Bierkens, M. F. P.
2010-05-01
We propose a modelling framework for distributed hydrological modelling of 10³-10⁵ km² catchments by discretizing the catchment in geomorphologic units. Each of these units is modelled using a lumped model representative for the processes in the unit. Here, we focus on the development and parameterization of this lumped model as a component of our framework. The development of the lumped model requires rainfall-runoff data for an extensive set of geomorphological units. Because such large observational data sets do not exist, we create artificial data. With a high-resolution, physically-based, rainfall-runoff model, we create artificial rainfall events and resulting hydrographs for an extensive set of different geomorphological units. This data set is used to identify the lumped model of geomorphologic units. The advantage of this approach is that it results in a lumped model with a physical basis, with representative parameters that can be derived from point-scale measurable physical parameters. The approach starts with the development of the high-resolution rainfall-runoff model that generates an artificial discharge dataset from rainfall inputs as a surrogate of a real-world dataset. The model is run for approximately 10⁵ scenarios that describe different characteristics of rainfall, properties of the geomorphologic units (i.e. slope gradient, unit length and regolith properties), antecedent moisture conditions and flow patterns. For each scenario-run, the results of the high-resolution model (i.e. runoff and state variables) at selected simulation time steps are stored in a database. The second step is to develop the lumped model of a geomorphological unit. This forward model consists of a set of simple equations that calculate Hortonian runoff and state variables of the geomorphologic unit over time. The lumped model contains only three parameters: a ponding factor, a linear reservoir parameter, and a lag time. The model is capable of giving an appropriate representation of the transient rainfall-runoff relations that exist in the artificial data set generated with the high-resolution model. The third step is to find the values of empirical parameters in the lumped forward model using the artificial dataset. For each scenario of the high-resolution model run, a set of lumped model parameters is determined with a fitting method using the corresponding time series of state variables and outputs retrieved from the database. Thus, the parameters in the lumped model can be estimated by using the artificial data set. The fourth step is to develop an approach to assign lumped model parameters based upon the properties of the geomorphological unit. This is done by finding relationships between the measurable physical properties of geomorphologic units (i.e. slope gradient, unit length, and regolith properties) and the lumped forward model parameters using multiple regression techniques. In this way, a set of lumped forward model parameters can be estimated as a function of morphology and physical properties of the geomorphologic units. The lumped forward model can then be applied to different geomorphologic units. Finally, the performance of the lumped forward model is evaluated; the outputs of the lumped forward model are compared with the results of the high-resolution model.
Our results show that the lumped forward model gives the best estimates of total discharge volumes and peak discharges when rain intensities are not significantly larger than the infiltration capacities of the units and when the units are small with a flat gradient. Hydrograph shapes are fairly well reproduced for most cases except for flat and elongated units with large runoff volumes. The results of this study provide a first step towards developing low-dimensional models for large ungauged basins.
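A minimal sketch of a three-parameter lumped unit of the kind described, with a ponding factor, a linear reservoir, and a lag time; the exact update equations and numbers are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def lumped_unit_response(rain, dt, ponding_factor, k_reservoir, lag_steps):
    """Unit discharge (same units as rain) for a rainfall series.
    ponding_factor partitions rainfall into Hortonian runoff, a linear reservoir
    with residence time k_reservoir routes it, and lag_steps delays the response."""
    storage, discharge = 0.0, []
    for r in rain:
        inflow = ponding_factor * r                    # runoff-producing fraction
        storage += (inflow - storage / k_reservoir) * dt
        discharge.append(storage / k_reservoir)        # linear reservoir outflow
    discharge = np.asarray(discharge)
    return np.concatenate([np.zeros(lag_steps), discharge])[: len(rain)]

rain = np.array([0, 5, 10, 5, 0, 0, 0, 0, 0, 0], dtype=float)   # mm per time step
print(lumped_unit_response(rain, dt=1.0, ponding_factor=0.4,
                           k_reservoir=3.0, lag_steps=2))
```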
Ellouze, M; Pichaud, M; Bonaiti, C; Coroller, L; Couvert, O; Thuault, D; Vaillant, R
2008-11-30
Time temperature integrators or indicators (TTIs) are effective tools making the continuous monitoring of the time temperature history of chilled products possible throughout the cold chain. Their correct setting is of critical importance to ensure food quality. The objective of this study was to develop a model to facilitate accurate settings of the CRYOLOG biological TTI, TRACEO. Experimental designs were used to investigate and model the effects of the temperature, the TTI inoculum size, pH, and water activity on its response time. The modelling process went through several steps addressing growth, acidification and inhibition phenomena in dynamic conditions. The model showed satisfactory results and validations in industrial conditions gave clear evidence that such a model is a valuable tool, not only to predict accurate response times of TRACEO, but also to propose precise settings to manufacture the appropriate TTI to trace a particular food according to a given time temperature scenario.
Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.
Mohan, B M; Sinha, Arpita
2008-07-01
This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
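The membership-function families named in the abstract can be written compactly as below; the breakpoints are illustrative, and the paper's closed-form controller models themselves are not reproduced here.

```python
import numpy as np

# Hedged sketch of the membership-function shapes: an L-type and a Gamma-type
# function for an input variable, and a triangular function for the output.

def l_type(x, a, b):          # 1 below a, falling linearly to 0 at b
    return np.clip((b - x) / (b - a), 0.0, 1.0)

def gamma_type(x, a, b):      # 0 below a, rising linearly to 1 at b
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def triangular(x, a, m, b):   # 0 at a and b, peak 1 at m
    return np.maximum(np.minimum((x - a) / (m - a), (b - x) / (b - m)), 0.0)

x = np.linspace(-1.0, 1.0, 5)
print(l_type(x, -0.5, 0.5), gamma_type(x, -0.5, 0.5), triangular(x, -1.0, 0.0, 1.0))
```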
Set statistics in conductive bridge random access memory device with Cu/HfO₂/Pt structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Meiyun; Long, Shibing, E-mail: longshibing@ime.ac.cn; Wang, Guoming
2014-11-10
The switching parameter variation of resistive switching memory is one of the most important challenges in its application. In this letter, we have studied the set statistics of conductive bridge random access memory with a Cu/HfO₂/Pt structure. The experimental distributions of the set parameters in several off resistance ranges are shown to nicely fit a Weibull model. The Weibull slopes of the set voltage and current increase and decrease logarithmically with off resistance, respectively. This experimental behavior is perfectly captured by a Monte Carlo simulator based on the cell-based set voltage statistics model and the Quantum Point Contact electron transport model. Our work provides indications for the improvement of the switching uniformity.
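A small sketch of the statistical step, fitting a Weibull distribution to set voltages and reading off the Weibull slope (shape parameter); the data are synthetic placeholders, not measurements from the letter.

```python
from scipy.stats import weibull_min

# Hedged sketch: generate placeholder set voltages and fit a Weibull model.
set_voltages = weibull_min.rvs(c=4.0, scale=0.35, size=200, random_state=42)

shape, loc, scale = weibull_min.fit(set_voltages, floc=0.0)
print(f"Weibull slope (shape) = {shape:.2f}, scale = {scale:.3f} V")
```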
Systematization of a set of closure techniques.
Hausken, Kjell; Moxnes, John F
2011-11-01
Approximations in population dynamics are gaining popularity since stochastic models in large populations are time consuming even on a computer. Stochastic modeling causes an infinite set of ordinary differential equations for the moments. Closure models are useful since they recast this infinite set into a finite set of ordinary differential equations. This paper systematizes a set of closure approximations. We develop a system, which we call a power p closure of n moments, where 0≤p≤n. Keeling's (2000a,b) approximation with third order moments is shown to be an instantiation of this system which we call a power 3 closure of 3 moments. We present an epidemiological example and evaluate the system for third and fourth moments compared with Monte Carlo simulations. Copyright © 2011 Elsevier Inc. All rights reserved.
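The generic pattern being closed can be written schematically as follows for a process with quadratic rates; the third expression is the cumulant-neglect (normal) closure at third order, shown only as one familiar instance of the kind of closure the paper's power-p system organises.

```latex
\frac{d\,\mathbb{E}[X]}{dt} = f\!\big(\mathbb{E}[X],\,\mathbb{E}[X^{2}]\big),\qquad
\frac{d\,\mathbb{E}[X^{2}]}{dt} = g\!\big(\mathbb{E}[X],\,\mathbb{E}[X^{2}],\,\mathbb{E}[X^{3}]\big),\qquad
\mathbb{E}[X^{3}] \;\approx\; 3\,\mathbb{E}[X^{2}]\,\mathbb{E}[X] - 2\,\mathbb{E}[X]^{3}.
```

Each moment equation depends on the next higher moment; the closure truncates this hierarchy by expressing the highest moment in terms of lower ones, turning the infinite system into a finite set of ordinary differential equations.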
Using partial site aggregation to reduce bias in random utility travel cost models
NASA Astrophysics Data System (ADS)
Lupi, Frank; Feather, Peter M.
1998-12-01
We propose a "partial aggregation" strategy for defining the recreation sites that enter choice sets in random utility models. Under the proposal, the most popular sites and sites that will be the subject of policy analysis enter choice sets as individual sites while remaining sites are aggregated into groups of similar sites. The scheme balances the desire to include all potential substitute sites in the choice sets with practical data and modeling constraints. Unlike fully aggregate models, our analysis and empirical applications suggest that the partial aggregation approach reasonably approximates the results of a disaggregate model. The partial aggregation approach offers all of the data and computational advantages of models with aggregate sites but does not suffer from the same degree of bias as fully aggregate models.
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
Data identification for improving gene network inference using computational algebra.
Dimitrova, Elena; Stigler, Brandilyn
2014-11-01
Identification of models of gene regulatory networks is sensitive to the amount of data used as input. Considering the substantial costs in conducting experiments, it is of value to have an estimate of the amount of data required to infer the network structure. To minimize wasted resources, it is also beneficial to know which data are necessary to identify the network. Knowledge of the data and knowledge of the terms in polynomial models are often required a priori in model identification. In applications, it is unlikely that the structure of a polynomial model will be known, which may force data sets to be unnecessarily large in order to identify a model. Furthermore, none of the known results provides any strategy for constructing data sets to uniquely identify a model. We provide a specialization of an existing criterion for deciding when a set of data points identifies a minimal polynomial model when its monomial terms have been specified. Then, we relax the requirement of the knowledge of the monomials and present results for model identification given only the data. Finally, we present a method for constructing data sets that identify minimal polynomial models.
QSAR Modeling of Rat Acute Toxicity by Oral Exposure
Zhu, Hao; Martin, Todd M.; Ye, Lin; Sedykh, Alexander; Young, Douglas M.; Tropsha, Alexander
2009-01-01
Few Quantitative Structure-Activity Relationship (QSAR) studies have successfully modeled large, diverse rodent toxicity endpoints. In this study, a comprehensive dataset of 7,385 compounds with their most conservative lethal dose (LD50) values has been compiled. A combinatorial QSAR approach has been employed to develop robust and predictive models of acute toxicity in rats caused by oral exposure to chemicals. To enable fair comparison between the predictive power of models generated in this study versus a commercial toxicity predictor, TOPKAT (Toxicity Prediction by Komputer Assisted Technology), a modeling subset of the entire dataset was selected that included all 3,472 compounds used in the TOPKAT’s training set. The remaining 3,913 compounds, which were not present in the TOPKAT training set, were used as the external validation set. QSAR models of five different types were developed for the modeling set. The prediction accuracy for the external validation set was estimated by determination coefficient R2 of linear regression between actual and predicted LD50 values. The use of the applicability domain threshold implemented in most models generally improved the external prediction accuracy but expectedly led to the decrease in chemical space coverage; depending on the applicability domain threshold, R2 ranged from 0.24 to 0.70. Ultimately, several consensus models were developed by averaging the predicted LD50 for every compound using all 5 models. The consensus models afforded higher prediction accuracy for the external validation dataset with the higher coverage as compared to individual constituent models. The validated consensus LD50 models developed in this study can be used as reliable computational predictors of in vivo acute toxicity. PMID:19845371
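A minimal sketch of the consensus step, averaging the predictions of several individual models and scoring the result with R² between predicted and experimental values; the arrays are placeholders, not data from the study.

```python
import numpy as np

def consensus_r2(model_predictions, experimental):
    """Average predictions across models, then return the squared Pearson
    correlation between consensus predictions and experimental values."""
    consensus = np.mean(model_predictions, axis=0)
    return np.corrcoef(consensus, experimental)[0, 1] ** 2

model_predictions = np.array([[2.1, 3.0, 1.2, 2.6],   # model 1 (log LD50)
                              [2.3, 2.8, 1.0, 2.4],   # model 2
                              [1.9, 3.2, 1.4, 2.9]])  # model 3
experimental = np.array([2.0, 3.1, 1.1, 2.7])
print(consensus_r2(model_predictions, experimental))
```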
NASA Technical Reports Server (NTRS)
Schwan, Karsten
1994-01-01
Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.
Understanding a Basic Biological Process: Expert and Novice Models of Science.
ERIC Educational Resources Information Center
Kindfield, A. C. H.
1994-01-01
Reports on the meiosis models utilized by five individuals at each of three levels of expertise in genetics as each reasoned about this process in an individual interview setting. Results revealed a set of biologically correct features common to all individuals' models as well as a variety of model flaws (i.e., meiosis misunderstandings) which are…
Setting up virgin stress conditions in discrete element models.
Rojek, J; Karlis, G F; Malinowski, L J; Beer, G
2013-03-01
In the present work, a methodology for setting up virgin stress conditions in discrete element models is proposed. The developed algorithm is applicable to discrete or coupled discrete/continuum modeling of underground excavation employing the discrete element method (DEM). Since the DEM works with contact forces rather than stresses there is a need for the conversion of pre-excavation stresses to contact forces for the DEM model. Different possibilities of setting up virgin stress conditions in the DEM model are reviewed and critically assessed. Finally, a new method to obtain a discrete element model with contact forces equivalent to given macroscopic virgin stresses is proposed. The test examples presented show that good results may be obtained regardless of the shape of the DEM domain.
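The stress-to-force conversion at the core of the method can be illustrated as below: a prescribed virgin (pre-excavation) stress tensor yields a traction on a contact plane, which multiplied by the contact's associated area gives an equivalent contact force. This shows the principle only; the paper's algorithm additionally ensures that the discrete assembly reproduces the prescribed macroscopic stress.

```python
import numpy as np

sigma = np.array([[-2.0e6, 0.0,    0.0],     # virgin stress tensor, Pa (compression negative)
                  [0.0,   -3.0e6,  0.0],
                  [0.0,    0.0,   -5.0e6]])
n = np.array([0.0, 0.0, 1.0])                # unit normal at a contact
area = 1.0e-3                                # area attributed to the contact, m^2

traction = sigma @ n                         # Cauchy traction vector, Pa
contact_force = traction * area              # equivalent contact force, N
print(contact_force)
```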
The effects of delay duration on visual working memory for orientation.
Shin, Hongsup; Zou, Qijia; Ma, Wei Ji
2017-12-01
We used a delayed-estimation paradigm to characterize the joint effects of set size (one, two, four, or six) and delay duration (1, 2, 3, or 6 s) on visual working memory for orientation. We conducted two experiments: one with delay durations blocked, another with delay durations interleaved. As dependent variables, we examined four model-free metrics of dispersion as well as precision estimates in four simple models. We tested for effects of delay time using analyses of variance, linear regressions, and nested model comparisons. We found significant effects of set size and delay duration on both model-free and model-based measures of dispersion. However, the effect of delay duration was much weaker than that of set size, dependent on the analysis method, and apparent in only a minority of subjects. The highest forgetting slope found in either experiment at any set size was a modest 1.14°/s. As secondary results, we found a low rate of nontarget reports, and significant estimation biases towards oblique orientations (but no dependence of their magnitude on either set size or delay duration). Relative stability of working memory even at higher set sizes is consistent with earlier results for motion direction and spatial frequency. We compare with a recent study that performed a very similar experiment.
Zarr, Robert R; Heckert, N Alan; Leigh, Stefan D
2014-01-01
Thermal conductivity data acquired previously for the establishment of Standard Reference Material (SRM) 1450, Fibrous Glass Board, as well as subsequent renewals 1450a, 1450b, 1450c, and 1450d, are re-analyzed collectively and as individual data sets. Additional data sets for proto-1450 material lots are also included in the analysis. The data cover 36 years of activity by the National Institute of Standards and Technology (NIST) in developing and providing thermal insulation SRMs, specifically high-density molded fibrous-glass board, to the public. Collectively, the data sets cover two nominal thicknesses of 13 mm and 25 mm, bulk densities from 60 kg·m⁻³ to 180 kg·m⁻³, and mean temperatures from 100 K to 340 K. The analysis repetitively fits six models to the individual data sets. The most general form of the nested set of multilinear models used is λ(ρ,T) = a₀ + a₁ρ + a₂T + a₃T³ + a₄ exp[−((T − a₅)/a₆)²], where λ(ρ,T) is the predicted thermal conductivity (W·m⁻¹·K⁻¹), ρ is the bulk density (kg·m⁻³), T is the mean temperature (K), and aᵢ (i = 0, 1, …, 6) are the regression coefficients. The least squares fit results for each model across all data sets are analyzed using both graphical and analytic techniques. The prevailing generic model for the majority of data sets is the bilinear model in ρ and T, λ(ρ,T) = a₀ + a₁ρ + a₂T. One data set supports the inclusion of a cubic temperature term and two data sets with low-temperature data support the inclusion of an exponential term in T to improve the model predictions. Physical interpretations of the model function terms are described. Recommendations for future renewals of SRM 1450 are provided. An Addendum provides historical background on the origin of this SRM and the influence of the SRM on external measurement programs. PMID:26601034
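A minimal sketch of fitting the prevailing bilinear model by ordinary least squares; the data points below are synthetic placeholders, not SRM 1450 measurements.

```python
import numpy as np

# Hedged sketch: fit lambda(rho, T) = a0 + a1*rho + a2*T to placeholder data.
rho = np.array([70.0, 100.0, 120.0, 140.0, 160.0, 120.0])   # kg/m^3
T = np.array([280.0, 290.0, 300.0, 310.0, 320.0, 250.0])    # K
lam = np.array([0.031, 0.032, 0.033, 0.034, 0.036, 0.030])  # W/(m K)

X = np.column_stack([np.ones_like(rho), rho, T])             # design matrix
coeffs, *_ = np.linalg.lstsq(X, lam, rcond=None)
a0, a1, a2 = coeffs
print(a0, a1, a2)
```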
What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models
NASA Astrophysics Data System (ADS)
Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman
2018-01-01
Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken from Mount Wilson Observatory in California, University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data using their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.
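The geometric core of such a calculator can be sketched as follows, with horizon refraction folded into a single adjustable altitude h0 (the conventional −0.833°, i.e. roughly 34′ of refraction plus the solar semidiameter); swapping in a full refraction model would replace that constant. The function and values are illustrative, not the calculator described above.

```python
import numpy as np

def sunset_hour_angle(latitude_deg, declination_deg, h0_deg=-0.833):
    """Hour angle of sunset (degrees) for a chosen apparent altitude of the upper limb."""
    lat, dec, h0 = np.radians([latitude_deg, declination_deg, h0_deg])
    cos_h = (np.sin(h0) - np.sin(lat) * np.sin(dec)) / (np.cos(lat) * np.cos(dec))
    return np.degrees(np.arccos(np.clip(cos_h, -1.0, 1.0)))

# At 45 N on the equinox (declination 0), refraction pushes sunset slightly past
# 6 hours after solar noon:
print(sunset_hour_angle(45.0, 0.0) / 15.0)   # hours after solar noon, about 6.08
```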
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
Ribay, Kathryn; Kim, Marlene T; Wang, Wenyi; Pinolini, Daniel; Zhu, Hao
2016-03-01
Estrogen receptors (ERα) are a critical target for drug design as well as a potential source of toxicity when activated unintentionally. Thus, evaluating potential ERα binding agents is critical in both drug discovery and chemical toxicity areas. Using computational tools, e.g., Quantitative Structure-Activity Relationship (QSAR) models, can predict potential ERα binding agents before chemical synthesis. The purpose of this project was to develop enhanced predictive models of ERα binding agents by utilizing advanced cheminformatics tools that can integrate publicly available bioassay data. The initial ERα binding agent data set, consisting of 446 binders and 8307 non-binders, was obtained from the Tox21 Challenge project organized by the NIH Chemical Genomics Center (NCGC). After removing the duplicates and inorganic compounds, this data set was used to create a training set (259 binders and 259 non-binders). This training set was used to develop QSAR models using chemical descriptors. The resulting models were then used to predict the binding activity of 264 external compounds, which were available to us after the models were developed. The cross-validation results of training set [Correct Classification Rate (CCR) = 0.72] were much higher than the external predictivity of the unknown compounds (CCR = 0.59). To improve the conventional QSAR models, all compounds in the training set were used to search PubChem and generate a profile of their biological responses across thousands of bioassays. The most important bioassays were prioritized to generate a similarity index that was used to calculate the biosimilarity score between each two compounds. The nearest neighbors for each compound within the set were then identified and its ERα binding potential was predicted by its nearest neighbors in the training set. The hybrid model performance (CCR = 0.94 for cross validation; CCR = 0.68 for external prediction) showed significant improvement over the original QSAR models, particularly for the activity cliffs that induce prediction errors. The results of this study indicate that the response profile of chemicals from public data provides useful information for modeling and evaluation purposes. The public big data resources should be considered along with chemical structure information when predicting new compounds, such as unknown ERα binding agents.
Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun
2017-02-01
An artificial neural network (ANN) model was developed to predict the risks of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at the ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed on SPSS 18.0. The ANN models were developed on Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetable/fruit (0.45), intake of fish/shrimp/meat/egg (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neuron in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively. The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0.68 (0.78), respectively. The areas under the receiver operating curve on the testing and training sets are 0.87 and 0.97, respectively. This study suggests that the BPNN model could be used to predict the risk of CHD in individuals. This model should be further improved by large-sample-size research.
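A minimal sketch of the 15-12-1 feed-forward topology described above, using scikit-learn for illustration (the authors used Matlab 7.1); the synthetic data and all settings are placeholders, not the study's predictors or results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(304, 15))                         # ~85% training split of 358 subjects, 15 predictors
y = (X[:, 0] + 0.5 * X[:, 4] + rng.normal(size=304) > 0).astype(int)   # placeholder CHD outcome

# One hidden layer of 12 logistic neurons and a single output, trained with lbfgs.
clf = MLPClassifier(hidden_layer_sizes=(12,), activation='logistic',
                    solver='lbfgs', max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))                                  # training-set accuracy
```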
Lang, John C; Abrams, Daniel M; De Sterck, Hans
2015-12-22
Smoking of tobacco is estimated to have caused approximately six million deaths worldwide in 2014. Responding effectively to this epidemic requires a thorough understanding of how smoking behaviour is transmitted and modified. We present a new mathematical model of the social dynamics that cause cigarette smoking to spread in a population, incorporating aspects of individual and social utility. Model predictions are tested against two independent data sets spanning 25 countries: a newly compiled century-long composite data set on smoking prevalence, and Hofstede's individualism/collectivism measure (IDV). The general model prediction that more individualistic societies will show faster adoption and cessation of smoking is supported by the full 25 country smoking prevalence data set. Calibration of the model to the available smoking prevalence data is possible in a subset of 7 countries. Consistency of fitted model parameters with an additional, independent, data set further supports our model: the fitted value of the country-specific model parameter that determines the relative importance of social and individual factors in the decision of whether or not to smoke, is found to be significantly correlated with Hofstede's IDV for the 25 countries in our data set. Our model in conjunction with extensive data on smoking prevalence provides evidence for the hypothesis that individualism/collectivism may have an important influence on the dynamics of smoking prevalence at the aggregate, population level. Significant implications for public health interventions are discussed.
EzGal: A Flexible Interface for Stellar Population Synthesis Models
NASA Astrophysics Data System (ADS)
Mancone, Conor L.; Gonzalez, Anthony H.
2012-06-01
We present EzGal, a flexible Python program designed to easily generate observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models. As has been demonstrated by various authors, for many applications the choice of input SPS models can be a significant source of systematic uncertainty. A key strength of EzGal is that it enables simple, direct comparison of different model sets so that the uncertainty introduced by choice of model set can be quantified. Its ability to work with new models will allow EzGal to remain useful as SPS modeling evolves to keep up with the latest research (such as varying IMFs). EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and it can be used to interpolate between metallicities for a given model set. To facilitate use, we have created an online interface to run EzGal and quickly generate magnitude and mass-to-light ratio predictions for a variety of star-formation histories and model sets. We make many commonly used SPS models available from the online interface, including the canonical Bruzual & Charlot models, an updated version of these models, the Maraston models, the BaSTI models, and the Flexible Stellar Population Synthesis (FSPS) models. We use EzGal to compare magnitude predictions for the model sets as a function of wavelength, age, metallicity, and star-formation history. From this comparison we quickly recover the well-known result that the models agree best in the optical for old solar-metallicity models, with differences at the level. Similarly, the most problematic regime for SPS modeling is for young ages (≲2 Gyr) and long wavelengths (λ ≳ 7500 Å), where thermally pulsating AGB stars are important and scatter between models can vary from 0.3 mag (Sloan i) to 0.7 mag (Ks). We find that these differences are not caused by one discrepant model set and should therefore be interpreted as general uncertainties in SPS modeling. Finally, we connect our results to a more physically motivated example by generating CSPs with a star-formation history matching the global star-formation history of the universe. We demonstrate that the wavelength and age dependence of SPS model uncertainty translates into a redshift-dependent model uncertainty, highlighting the importance of a quantitative understanding of model differences when comparing observations with models as a function of redshift.
Atmospheric model development in support of SEASAT. Volume 2: Analysis models
NASA Technical Reports Server (NTRS)
Langland, R. A.
1977-01-01
As part of the SEASAT program of NASA, two sets of analysis programs were developed for the Jet Propulsion Laboratory. One set of programs produces 63 x 63 horizontal mesh analyses on a polar stereographic grid. The other set produces 187 x 187 third mesh analyses. The parameters analyzed include sea surface temperature, sea level pressure, and twelve levels of upper-air temperature, height, and wind analyses. The analysis output is used to initialize the primitive equation forecast models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Troy Michael; Kress, Joel David; Bhat, Kabekode Ghanasham
Year 1 Objectives (August 2016 – December 2016) – The original Independence model is a sequentially regressed set of parameters from numerous data sets in the Aspen Plus modeling framework. The immediate goal with the basic data model is to collect and evaluate those data sets relevant to the thermodynamic submodels (pure substance heat capacity, solvent mixture heat capacity, loaded solvent heat capacities, and volatility data). These data are informative for the thermodynamic parameters involved in both vapor-liquid equilibrium, and in the chemical equilibrium of the liquid phase.
NASA Technical Reports Server (NTRS)
Herrmann, M.
2003-01-01
This paper is divided into four parts. First, the level set/vortex sheet method for three-dimensional two-phase interface dynamics is presented. Second, the LSS model for the primary breakup of turbulent liquid jets and sheets is outlined and all terms requiring subgrid modeling are identified. Then, preliminary three-dimensional results of the level set/vortex sheet method are presented and discussed. Finally, conclusions are drawn and an outlook to future work is given.
HiVy automated translation of stateflow designs for model checking verification
NASA Technical Reports Server (NTRS)
Pingree, Paula
2003-01-01
The HiVy tool set enables model checking of finite state machine designs. This is achieved by translating state-chart specifications into the input language of the Spin model checker. An abstract syntax of hierarchical sequential automata (HSA) is provided as an intermediate format for the tool set.
REVIEW OF THE ATTRIBUTES AND PERFORMANCE OF SIX URBAN DIFFUSION MODELS
The American Meteorological Society conducted a scientific review of a set of six urban diffusion models. TRC Environmental Consultants, Inc. calculated and tabulated a uniform set of statistics for all the models. The report consists of a summary and copies of the three independ...
The Objective Borderline Method: A Probabilistic Method for Standard Setting
ERIC Educational Resources Information Center
Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim
2015-01-01
A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…
USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS
A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 – PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data was collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.
Tóth, Gergely; Bodai, Zsolt; Héberger, Károly
2013-10-01
The coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed aspect to detect uncommon points, i.e. influential points, in any data set. The term (1 − Q²)/(1 − R²) corresponds to the ratio of the predictive residual sum of squares to the residual sum of squares. The ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 − Q²)/(1 − R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns model builders to verify the training set, to perform influence analysis, or even to switch to robust modeling.
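As a concrete illustration of the ratio described above, the following minimal Python sketch (not the authors' code; the synthetic data, the ordinary-least-squares fit, and the 95% F quantile used as a cut-off are assumptions made here) computes R², the leave-one-out Q², and the (1 − Q²)/(1 − R²) term for a small regression problem with one planted influential point.

```python
# Minimal sketch of the (1 - Q^2)/(1 - R^2) pre-test for influential points.
import numpy as np
from scipy import stats

def q2_r2_ratio(X, y):
    """Return R^2, leave-one-out Q^2 and the ratio PRESS/RSS for an OLS fit."""
    X1 = np.column_stack([np.ones(len(y)), X])         # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    H = X1 @ np.linalg.pinv(X1.T @ X1) @ X1.T           # hat matrix
    loo_resid = resid / (1.0 - np.diag(H))              # leave-one-out residuals
    rss = np.sum(resid**2)
    press = np.sum(loo_resid**2)
    tss = np.sum((y - y.mean())**2)
    return 1.0 - rss / tss, 1.0 - press / tss, press / rss   # R2, Q2, (1-Q2)/(1-R2)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = X @ [1.5, -2.0] + rng.normal(scale=0.5, size=40)
y[0] += 6.0                                             # plant one influential point
r2, q2, ratio = q2_r2_ratio(X, y)
n, p = len(y), 3                                        # observations, fitted parameters
f_crit = stats.f.ppf(0.95, n - p, n - p)                # illustrative reference quantile
print(f"R2={r2:.3f}  Q2={q2:.3f}  ratio={ratio:.3f}  flag={ratio > f_crit}")
```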
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for the logistic function parameters that best fit an empirical data set. However, success in obtaining such a set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal-area-based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting the data with the logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
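The equal-area estimation algorithm itself is not detailed in the abstract; the sketch below only shows the standard two-parameter logistic item response function that such an algorithm targets, with parameters recovered from simulated proportion-correct data by a generic curve fit (the grid, sample sizes, and true parameter values are assumptions for illustration).

```python
# Minimal sketch of the two-parameter logistic (2PL) item response function.
import numpy as np
from scipy.optimize import curve_fit

def irf_2pl(theta, a, b):
    """P(correct | ability theta) for discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 25)                      # ability grid
true_a, true_b = 1.3, 0.4
rng = np.random.default_rng(1)
# observed proportion correct at each ability level (200 simulated examinees per level)
p_obs = rng.binomial(200, irf_2pl(theta, true_a, true_b)) / 200.0

(a_hat, b_hat), _ = curve_fit(irf_2pl, theta, p_obs, p0=[1.0, 0.0])
print(f"estimated a={a_hat:.2f}, b={b_hat:.2f}")
```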
Modelling machine ensembles with discrete event dynamical system theory
NASA Technical Reports Server (NTRS)
Hunter, Dan
1990-01-01
Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks for a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
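A minimal sketch of a DEDS local model as enumerated above (states, event alphabet, initial state, partial transition function, event durations); the two-state submachine and its timings are hypothetical and not from the article.

```python
# Hypothetical DEDS "local model" with a partial transition function.
from dataclasses import dataclass, field

@dataclass
class LocalModel:
    states: set
    events: set                                    # event alphabet
    initial: str
    delta: dict = field(default_factory=dict)      # (state, event) -> next state
    duration: dict = field(default_factory=dict)   # event -> time required

    def step(self, state, event):
        """Apply one event; raises KeyError if the transition is undefined
        (delta is a partial function)."""
        return self.delta[(state, event)], self.duration.get(event, 0.0)

# Example: a two-state submachine that can be loaded and then processed.
robot = LocalModel(
    states={"idle", "loaded"},
    events={"load", "process"},
    initial="idle",
    delta={("idle", "load"): "loaded", ("loaded", "process"): "idle"},
    duration={"load": 2.0, "process": 5.0},
)
state, elapsed = robot.step(robot.initial, "load")
print(state, elapsed)   # loaded 2.0
```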
Interactive model evaluation tool based on IPython notebook
NASA Astrophysics Data System (ADS)
Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet
2015-04-01
In hydrological modelling, some form of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function is also dependent on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate the model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user can select which two parameters to visualise. Furthermore, an objective function and a time period of interest need to be selected. Based on this information, a two-dimensional parameter response surface is created, which is simply a scatter plot of the parameter combinations with a color scale corresponding to the goodness of fit of each parameter combination. Finally, a slider is available to change the color mapping of the points. In effect, the slider provides a threshold that excludes non-behavioural parameter sets, and the color scale is only applied to the remaining parameter sets. As such, by interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function and periods of high information content can be identified. The environment is written in an IPython notebook and uses the available interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
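A minimal sketch of the core visual described above: a scatter of two sampled parameters coloured by an objective function, with a threshold that masks non-behavioural parameter sets. This is not the authors' notebook; the parameter ranges and objective values are synthetic, and the interactive slider is replaced by a fixed threshold variable.

```python
# Hypothetical parameter response surface with a behavioural threshold.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
p1 = rng.uniform(0.0, 1.0, 500)            # sampled parameter 1
p2 = rng.uniform(0.0, 5.0, 500)            # sampled parameter 2
# stand-in objective function values (e.g. an efficiency score per model run)
objective = 1.0 - ((p1 - 0.4)**2 + (p2 - 2.5)**2 / 10.0) + rng.normal(0, 0.05, 500)

threshold = 0.8                            # plays the role of the slider value
behavioural = objective >= threshold

plt.scatter(p1[~behavioural], p2[~behavioural], c="lightgrey", s=10)
plt.scatter(p1[behavioural], p2[behavioural], c=objective[behavioural], s=20)
plt.colorbar(label="objective function")
plt.xlabel("parameter 1"); plt.ylabel("parameter 2")
plt.title(f"behavioural parameter sets (objective >= {threshold})")
plt.show()
```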
Stacked Denoising Autoencoders Applied to Star/Galaxy Classification
NASA Astrophysics Data System (ADS)
Qin, Hao-ran; Lin, Ji-ming; Wang, Jun-yi
2017-04-01
In recent years, deep learning algorithms, characterized by strong adaptability, high accuracy, and structural complexity, have become increasingly popular, but they have not yet been used in astronomy. To address the problem that star/galaxy classification accuracy is high for the bright source set but low for the faint source set of the Sloan Digital Sky Survey (SDSS) data, we introduced a deep learning algorithm, namely the stacked denoising autoencoder (SDA) neural network with the dropout fine-tuning technique, which can greatly improve robustness and noise tolerance. We randomly selected bright source sets and faint source sets from the SDSS DR12 and DR7 data with spectroscopic measurements and preprocessed them. We then randomly selected training and testing sets, without replacement, from the bright and faint source sets. Finally, using these training sets we trained SDA models of the bright sources and faint sources in the SDSS DR7 and DR12, respectively. We compared the test results of the SDA model on the DR12 testing set with those of the Library for Support Vector Machines (LibSVM), J48 decision tree, Logistic Model Tree (LMT), Support Vector Machine (SVM), Logistic Regression, and Decision Stump algorithms, and compared the test results of the SDA model on the DR7 testing set with those of six kinds of decision trees. The experiments show that the SDA achieves better classification accuracy than the other machine learning algorithms for the faint source sets of DR7 and DR12. In particular, when the completeness function is used as the evaluation index, the correctness rate of the SDA improves by about 15% over the decision tree algorithms for the faint source set of SDSS DR7.
Precision assessment of model-based RSA for a total knee prosthesis in a biplanar set-up.
Trozzi, C; Kaptein, B L; Garling, E H; Shelyakova, T; Russo, A; Bragonzoni, L; Martelli, S
2008-10-01
Model-based Roentgen Stereophotogrammetric Analysis (RSA) was recently developed for the measurement of prosthesis micromotion. Its main advantage is that markers do not need to be attached to the implants as traditional marker-based RSA requires. Model-based RSA has only been tested in uniplanar radiographic set-ups. A biplanar set-up would theoretically facilitate the pose estimation algorithm, since radiographic projections would show more different shape features of the implants than in uniplanar images. We tested the precision of model-based RSA and compared it with that of the traditional marker-based method in a biplanar set-up. Micromotions of both tibial and femoral components were measured with both the techniques from double examinations of patients participating in a clinical study. The results showed that in the biplanar set-up model-based RSA presents a homogeneous distribution of precision for all the translation directions, but an inhomogeneous error for rotations, especially internal-external rotation presented higher errors than rotations about the transverse and sagittal axes. Model-based RSA was less precise than the marker-based method, although the differences were not significant for the translations and rotations of the tibial component, with the exception of the internal-external rotations. For both prosthesis components the precisions of model-based RSA were below 0.2 mm for all the translations, and below 0.3 degrees for rotations about transverse and sagittal axes. These values are still acceptable for clinical studies aimed at evaluating total knee prosthesis micromotion. In a biplanar set-up model-based RSA is a valid alternative to traditional marker-based RSA where marking of the prosthesis is an enormous disadvantage.
Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B
2017-05-01
Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10⁻²⁰) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., "critical care," "pneumonia," "neurologic evaluation"). Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
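A minimal sketch of the modelling idea described above: treat each admission's set of clinical orders like a bag of words and fit latent Dirichlet allocation, then inspect the inferred topics. This is not the study's pipeline; the order names and the tiny corpus are invented for illustration, and scikit-learn's LDA stands in for whatever implementation the authors used.

```python
# Toy LDA over "documents" made of clinical order names.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

admissions = [
    "cbc basic_metabolic_panel chest_xray blood_culture ceftriaxone",
    "cbc troponin ecg aspirin heparin",
    "cbc basic_metabolic_panel head_ct neuro_checks",
    "chest_xray blood_culture ceftriaxone oxygen",
]
vec = CountVectorizer()
X = vec.fit_transform(admissions)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:3]]
    print(f"topic {k}: {top}")
# The per-admission topic mixtures lda.transform(X) could then serve as
# features to rank likely subsequent orders for a new patient.
```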
Data Programming: Creating Large Training Sets, Quickly.
Ratner, Alexander; De Sa, Christopher; Wu, Sen; Selsam, Daniel; Ré, Christopher
2016-12-01
Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming in which users express weak supervision strategies or domain heuristics as labeling functions , which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can "denoise" the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points over a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable.
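A minimal sketch of the labeling-function idea described above, using hypothetical rules for a toy relation-extraction task and a simple majority vote in place of the paper's learned generative model.

```python
# Toy data programming: weak, possibly conflicting labeling functions.
import numpy as np

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

def lf_keyword(sentence):           # weak heuristic 1
    return POSITIVE if "married" in sentence else ABSTAIN

def lf_family(sentence):            # weak heuristic 2 (may conflict)
    return NEGATIVE if "brother" in sentence or "sister" in sentence else ABSTAIN

def lf_short(sentence):             # weak heuristic 3
    return NEGATIVE if len(sentence.split()) < 4 else ABSTAIN

sentences = ["Ann and Bob were married in 2001",
             "Bob is the brother of Carol",
             "They met"]
L = np.array([[lf(s) for lf in (lf_keyword, lf_family, lf_short)] for s in sentences])

def vote(row):
    votes = row[row != ABSTAIN]
    return ABSTAIN if votes.size == 0 else int(np.round(votes.mean()))

labels = [vote(row) for row in L]
print(labels)   # noisy programmatic labels to train a discriminative model on
```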
Ng, Hui Wen; Doughty, Stephen W; Luo, Heng; Ye, Hao; Ge, Weigong; Tong, Weida; Hong, Huixiao
2015-12-21
Some chemicals in the environment possess the potential to interact with the endocrine system in the human body. Multiple receptors are involved in the endocrine system; estrogen receptor α (ERα) plays very important roles in endocrine activity and is the most studied receptor. Understanding and predicting estrogenic activity of chemicals facilitates the evaluation of their endocrine activity. Hence, we have developed a decision forest classification model to predict chemical binding to ERα using a large training data set of 3308 chemicals obtained from the U.S. Food and Drug Administration's Estrogenic Activity Database. We tested the model using cross validations and external data sets of 1641 chemicals obtained from the U.S. Environmental Protection Agency's ToxCast project. The model showed good performance in both internal (92% accuracy) and external validations (∼ 70-89% relative balanced accuracies), where the latter involved the validations of the model across different ER pathway-related assays in ToxCast. The important features that contribute to the prediction ability of the model were identified through informative descriptor analysis and were related to current knowledge of ER binding. Prediction confidence analysis revealed that the model had both high prediction confidence and accuracy for most predicted chemicals. The results demonstrated that the model constructed based on the large training data set is more accurate and robust for predicting ER binding of chemicals than the published models that have been developed using much smaller data sets. The model could be useful for the evaluation of ERα-mediated endocrine activity potential of environmental chemicals.
Cui, Jiangyu; Zhou, Yumin; Tian, Jia; Wang, Xinwang; Zheng, Jingping; Zhong, Nanshan; Ran, Pixin
2012-12-01
COPD is often underdiagnosed in primary care settings where spirometry is unavailable. This study aimed to develop a simple, economical and applicable model for COPD screening in those settings. First we established a discriminant function model based on Bayes' rule by stepwise discriminant analysis, using the data from 243 COPD patients and 112 non-COPD subjects from our COPD survey in urban and rural communities and local primary care settings in Guangdong Province, China. We then used this model to discriminate COPD in an additional 150 subjects (50 non-COPD and 100 COPD) who had been recruited by the same methods used to establish the model. All participants completed pre- and post-bronchodilator spirometry and questionnaires. COPD was diagnosed according to the Global Initiative for Chronic Obstructive Lung Disease criteria. The sensitivity and specificity of the discriminant function model were assessed. The established discriminant function model included nine variables: age, gender, smoking index, body mass index, occupational exposure, living environment, wheezing, cough and dyspnoea. The sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, accuracy and error rate of the function model to discriminate COPD were 89.00%, 82.00%, 4.94, 0.13, 86.66% and 13.34%, respectively. The accuracy and Kappa value of the function model to predict COPD stages were 70% and 0.61 (95% CI, 0.50 to 0.71). This discriminant function model may be used for COPD screening in primary care settings in China as an alternative to spirometry.
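A minimal sketch of a discriminant-function screening model of the kind described above, on synthetic data rather than the study's cohort; scikit-learn's linear discriminant analysis is used as a stand-in for the stepwise Bayes-rule discriminant analysis, and the nine predictors are random placeholders.

```python
# Synthetic discriminant-analysis screening example with sensitivity/specificity.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
n = 400
X = rng.normal(size=(n, 9))                        # stand-ins for age, smoking index, BMI, ...
logit = 1.2 * X[:, 0] + 0.8 * X[:, 2] - 0.6 * X[:, 3]
y = (logit + rng.normal(size=n) > 0).astype(int)   # 1 = COPD

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity={tp/(tp+fn):.2f}  specificity={tn/(tn+fp):.2f}")
```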
Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael
2016-12-16
As more and more researchers are turning to big data for new opportunities for biomedical discovery, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. The objective was to attain a set of guidelines on the use of machine learning predictive models within clinical settings, to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016.
A model providing long-term data sets of energetic electron precipitation during geomagnetic storms
NASA Astrophysics Data System (ADS)
van de Kamp, M.; Seppälä, A.; Clilverd, M. A.; Rodger, C. J.; Verronen, P. T.; Whittaker, I. C.
2016-10-01
The influence of solar variability on the polar atmosphere and climate due to energetic electron precipitation (EEP) has remained an open question largely due to lack of a long-term EEP forcing data set that could be used in chemistry-climate models. Motivated by this, we have developed a model for 30-1000 keV radiation belt driven EEP. The model is based on precipitation data from low Earth orbiting POES satellites in the period 2002-2012 and empirically described plasmasphere structure, which are both scaled to a geomagnetic index. This geomagnetic index is the only input of the model and can be either Dst or Ap. Because of this, the model can be used to calculate the energy-flux spectrum of precipitating electrons from 1957 (Dst) or 1932 (Ap) onward, with a time resolution of 1 day. Results from the model compare well with EEP observations over the period of 2002-2012. Using the model avoids the challenges found in measured data sets concerning proton contamination. As demonstrated, the model results can be used to produce the first ever >80 year long atmospheric ionization rate data set for radiation belt EEP. The impact of precipitation in this energy range is mainly seen at altitudes 70-110 km. The ionization rate data set, which is available for the scientific community, will enable simulations of EEP impacts on the atmosphere and climate with realistic EEP variability. Due to limitations in this first version of the model, the results most likely represent an underestimation of the total EEP effect.
NASA Astrophysics Data System (ADS)
Neill, Aaron; Reaney, Sim
2015-04-01
Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing for multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was first used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment. Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, in the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing by how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets with respect to the likely costs that would be incurred in obtaining the data sets themselves.
A Risk Stratification Model for Lung Cancer Based on Gene Coexpression Network and Deep Learning
2018-01-01
A risk stratification model for lung cancer based on gene expression profiles is of great interest. Instead of previous models based on individual prognostic genes, we aimed to develop a novel system-level risk stratification model for lung adenocarcinoma based on a gene coexpression network. Using multiple microarray data sets, gene coexpression network analysis was performed to identify survival-related networks. A deep learning-based risk stratification model was constructed with representative genes of these networks. The model was validated in two test sets. Survival analysis was performed using the output of the model to evaluate whether it could predict patients' survival independent of clinicopathological variables. Five networks were significantly associated with patients' survival. Considering prognostic significance and representativeness, genes of the two survival-related networks were selected as input of the model. The output of the model was significantly associated with patients' survival in the two test sets and the training set (p < 0.00001, p < 0.0001 and p = 0.02 for the training set and test sets 1 and 2, respectively). In multivariate analyses, the model was associated with patients' prognosis independent of other clinicopathological features. Our study presents a new perspective on incorporating gene coexpression networks into the gene expression signature and clinical application of deep learning in genomic data science for prognosis prediction. PMID:29581968
NASA Astrophysics Data System (ADS)
McDonough, Kevin K.
The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler implementation strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of Kullback-Liebler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first and computational procedures of such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures. Examples of these sets for aircraft longitudinal and lateral aircraft dynamics are reported, and it is shown that these sets can be larger in size compared to the more commonly used safe sets. An approach to constrained maneuver planning based on chaining recoverable sets or integral safe sets is described and illustrated with a simulation example. To facilitate the application of this maneuver planning approach in aircraft loss of control (LOC) situations when the model is only identified at the current trim condition but when these sets need to be predicted at other flight conditions, the dependence trends of the safe and recoverable sets on aircraft flight conditions are characterized. The scaling procedure to estimate subsets of safe and recoverable sets at one trim condition based on their knowledge at another trim condition is defined. Finally, two control schemes that exploit integral safe sets are proposed. The first scheme, referred to as the controller state governor (CSG), resets the controller state (typically an integrator) to enforce the constraints and enlarge the set of plant states that can be recovered without constraint violation. The second scheme, referred to as the controller state and reference governor (CSRG), combines the controller state governor with the reference governor control architecture and provides the capability of simultaneously modifying the reference command and the controller state to enforce the constraints. Theoretical results that characterize the response properties of both schemes are presented. 
Examples are reported that illustrate the operation of these schemes on aircraft flight dynamics models and gas turbine engine dynamic models.
Data for Environmental Modeling (D4EM): Background and Applications of Data Automation
The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...
A Model Process for Institutional Goals-Setting. A Module of the Needs Assessment Project.
ERIC Educational Resources Information Center
King, Maxwell C.; And Others
A goals-setting model for the community/junior college that would interface with the community needs assessment model was developed, using as the survey instrument the Institutional Goals Inventory (I.G.I.) developed by the Educational Testing Service. The nine steps in the model are: Establish Committee on College Goals and Identify Goals Project…
Organization Domain Modeling. Volume 1. Conceptual Foundations, Process and Workproduct Description
1993-07-31
This volume describes domain analysis (DA) and modeling, including a structured set of workproducts, a tailorable process model, and a set of modeling techniques and guidelines. Cited work includes the Feature-Oriented Domain Analysis (FODA) Feasibility Study by J. A. Hess, W. E. Novak, and A. S. Peterson (Technical Report CMU/SEI-90-TR-21, Software Engineering Institute).
The procedures used in setting up the agricultural production model used in a study of alternatives for reducing insecticides on cotton and corn are described. The major analytical tool used is a spatial equilibrium model of U.S. agriculture. This is a linear programming model th...
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1993-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X DataSlice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
An interactive environment for the analysis of large Earth observation and model data sets
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.
1992-01-01
We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X Data Slice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.
Reproducible, Component-based Modeling with TopoFlow, A Spatial Hydrologic Modeling Toolkit
Peckham, Scott D.; Stoica, Maria; Jafarov, Elchin; ...
2017-04-26
Modern geoscientists have online access to an abundance of different data sets and models, but these resources differ from each other in myriad ways and this heterogeneity works against interoperability as well as reproducibility. The purpose of this paper is to illustrate the main issues and some best practices for addressing the challenge of reproducible science in the context of a relatively simple hydrologic modeling study for a small Arctic watershed near Fairbanks, Alaska. This study requires several different types of input data in addition to several, coupled model components. All data sets, model components and processing scripts (e.g., for preparation of data and figures, and for analysis of model output) are fully documented and made available online at persistent URLs. Similarly, all source code for the models and scripts is open-source, version controlled and made available online via GitHub. Each model component has a Basic Model Interface (BMI) to simplify coupling and its own HTML help page that includes a list of all equations and variables used. The set of all model components (TopoFlow) has also been made available as a Python package for easy installation. Three different graphical user interfaces for setting up TopoFlow runs are described, including one that allows model components to run and be coupled as web services.
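A minimal sketch of a Basic Model Interface (BMI) style component, illustrating the initialize/update/finalize pattern mentioned above; the linear-reservoir "model" inside is a placeholder and not a TopoFlow component, and the configuration keys are invented for this example.

```python
# Hypothetical BMI-style wrapper around a one-bucket linear reservoir.
class LinearReservoirBMI:
    def initialize(self, config=None):
        cfg = config or {}
        self.k = cfg.get("recession_coefficient", 0.1)    # 1/day
        self.storage = cfg.get("initial_storage", 10.0)   # mm
        self.time = 0.0

    def update(self):
        outflow = self.k * self.storage
        self.storage -= outflow
        self.time += 1.0                                   # daily time step
        return outflow

    def get_value(self, name):
        return {"storage": self.storage, "time": self.time}[name]

    def finalize(self):
        self.storage = None

model = LinearReservoirBMI()
model.initialize({"recession_coefficient": 0.2})
hydrograph = [model.update() for _ in range(5)]
print(hydrograph, model.get_value("storage"))
model.finalize()
```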
On the Asymptotic Relative Efficiency of Planned Missingness Designs.
Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D
2016-03-01
In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.
Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle
NASA Astrophysics Data System (ADS)
Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.
2017-04-01
Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the end of tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used in making accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using design of experiments technique. Regression method is used for training the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C), when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as tundish and caster.
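A minimal sketch of a regression surrogate of the kind described above: process parameters in, a stratified temperature response out, evaluated instantly once trained. The training data here are synthetic stand-ins for the CFD design-of-experiments runs, and the polynomial form and parameter ranges are assumptions for illustration.

```python
# Polynomial-regression surrogate trained on stand-in "CFD" samples.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
# columns: initial steel temperature [K], slag thickness [m], holding time [min]
X = np.column_stack([rng.uniform(1800, 1900, 60),
                     rng.uniform(0.05, 0.20, 60),
                     rng.uniform(10, 120, 60)])
# stand-in response: temperature drop at a given depth (a real table would come from CFD)
y = 0.15 * X[:, 2] - 80 * X[:, 1] + 0.01 * (X[:, 0] - 1850) + rng.normal(0, 1.0, 60)

surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
print(surrogate.predict([[1860.0, 0.10, 60.0]]))   # instantaneous prediction
```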
Determination of the Parameter Sets for the Best Performance of IPS-driven ENLIL Model
NASA Astrophysics Data System (ADS)
Yun, Jongyeon; Choi, Kyu-Cheol; Yi, Jonghyuk; Kim, Jaehun; Odstrcil, Dusan
2016-12-01
The interplanetary scintillation-driven (IPS-driven) ENLIL model was jointly developed by the University of California, San Diego (UCSD) and the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The model has been in operation at the Korean Space Weather Center (KSWC) since 2014. The IPS-driven ENLIL model has a variety of ambient solar wind parameters, and the results of the model depend on the combination of these parameters. We have conducted research to determine the best combination of parameters to improve the performance of the IPS-driven ENLIL model. The model results with inputs of 1,440 combinations of parameters were compared with Advanced Composition Explorer (ACE) observation data. In this way, the top 10 parameter sets showing the best performance were determined. Finally, the characteristics of these parameter sets were analyzed and the application of the results to the IPS-driven ENLIL model was discussed.
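A minimal sketch of the selection procedure described above: score every parameter combination against an observation series and keep the best-performing sets. The "model" function and the observation series below are placeholders (not ENLIL or ACE data); only the overall grid-search-and-rank pattern is illustrated, with a grid deliberately sized to 1,440 combinations.

```python
# Rank parameter combinations by RMSE against an observation series.
import itertools
import numpy as np

obs = np.sin(np.linspace(0, 6, 50)) * 400 + 450         # stand-in observed series

def run_model(scale, offset, damping):                   # placeholder for a model run
    t = np.linspace(0, 6, 50)
    return scale * np.sin(t) * np.exp(-damping * t) + offset

grid = itertools.product(np.linspace(300, 500, 10),      # parameter 1
                         np.linspace(350, 550, 12),      # parameter 2
                         np.linspace(0.0, 0.3, 12))      # parameter 3 -> 1,440 combinations
scores = [(np.sqrt(np.mean((run_model(*p) - obs) ** 2)), p) for p in grid]
top10 = sorted(scores, key=lambda s: s[0])[:10]
for rmse, params in top10:
    print(f"RMSE={rmse:6.1f}  params={params}")
```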
Ignorance is a bliss: Mathematical structure of many-box models
NASA Astrophysics Data System (ADS)
Tylec, Tomasz I.; Kuś, Marek
2018-03-01
We show that the propositional system of a many-box model is always a set-representable effect algebra. In particular cases of 2-box and 1-box models, it is an orthomodular poset and an orthomodular lattice, respectively. We discuss the relation of the obtained results with the so-called Local Orthogonality principle. We argue that non-classical properties of box models are the result of a dual enrichment of the set of states caused by the impoverishment of the set of propositions. On the other hand, quantum mechanical models always have more propositions as well as more states than the classical ones. Consequently, we show that the box models cannot be considered as generalizations of quantum mechanical models and seeking additional principles that could allow us to "recover quantum correlations" in box models are, at least from the fundamental point of view, pointless.
Integrating high dimensional bi-directional parsing models for gene mention tagging.
Hsu, Chun-Nan; Chang, Yu-Ming; Kuo, Cheng-Ju; Lin, Yu-Shi; Huang, Han-Shen; Chung, I-Fang
2008-07-01
Tagging gene and gene product mentions in scientific text is an important initial step of literature mining. In this article, we describe in detail our gene mention tagger that participated in the BioCreative 2 challenge and analyze what contributes to its good performance. Our tagger is based on the conditional random field (CRF) model, the most prevalent method for the gene mention tagging task in BioCreative 2. Our tagger is interesting because it accomplished the highest F-scores among CRF-based methods and the second highest overall. Moreover, we obtained our results by mostly applying open source packages, making it easy to duplicate our results. We first describe in detail how we developed our CRF-based tagger. We designed a very high dimensional feature set that includes most of the information that may be relevant. We trained bi-directional CRF models with the same set of features, one applying forward parsing and the other backward, and integrated the two models based on the output scores and dictionary filtering. One of the most prominent factors that contributes to the good performance of our tagger is the integration of an additional backward parsing model. However, from the definition of CRF, it appears that a CRF model is symmetric and bi-directional parsing models will produce the same results. We show that due to different feature settings, a CRF model can be asymmetric, and the feature setting for our tagger in BioCreative 2 not only produces different results but also gives backward parsing models a slight but consistent advantage over the forward parsing model. To fully explore the potential of integrating bi-directional parsing models, we applied different asymmetric feature settings to generate many bi-directional parsing models and integrated them based on the output scores. Experimental results show that this integrated model can achieve an even higher F-score solely based on the training corpus for gene mention tagging. Data sets, programs and an on-line service of our gene mention tagger can be accessed at http://aiia.iis.sinica.edu.tw/biocreative2.htm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdy, R.
A hierarchical model consisting of quantitative structure-activity relationships based mainly on chemical reactivity was developed to predict the carcinogenicity of organic chemicals to rodents. The model is comprised of quantitative structure-activity relationships, QSARs based on hypothesized mechanisms of action, metabolism, and partitioning. Predictors included octanol/water partition coefficient, molecular size, atomic partial charge, bond angle strain, atomic acceptor delocalizability, atomic radical superdelocalizability, the lowest unoccupied molecular orbital (LUMO) energy of the hypothesized intermediate nitrenium ion of primary aromatic amines, difference in charge of ionized and unionized carbon-chlorine bonds, substituent size and pattern on polynuclear aromatic hydrocarbons, the distance between lone electron pairs over a rigid structure, and the presence of functionalities such as nitroso and hydrazine. The model correctly classified 96% of the carcinogens in the training set of 306 chemicals, and 90% of the carcinogens in the test set of 301 chemicals. The test set by chance contained 84% of the positive thio-containing chemicals. A QSAR for these chemicals was developed. This post-test-set modified model correctly predicted 94% of the carcinogens in the test set. This model was used to predict the carcinogenicity of the 25 organic chemicals the U.S. National Toxicology Program was testing at the writing of this article. 12 refs., 3 tabs.
Predicting Mouse Liver Microsomal Stability with “Pruned” Machine Learning Models and Public Data
Perryman, Alexander L.; Stratton, Thomas P.; Ekins, Sean; Freundlich, Joel S.
2015-01-01
Purpose: Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Methods: Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). Results: "Pruning" out the moderately unstable/moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 hour. Conclusions: Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This study represents the most exhaustive study to date of using machine learning approaches with MLM data from public sources. PMID:26415647
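A minimal sketch of the "pruning" strategy described above: drop compounds with intermediate half-lives from the training data, then fit a simple Bayesian classifier and evaluate on everything. The descriptors and half-lives are synthetic (not the curated PubChem set), scikit-learn's Gaussian naive Bayes stands in for the paper's Bayesian models, and the 0.5–2 h pruning band is an assumption for illustration.

```python
# Prune the ambiguous middle band from the training set, then fit a Bayesian model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(5)
n = 894
descriptors = rng.normal(size=(n, 5))                                 # stand-in molecular descriptors
half_life = np.exp(0.8 * descriptors[:, 0] + rng.normal(0, 0.5, n))   # hours

stable = half_life >= 1.0                                 # modelling endpoint
keep = (half_life < 0.5) | (half_life > 2.0)              # prune moderately (un)stable compounds
model = GaussianNB().fit(descriptors[keep], stable[keep])

accuracy = (model.predict(descriptors) == stable).mean()  # evaluate on the full set
print(f"training compounds kept: {keep.sum()} of {n}, accuracy on full set: {accuracy:.2f}")
```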
Predicting Mouse Liver Microsomal Stability with "Pruned" Machine Learning Models and Public Data.
Perryman, Alexander L; Stratton, Thomas P; Ekins, Sean; Freundlich, Joel S
2016-02-01
Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). "Pruning" out the moderately unstable / moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 h. Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This study represents the most exhaustive study to date of using machine learning approaches with MLM data from public sources.
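As an illustration of the pruning idea (the half-life cut-offs, fingerprint features, and classifier below are assumptions for the sketch, not the published protocol), one might drop compounds with intermediate half-lives before fitting a Bayesian-style classifier:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
half_life_h = rng.exponential(1.0, size=894)        # surrogate MLM half-lives (hours)
fingerprints = rng.integers(0, 2, size=(894, 128))  # surrogate binary fingerprints

# "Prune" the moderately unstable/moderately stable middle of the distribution;
# the 0.5 h and 1.0 h cut-offs here are illustrative assumptions.
stable = half_life_h >= 1.0
unstable = half_life_h < 0.5
keep = stable | unstable

model = BernoulliNB().fit(fingerprints[keep], stable[keep].astype(int))
p_stable = model.predict_proba(fingerprints)[:, 1]   # P(half-life >= 1 h) per compound
print("predicted stable compounds:", int((p_stable >= 0.5).sum()))
```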
Chen, Guangchao; Li, Xuehua; Chen, Jingwen; Zhang, Ya-Nan; Peijnenburg, Willie J G M
2014-12-01
Biodegradation is the principal environmental dissipation process of chemicals. As such, it is a dominant factor determining the persistence and fate of organic chemicals in the environment, and is therefore of critical importance to chemical management and regulation. In the present study, the authors developed in silico methods for assessing biodegradability based on a large heterogeneous set of 825 organic compounds, using the techniques of the C4.5 decision tree, the functional inner regression tree, and logistic regression. External validation was subsequently carried out with 2 independent test sets of 777 and 27 chemicals. The functional inner regression tree exhibited the best predictability, with predictive accuracies of 81.5% and 81.0% on the training set (825 chemicals) and test set I (777 chemicals), respectively. Performance of the developed models on the 2 test sets was subsequently compared with that of the Estimation Program Interface (EPI) Suite Biowin 5 and Biowin 6 models, and this comparison again showed the better predictability of the functional inner regression tree model. The model built in the present study exhibits reasonable predictability compared with existing models while possessing a transparent algorithm. Interpretation of the mechanisms of biodegradation was also carried out based on the models developed. © 2014 SETAC.
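A minimal sketch of this kind of modelling workflow (synthetic descriptors and labels; C4.5 is approximated here by scikit-learn's CART implementation):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X_train = rng.normal(size=(825, 12))      # surrogate molecular descriptors
y_train = rng.integers(0, 2, size=825)    # 1 = readily biodegradable (synthetic labels)
X_test = rng.normal(size=(777, 12))       # external validation set I
y_test = rng.integers(0, 2, size=777)

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
print("external accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```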
Exploiting similarity in turbulent shear flows for turbulence modeling
NASA Technical Reports Server (NTRS)
Robinson, David F.; Harris, Julius E.; Hassan, H. A.
1992-01-01
It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set and comparison with measured shear stress and velocity profiles yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.
Exploiting similarity in turbulent shear flows for turbulence modeling
NASA Astrophysics Data System (ADS)
Robinson, David F.; Harris, Julius E.; Hassan, H. A.
1992-12-01
It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set and comparison with measured shear stress and velocity profiles yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.
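For reference, the log-law region mentioned above is the classical relation (standard constants shown; the exact values are not taken from this paper):

```latex
u^{+} = \frac{1}{\kappa}\,\ln y^{+} + B, \qquad \kappa \approx 0.41, \quad B \approx 5.0
```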
Diagnostic Profiles: A Standard Setting Method for Use with a Cognitive Diagnostic Model
ERIC Educational Resources Information Center
Skaggs, Gary; Hein, Serge F.; Wilkins, Jesse L. M.
2016-01-01
This article introduces the Diagnostic Profiles (DP) standard setting method for setting a performance standard on a test developed from a cognitive diagnostic model (CDM), the outcome of which is a profile of mastered and not-mastered skills or attributes rather than a single test score. In the DP method, the key judgment task for panelists is a…
ERIC Educational Resources Information Center
Carter, Wesley
An instructor's manual and student activity guide on building a model greenhouse and growing plants are provided in this set of prevocational education materials which focuses on the vocational area of agriculture (ornamental horticulture). (This set of materials is one of ninety-two prevocational education sets arranged around a cluster of seven…
NASA Astrophysics Data System (ADS)
Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard
2015-04-01
Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (< 10000 km2) around the globe was used to set up and evaluate five model parameterization schemes at global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially-uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.
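A toy sketch of the EXP1-style direct transfer (the descriptors, similarity measure, and array sizes below are assumptions, not the study's exact formulation): each ungauged grid cell receives the calibrated parameter set of its most similar donor catchment.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)
donor_descriptors = rng.normal(size=(1215, 6))   # climatic/physiographic descriptors
donor_parameters = rng.uniform(size=(1215, 10))  # calibrated HBV-like parameter sets
cell_descriptors = rng.normal(size=(2000, 6))    # descriptors for ungauged grid cells

# Standardize descriptors so each contributes comparably to the distance.
mu, sigma = donor_descriptors.mean(axis=0), donor_descriptors.std(axis=0)
d_std = (donor_descriptors - mu) / sigma
c_std = (cell_descriptors - mu) / sigma

# Euclidean similarity: assign each cell the parameters of its nearest donor.
nearest_donor = cdist(c_std, d_std).argmin(axis=1)
cell_parameters = donor_parameters[nearest_donor]
print(cell_parameters.shape)   # (2000, 10)
```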
NASA Astrophysics Data System (ADS)
Toohey, M.; Krüger, K.; Bittner, M.; Timmreck, C.; Schmidt, H.
2014-12-01
Observations and simple theoretical arguments suggest that the Northern Hemisphere (NH) stratospheric polar vortex is stronger in winters following major volcanic eruptions. However, recent studies show that climate models forced by prescribed volcanic aerosol fields fail to reproduce this effect. We investigate the impact of volcanic aerosol forcing on stratospheric dynamics, including the strength of the NH polar vortex, in ensemble simulations with the Max Planck Institute Earth System Model. The model is forced by four different prescribed forcing sets representing the radiative properties of stratospheric aerosol following the 1991 eruption of Mt. Pinatubo: two forcing sets are based on observations, and are commonly used in climate model simulations, and two forcing sets are constructed based on coupled aerosol-climate model simulations. For all forcings, we find that simulated temperature and zonal wind anomalies in the NH high latitudes are not directly impacted by anomalous volcanic aerosol heating. Instead, high-latitude effects result from enhancements in stratospheric residual circulation, which in turn result, at least in part, from enhanced stratospheric wave activity. High-latitude effects are therefore much less robust than would be expected if they were the direct result of aerosol heating. Both observation-based forcing sets result in insignificant changes in vortex strength. For the model-based forcing sets, the vortex response is found to be sensitive to the structure of the forcing, with one forcing set leading to significant strengthening of the polar vortex in rough agreement with observation-based expectations. Differences in the dynamical response to the forcing sets imply that reproducing the polar vortex responses to past eruptions, or predicting the response to future eruptions, depends on accurate representation of the space-time structure of the volcanic aerosol forcing.
Modeling and analysis of energy quantization effects on single electron inverter performance
NASA Astrophysics Data System (ADS)
Dan, Surya Shankar; Mahapatra, Santanu
2009-08-01
In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and propagation delay of the SET inverter. A new analytical model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter against the effects of energy quantization is studied. A compact expression is developed for the quantization threshold, a novel parameter introduced for the first time in this paper. The quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that an SET inverter designed with C_T : C_G = 1/3 (where C_T and C_G are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.
A level set approach for shock-induced α-γ phase transition of RDX
NASA Astrophysics Data System (ADS)
Josyula, Kartik; Rahul; De, Suvranu
2018-02-01
We present a thermodynamically consistent level-set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level-set evolution equation and maintains the signed distance property of the level-set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level-set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level-set evolution in the three-dimensional model. The level-set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization-energy-based level-set approach is efficient, robust, and easy to implement.
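For context, a generic level-set evolution with normal interface speed v_n, together with the signed-distance property the regularization is designed to preserve, reads (standard forms, not the paper's specific regularized functional):

```latex
\frac{\partial \phi}{\partial t} + v_n \,\lvert \nabla \phi \rvert = 0,
\qquad \lvert \nabla \phi \rvert = 1 \quad \text{(signed distance)}
```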
Revell, Andrew D; Wang, Dechao; Perez-Elias, Maria-Jesus; Wood, Robin; Cogill, Dolphina; Tempelman, Hugo; Hamers, Raph L; Reiss, Peter; van Sighem, Ard I; Rehm, Catherine A; Pozniak, Anton; Montaner, Julio S G; Lane, H Clifford; Larder, Brendan A
2018-06-08
Optimizing antiretroviral drug combination on an individual basis can be challenging, particularly in settings with limited access to drugs and genotypic resistance testing. Here we describe our latest computational models to predict treatment responses, with or without a genotype, and compare their predictive accuracy with that of genotyping. Random forest models were trained to predict the probability of virological response to a new therapy introduced following virological failure using up to 50 000 treatment change episodes (TCEs) without a genotype and 18 000 TCEs including genotypes. Independent data sets were used to evaluate the models. This study tested the effects on model accuracy of relaxing the baseline data timing windows, the use of a new filter to exclude probable non-adherent cases and the addition of maraviroc, tipranavir and elvitegravir to the system. The no-genotype models achieved area under the receiver operator characteristic curve (AUC) values of 0.82 and 0.81 using the standard and relaxed baseline data windows, respectively. The genotype models achieved AUC values of 0.86 with the new non-adherence filter and 0.84 without. Both sets of models were significantly more accurate than genotyping with rules-based interpretation, which achieved AUC values of only 0.55-0.63, and were marginally more accurate than previous models. The models were able to identify alternative regimens that were predicted to be effective for the vast majority of cases in which the new regimen prescribed in the clinic failed. These latest global models predict treatment responses accurately even without a genotype and have the potential to help optimize therapy, particularly in resource-limited settings.
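A minimal sketch of training and scoring a random-forest response model of this kind (synthetic features and labels; all names and sizes are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 30))                         # surrogate baseline variables per TCE
y = (X[:, 0] + rng.normal(size=5000) > 0).astype(int)   # 1 = virological response

X_train, X_eval, y_train, y_eval = X[:4000], X[4000:], y[:4000], y[4000:]
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_eval, rf.predict_proba(X_eval)[:, 1])
print("AUC on held-out TCEs:", round(auc, 2))
```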
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; University of Primorska,; Turk, Dušan, E-mail: dusan.turk@ijs.si
2014-12-01
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
The Thick Level-Set model for dynamic fragmentation
Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas
2017-01-04
The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future. © 2015 The Authors Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
Metabolomics biomarkers to predict acamprosate treatment response in alcohol-dependent subjects.
Hinton, David J; Vázquez, Marely Santiago; Geske, Jennifer R; Hitschfeld, Mario J; Ho, Ada M C; Karpyak, Victor M; Biernacka, Joanna M; Choi, Doo-Sup
2017-05-31
Precision medicine for alcohol use disorder (AUD) allows optimal treatment of the right patient with the right drug at the right time. Here, we generated multivariable models incorporating clinical information and serum metabolite levels to predict acamprosate treatment response. The sample of 120 patients was randomly split into a training set (n = 80) and test set (n = 40) five independent times. Treatment response was defined as complete abstinence (no alcohol consumption during 3 months of acamprosate treatment) while nonresponse was defined as any alcohol consumption during this period. In each of the five training sets, we built a predictive model using a least absolute shrinkage and selection operator (LASSO) penalized selection method and then evaluated the predictive performance of each model in the corresponding test set. The models predicted acamprosate treatment response with a mean sensitivity and specificity in the test sets of 0.83 and 0.31, respectively, suggesting our model performed well at predicting responders, but not non-responders (i.e. many non-responders were predicted to respond). Studies with larger sample sizes and additional biomarkers will expand the clinical utility of predictive algorithms for pharmaceutical response in AUD.
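A rough sketch of a single train/test split with an L1-penalized (LASSO-style) logistic model (the features, penalty strength, and implementation are assumptions; the study repeated this over five independent splits):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 25))        # surrogate clinical variables + metabolite levels
y = rng.integers(0, 2, size=120)      # 1 = complete abstinence (responder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=80, random_state=0)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, lasso.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```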
An experimental methodology for a fuzzy set preference model
NASA Technical Reports Server (NTRS)
Turksen, I. B.; Willson, Ian A.
1992-01-01
A flexible fuzzy set preference model first requires approximate methodologies for implementation. Fuzzy sets must be defined for each individual consumer using computer software, requiring a minimum of time and expertise on the part of the consumer. The amount of information needed in defining sets must also be established. The model itself must adapt fully to the subject's choice of attributes (vague or precise), attribute levels, and importance weights. The resulting individual-level model should be fully adapted to each consumer. The methodologies needed to develop this model will be equally useful in a new generation of intelligent systems which interact with ordinary consumers, controlling electronic devices through fuzzy expert systems or making recommendations based on a variety of inputs. The power of personal computers and their acceptance by consumers have yet to be fully utilized to create interactive knowledge systems that fully adapt their function to the user. Understanding individual consumer preferences is critical to the design of new products and the estimation of demand (market share) for existing products, which in turn is an input to management systems concerned with production and distribution. The question of what to make, for whom to make it and how much to make requires an understanding of the customer's preferences and the trade-offs that exist between alternatives. Conjoint analysis is a widely used methodology which decomposes an overall preference for an object into a combination of preferences for its constituent parts (attributes such as taste and price), which are combined using an appropriate combination function. Preferences are often expressed using linguistic terms which cannot be represented in conjoint models. Current models are also not implemented at an individual level, making it difficult to reach meaningful conclusions about the cause of an individual's behavior from an aggregate model. The combination of complex aggregate models and vague linguistic preferences has greatly limited the usefulness and predictive validity of existing preference models. A fuzzy set preference model that uses linguistic variables and a fully interactive implementation should be able to simultaneously address these issues and substantially improve the accuracy of demand estimates. The parallel implementation of crisp and fuzzy conjoint models using identical data not only validates the fuzzy set model but also provides an opportunity to assess the impact of fuzzy set definitions and individual attribute choices implemented in the interactive methodology developed in this research. The generalized experimental tools needed for conjoint models can also be applied to many other types of intelligent systems.
Interpretable Decision Sets: A Joint Framework for Description and Prediction
Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure
2016-01-01
One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. PMID:27853627
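An illustrative (entirely invented) decision set and how its independent if-then rules might be applied; the rules, attributes, and default class are made up for the sketch, not taken from the paper:

```python
# Each rule is (condition, predicted class); rules are independent, so any rule
# whose condition fires can be read and applied on its own.
RULES = [
    (lambda x: x["age"] > 50 and x["bp"] == "high",     "at-risk"),
    (lambda x: x["bmi"] < 18.5,                         "at-risk"),
    (lambda x: x["age"] <= 50 and x["bp"] == "normal",  "healthy"),
]
DEFAULT = "healthy"

def classify(record):
    votes = [label for condition, label in RULES if condition(record)]
    # Majority vote among fired rules; fall back to the default class if none fire.
    return max(set(votes), key=votes.count) if votes else DEFAULT

print(classify({"age": 62, "bp": "high", "bmi": 24.0}))   # -> at-risk
```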
Luo, Shuang; Wei, Zongsu; Spinney, Richard; Villamena, Frederick A; Dionysiou, Dionysios D; Chen, Dong; Tang, Chong-Jian; Chai, Liyuan; Xiao, Ruiyang
2018-02-15
Sulfate radical anion (SO4•−) and hydroxyl radical (OH) based advanced oxidation technologies have been extensively used for the removal of aromatic contaminants (ACs) in waters. In this study, we investigated the Gibbs free energy (ΔG°_SET) of the single electron transfer (SET) reactions of 76 ACs with SO4•− and OH, respectively. The results reveal that SO4•− possesses a greater propensity than OH to react with ACs through the SET channel. We hypothesized that the electron distribution within the molecule plays an essential role in determining ΔG°_SET and the subsequent SET reactions. To test the hypothesis, a quantitative structure-activity relationship (QSAR) model was developed for predicting ΔG°_SET using the highest occupied molecular orbital energy (E_HOMO), a measure of electron distribution and donating ability. The standardized QSAR models are reported to be ΔG°_SET = −0.97 × E_HOMO − 181 and ΔG°_SET = −0.97 × E_HOMO − 164 for SO4•− and OH, respectively. The models were internally and externally validated to ensure robustness and predictability, and the application domain and limitations were discussed. The single-descriptor models account for 95% of the variability for SO4•− and OH. These results provide mechanistic insight into the SET reaction pathway of radical and non-radical bimolecular reactions, and have important applications for radical-based oxidation technologies to remove target ACs in different waters. Copyright © 2017 Elsevier B.V. All rights reserved.
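Taking the reported single-descriptor models at face value, ΔG°_SET follows directly from E_HOMO; a small sketch (note the abstract describes these as standardized models, so the inputs are presumably standardized descriptor values, and the example input below is purely illustrative):

```python
def delta_g_set(e_homo, oxidant="SO4"):
    """Reported single-descriptor QSAR models: intercept -181 for SO4 radical, -164 for OH."""
    intercept = -181.0 if oxidant == "SO4" else -164.0
    return -0.97 * e_homo + intercept

e_homo_example = -0.25   # illustrative (standardized) E_HOMO value, not from the paper
print(delta_g_set(e_homo_example, "SO4"), delta_g_set(e_homo_example, "OH"))
```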
Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W
2015-01-01
In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data.
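For reference, the traditional 'n-1' estimate that the regression tools are benchmarked against can be computed from observed cluster sizes; a small sketch with made-up cluster sizes:

```python
def n_minus_1_estimate(cluster_sizes):
    """Recent transmission proportion via the 'n-1' method: a cluster of size n
    contributes n - 1 recently transmitted cases."""
    total_cases = sum(cluster_sizes)
    recent = sum(n - 1 for n in cluster_sizes)
    return recent / total_cases

# Three genotype clusters (sizes 4, 3, 2) plus 11 unclustered cases.
print(n_minus_1_estimate([4, 3, 2] + [1] * 11))   # -> 0.3
```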
A population-based model for priority setting across the care continuum and across modalities
Segal, Leonie; Mortimer, Duncan
2006-01-01
Background: The Health-sector Wide (HsW) priority setting model is designed to shift the focus of priority setting away from 'program budgets' – which are typically defined by modality or disease stage – and towards well-defined target populations with a particular disease/health problem. Methods: The key features of the HsW model are i) a disease/health problem framework, ii) a sequential approach to covering the entire health sector, iii) comprehensiveness of scope in identifying intervention options and iv) the use of objective evidence. The HsW model redefines the unit of analysis over which priorities are set to include all mutually exclusive and complementary interventions for the prevention and treatment of each disease/health problem under consideration. The HsW model is therefore incompatible with the fragmented approach to priority setting across multiple program budgets that currently characterises allocation in many health systems. The HsW model employs standard cost-utility analyses and decision-rules with the aim of maximising QALYs contingent upon the global budget constraint for the set of diseases/health problems under consideration. It is recognised that the objective function may include non-health arguments that would imply a departure from simple QALY maximisation and that political constraints frequently limit degrees of freedom. In addressing these broader considerations, the HsW model can be modified to maximise value-weighted QALYs contingent upon the global budget constraint and any political constraints bearing upon allocation decisions. Results: The HsW model has been applied in several contexts, recently to osteoarthritis, which has demonstrated both its practical application and its capacity to derive clear evidence-based policy recommendations. Conclusion: Comparisons with other approaches to priority setting, such as Programme Budgeting and Marginal Analysis (PBMA) and modality-based cost-effectiveness comparisons, as typified by Australia's Pharmaceutical Benefits Advisory Committee process for the listing of pharmaceuticals for government funding, demonstrate the value added by the HsW model, notably its greater likelihood of contributing to allocative efficiency. PMID:16566841
Chen, Xuewu; Wei, Ming; Wu, Jingxian; Hou, Xianyao
2014-01-01
Most traditional mode choice models are based on the principle of random utility maximization derived from econometric theory. Alternatively, mode choice modeling can be regarded as a pattern recognition problem reflected from the explanatory variables of determining the choices between alternatives. The paper applies the knowledge discovery technique of rough sets theory to model travel mode choices incorporating household and individual sociodemographics and travel information, and to identify the significance of each attribute. The study uses the detailed travel diary survey data of Changxing county which contains information on both household and individual travel behaviors for model estimation and evaluation. The knowledge is presented in the form of easily understood IF-THEN statements or rules which reveal how each attribute influences mode choice behavior. These rules are then used to predict travel mode choices from information held about previously unseen individuals and the classification performance is assessed. The rough sets model shows high robustness and good predictive ability. The most significant condition attributes identified to determine travel mode choices are gender, distance, household annual income, and occupation. Comparative evaluation with the MNL model also proves that the rough sets model gives superior prediction accuracy and coverage on travel mode choice modeling. PMID:25431585
Abstraction and model evaluation in category learning.
Vanpaemel, Wolf; Storms, Gert
2010-05-01
Thirty previously published data sets, from seminal category learning tasks, are reanalyzed using the varying abstraction model (VAM). Unlike a prototype-versus-exemplar analysis, which focuses on extreme levels of abstraction only, a VAM analysis also considers the possibility of partial abstraction. Whereas most data sets support no abstraction when only the extreme possibilities are considered, we show that evidence for abstraction can be provided using the broader view on abstraction provided by the VAM. The present results generalize earlier demonstrations of partial abstraction (Vanpaemel & Storms, 2008), in which only a small number of data sets was analyzed. Following the dominant modus operandi in category learning research, Vanpaemel and Storms evaluated the models on their best fit, a practice known to ignore the complexity of the models under consideration. In the present study, in contrast, model evaluation not only relies on the maximal likelihood, but also on the marginal likelihood, which is sensitive to model complexity. Finally, using a large recovery study, it is demonstrated that, across the 30 data sets, complexity differences between the models in the VAM family are small. This indicates that a (computationally challenging) complexity-sensitive model evaluation method is uncalled for, and that the use of a (computationally straightforward) complexity-insensitive model evaluation method is justified.
Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments
NASA Astrophysics Data System (ADS)
Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.
2015-12-01
The lack of understanding of the Earth's geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply an MPS algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce fairly accurately the natural variability of the experimental depositional system. Furthermore, the selected training images provide process information. They fall into three basic patterns: a channelized end member, a sheet flow end member, and one intermediate case. These represent the continuum between autogenic bypass or erosion, and net deposition.
A Comparison of different learning models used in Data Mining for Medical Data
NASA Astrophysics Data System (ADS)
Srimani, P. K.; Koti, Manjula Sanjay
2011-12-01
The present study aims to investigate different data mining learning models for different medical data sets and to give practical guidelines for selecting the most appropriate algorithm for a specific medical data set. In practical situations, it is absolutely necessary to take decisions with regard to the appropriate models and parameters for diagnosis and prediction problems. Learning models and algorithms are widely implemented for rule extraction and the prediction of system behavior. In this paper, some well-known Machine Learning (ML) systems are investigated with different methods and tested on five medical data sets. The practical criteria for evaluating different learning models are presented and the potential benefits of the proposed methodology for diagnosis and learning are suggested.
Pelletier, Jon D.; Broxton, Patrick D.; Hazenberg, Pieter; ...
2016-01-22
Earth’s terrestrial near-subsurface environment can be divided into relatively porous layers of soil, intact regolith, and sedimentary deposits above unweathered bedrock. Variations in the thicknesses of these layers control the hydrologic and biogeochemical responses of landscapes. Currently, Earth System Models approximate the thickness of these relatively permeable layers above bedrock as uniform globally, despite the fact that their thicknesses vary systematically with topography, climate, and geology. To meet the need for more realistic input data for models, we developed a high-resolution gridded global data set of the average thicknesses of soil, intact regolith, and sedimentary deposits within each 30 arcsec (~ 1 km) pixel using the best available data for topography, climate, and geology as input. Our data set partitions the global land surface into upland hillslope, upland valley bottom, and lowland landscape components and uses models optimized for each landform type to estimate the thicknesses of each subsurface layer. On hillslopes, the data set is calibrated and validated using independent data sets of measured soil thicknesses from the U.S. and Europe and on lowlands using depth to bedrock observations from groundwater wells in the U.S. As a result, we anticipate that the data set will prove useful as an input to regional and global hydrological and ecosystems models.
BUMPER v1.0: a Bayesian user-friendly model for palaeo-environmental reconstruction
NASA Astrophysics Data System (ADS)
Holden, Philip B.; Birks, H. John B.; Brooks, Stephen J.; Bush, Mark B.; Hwang, Grace M.; Matthews-Bird, Frazer; Valencia, Bryan G.; van Woesik, Robert
2017-02-01
We describe the Bayesian user-friendly model for palaeo-environmental reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring ˜ 2 s to build a 100-taxon model from a 100-site training set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training sets under ideal assumptions. We then use these to demonstrate the sensitivity of reconstructions to the characteristics of the training set, considering assemblage richness, taxon tolerances, and the number of training sites. We find that a useful guideline for the size of a training set is to provide, on average, at least 10 samples of each taxon. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. An identically configured model is used in each application, the only change being the input files that provide the training-set environment and taxon-count data. The performance of BUMPER is shown to be comparable with weighted average partial least squares (WAPLS) in each case. Additional artificial datasets are constructed with similar characteristics to the real data, and these are used to explore the reasons for the differing performances of the different training sets.
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.; Broxton, Patrick D.; Hazenberg, Pieter; Zeng, Xubin; Troch, Peter A.; Niu, Guo-Yue; Williams, Zachary; Brunke, Michael A.; Gochis, David
2016-03-01
Earth's terrestrial near-subsurface environment can be divided into relatively porous layers of soil, intact regolith, and sedimentary deposits above unweathered bedrock. Variations in the thicknesses of these layers control the hydrologic and biogeochemical responses of landscapes. Currently, Earth System Models approximate the thickness of these relatively permeable layers above bedrock as uniform globally, despite the fact that their thicknesses vary systematically with topography, climate, and geology. To meet the need for more realistic input data for models, we developed a high-resolution gridded global data set of the average thicknesses of soil, intact regolith, and sedimentary deposits within each 30 arcsec (˜1 km) pixel using the best available data for topography, climate, and geology as input. Our data set partitions the global land surface into upland hillslope, upland valley bottom, and lowland landscape components and uses models optimized for each landform type to estimate the thicknesses of each subsurface layer. On hillslopes, the data set is calibrated and validated using independent data sets of measured soil thicknesses from the U.S. and Europe and on lowlands using depth to bedrock observations from groundwater wells in the U.S. We anticipate that the data set will prove useful as an input to regional and global hydrological and ecosystems models.
Numerical Modelling of Three-Fluid Flow Using The Level-set Method
NASA Astrophysics Data System (ADS)
Li, Hongying; Lou, Jing; Shang, Zhi
2014-11-01
This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. Numerical solution is performed on a fixed mesh using the finite volume method. Surface tension effect is incorporated using the Continuum Surface Force model. Validation of the present model is made against available results for stratified flow and rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research from A*STAR, Singapore (Ref. #: 1021640075).
Li, Xiaomeng; Yang, Zhuo
2017-01-01
As a sustainable transportation mode, high-speed railway (HSR) has become an efficient way to meet huge travel demand. However, due to the high acquisition and maintenance costs, it is impossible to build enough infrastructure and purchase enough train-sets. Great efforts are therefore required to improve the transport capability of HSR. The utilization efficiency of train-sets (the carrying tools of HSR) is one of the most important factors in the transport capacity of HSR. In order to enhance the utilization efficiency of train-sets, this paper proposes a train-set circulation optimization model that minimizes the total connection time. An innovative two-stage approach comprising segment generation and segment combination was designed to solve this model. To verify the feasibility of the proposed approach, an experiment was carried out on the Beijing-Tianjin passenger dedicated line to fulfill a train diagram of 174 trips. The model results showed that, compared with the traditional Ant Colony Algorithm (ACA), the utilization efficiency of train-sets can be increased from 43.4% (ACA) to 46.9% (two-stage), and one train-set can be saved while fulfilling the same transportation tasks. The proposed approach is faster and more stable than traditional ones; using it, HSR staff can draw up the train-set circulation plan more quickly, and the utilization efficiency of the HSR system is also improved. PMID:28489933
Studies on the population dynamics of a rumor-spreading model in online social networks
NASA Astrophysics Data System (ADS)
Dong, Suyalatu; Fan, Feng-Hua; Huang, Yong-Chang
2018-02-01
This paper sets up a rumor-spreading model for online social networks based on the European fox rabies SIR model. The model considers the impact of the changing number of online social network users, combining transmission dynamics with population dynamics to describe rumor spreading in online social networks. Simulations are carried out on an online social network, and the results show that the new rumor-spreading model is in accordance with the real propagation characteristics of online social networks.
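A minimal numerical sketch of an SIR-type rumor model with a varying user population (the rates and the registration/departure terms are illustrative assumptions, not the paper's exact equations):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1        # spreading and stifling rates (assumed)
mu_in, mu_out = 0.01, 0.01    # user registration and departure rates (assumed)

def rumor(t, y):
    s, i, r = y               # ignorants, spreaders, stiflers
    n = s + i + r             # total (time-varying) user population
    ds = mu_in * n - beta * s * i / n - mu_out * s
    di = beta * s * i / n - gamma * i - mu_out * i
    dr = gamma * i - mu_out * r
    return [ds, di, dr]

sol = solve_ivp(rumor, (0.0, 200.0), [0.99, 0.01, 0.0], max_step=0.5)
print("peak spreader fraction:", float(sol.y[1].max()))
```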
Implementing Model-Check for Employee and Management Satisfaction
NASA Technical Reports Server (NTRS)
Jones, Corey; LaPha, Steven
2013-01-01
This presentation will discuss methods by which ModelCheck can be implemented to not only improve model quality but also satisfy both employees and management through different sets of quality checks. This approach allows a standard set of modeling practices to be upheld throughout a company, with minimal interaction required by the end user. The presenter will demonstrate how to create multiple ModelCheck standards, preventing users from evading the system, and how it can improve the quality of drawings and models.
A Comprehensive Multi-Level Model for Campus-Based Leadership Education
ERIC Educational Resources Information Center
Rosch, David; Spencer, Gayle L.; Hoag, Beth L.
2017-01-01
Within this application brief, we propose a comprehensive model for mapping the shape and optimizing the effectiveness of leadership education in campus-wide university settings. The four-level model is highlighted by inclusion of a philosophy statement detailing the values and purpose of leadership education on campus, a set of skills and…
Testing an Instructional Model in a University Educational Setting from the Student's Perspective
ERIC Educational Resources Information Center
Betoret, Fernando Domenech
2006-01-01
We tested a theoretical model that hypothesized relationships between several variables from input, process and product in an educational setting, from the university student's perspective, using structural equation modeling. In order to carry out the analysis, we measured in sequential order the input (referring to students' personal…
Relativistic proton-nucleus scattering and one-boson-exchange models
NASA Technical Reports Server (NTRS)
Maung, Khin Maung; Gross, Franz; Tjon, J. A.; Townsend, L. W.; Wallace, S. J.
1993-01-01
Relativistic p-(Ca-40) elastic scattering observables are calculated using four sets of relativistic NN amplitudes obtained from different one-boson-exchange (OBE) models. The first two sets are based upon a relativistic equation in which one particle is on mass shell and the other two sets are obtained from a quasipotential reduction of the Bethe-Salpeter equation. Results at 200, 300, and 500 MeV are presented for these amplitudes. Differences between the predictions of these models provide a study of the uncertainty in constructing Dirac optical potentials from OBE-based NN amplitudes.
A discrete scattering series representation for lattice embedded models of chain cyclization
NASA Astrophysics Data System (ADS)
Fraser, Simon J.; Winnik, Mitchell A.
1980-01-01
In this paper we develop a lattice-based model of chain cyclization in the presence of a set of occupied sites V in the lattice. We show that within the approximation of a Markovian chain propagator the effect of V on the partition function for the system can be written as a time-ordered exponential series in which V behaves like a scattering potential and chain length is the timelike parameter. The discrete and finite nature of this model allows us to obtain rigorous upper and lower bounds to the series limit. We adapt these formulas to calculation of the partition functions and cyclization probabilities of terminally and globally cyclizing chains. Two classes of cyclization are considered: in the first model the target set H may be visited repeatedly (the Markovian model); in the second case vertices in H may be visited at most once (the non-Markovian or taboo model). This formulation depends on two fundamental combinatorial structures, namely the inclusion-exclusion principle and the set of subsets of a set. We have tried to interpret these abstract structures with physical analogies throughout the paper.
Skill Assessment in Ocean Biological Data Assimilation
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; Friedrichs, Marjorie A. M.; Robinson, Allan R.; Rose, Kenneth A.; Schlitzer, Reiner; Thompson, Keith R.; Doney, Scott C.
2008-01-01
There is growing recognition that rigorous skill assessment is required to understand the ability of ocean biological models to represent ocean processes and distributions. Statistical analysis of model results with observations represents the most quantitative form of skill assessment, and this principle serves as well for data assimilation models. However, skill assessment for data assimilation requires special consideration. This is because there are three sets of information: the free-run model, the data, and the assimilation model, which uses information from both the free-run model and the data. Intercomparison of results among the three sets of information is important and useful for assessment, but is not conclusive since the three information sets are intertwined. An independent data set is necessary for an objective determination. Other useful measures of ocean biological data assimilation assessment include responses of unassimilated variables to the data assimilation, performance outside the prescribed region/time of interest, forecasting, and trend analysis. Examples of each approach from the literature are provided. A comprehensive list of ocean biological data assimilation studies and their applications of skill assessment, in both ecosystem/biogeochemical and fisheries efforts, is summarized.
The effectiveness of flipped classroom learning model in secondary physics classroom setting
NASA Astrophysics Data System (ADS)
Prasetyo, B. D.; Suprapto, N.; Pudyastomo, R. N.
2018-03-01
The research aimed to describe the effectiveness of the flipped classroom learning model in a secondary physics classroom setting during the Fall semester of 2017. The research object was the Secondary 3 Physics group of Singapore School Kelapa Gading. The research began with a pre-test, followed by treatment with the flipped classroom learning model. At the end of the learning process, the pupils were given a post-test and a questionnaire to gauge their response to the flipped classroom learning model. Based on the data analysis, 89% of pupils passed the minimum standard criteria. The improvement in the students' marks was analysed with the normalized gain (n-gain) formula, giving a score of 0.4, which falls in the medium category. The questionnaire responses showed that 93% of the students became more motivated to study physics and 89% were very happy to carry out hands-on activities based on the flipped classroom learning model. These three aspects support the conclusion that the flipped classroom learning model is effective in a secondary physics classroom setting.
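For reference, the normalized gain used here is the standard Hake formula (not restated in the abstract), with S_max the maximum attainable score:

```latex
\langle g \rangle = \frac{S_{\text{post}} - S_{\text{pre}}}{S_{\text{max}} - S_{\text{pre}}}
```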
Scobbie, Lesley; Dixon, Diane; Wyke, Sally
2011-05-01
Setting and achieving goals is fundamental to rehabilitation practice but has been criticized for being a-theoretical and the key components of replicable goal-setting interventions are not well established. To describe the development of a theory-based goal setting practice framework for use in rehabilitation settings and to detail its component parts. Causal modelling was used to map theories of behaviour change onto the process of setting and achieving rehabilitation goals, and to suggest the mechanisms through which patient outcomes are likely to be affected. A multidisciplinary task group developed the causal model into a practice framework for use in rehabilitation settings through iterative discussion and implementation with six patients. Four components of a goal-setting and action-planning practice framework were identified: (i) goal negotiation, (ii) goal identification, (iii) planning, and (iv) appraisal and feedback. The variables hypothesized to effect change in patient outcomes were self-efficacy and action plan attainment. A theory-based goal setting practice framework for use in rehabilitation settings is described. The framework requires further development and systematic evaluation in a range of rehabilitation settings.
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
Inferring microbial interaction networks from metagenomic data using SgLV-EKF algorithm.
Alshawaqfeh, Mustafa; Serpedin, Erchin; Younes, Ahmad Bani
2017-03-27
Inferring microbial interaction networks (MINs) and modeling their dynamics are critical for understanding the mechanisms of the bacterial ecosystem and for designing antibiotic and/or probiotic therapies. Recently, several approaches were proposed to infer MINs using the generalized Lotka-Volterra (gLV) model. The main drawback of these models is that they consider only measurement noise, without taking into account uncertainties in the underlying dynamics. Furthermore, MIN inference is hampered by the limited number of observations and by the nonlinearity of the regulatory mechanisms. Therefore, novel estimation techniques are needed to address these challenges. This work proposes SgLV-EKF: a stochastic gLV model that adopts the extended Kalman filter (EKF) algorithm to model the MIN dynamics. In particular, SgLV-EKF employs a stochastic model of the MIN by adding a noise term to the dynamical model to compensate for modeling uncertainties. This stochastic modeling is more realistic than the conventional gLV model, which assumes that the MIN dynamics are perfectly governed by the gLV equations. After specifying the stochastic model structure, we propose the EKF to estimate the MIN. SgLV-EKF was compared with two similarity-based algorithms, one algorithm from the integral-based family, and two regression-based algorithms in terms of the performance achieved on two synthetic data sets and two real data sets. The first synthetic data set models randomness in the measurement data, whereas the second incorporates uncertainties in the underlying dynamics. The real data sets are provided by a recent study pertaining to an antibiotic-mediated Clostridium difficile infection. The experimental results demonstrate that SgLV-EKF outperforms the alternative methods in terms of robustness to measurement noise, robustness to modeling errors, and tracking of the MIN dynamics. Performance analysis demonstrates that the proposed SgLV-EKF algorithm represents a powerful and reliable tool to infer MINs and track their dynamics.
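To make the filtering step concrete, the following is a minimal sketch of one EKF prediction-update cycle for a stochastic gLV system. The Euler discretisation, the identity observation model, and the noise covariances Q and R are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def glv_step(x, mu, A, dt):
        # Euler-discretised generalized Lotka-Volterra dynamics:
        # x_i(t+dt) = x_i + dt * x_i * (mu_i + sum_j A_ij x_j)
        return x + dt * x * (mu + A @ x)

    def glv_jacobian(x, mu, A, dt):
        # Jacobian of the discretised dynamics with respect to the state x
        return np.eye(len(x)) + dt * (np.diag(mu + A @ x) + np.diag(x) @ A)

    def ekf_step(x_est, P, z, mu, A, dt, Q, R):
        # Predict: propagate state and covariance; Q absorbs modeling uncertainty
        x_pred = glv_step(x_est, mu, A, dt)
        F = glv_jacobian(x_est, mu, A, dt)
        P_pred = F @ P @ F.T + Q
        # Update with observed abundances z (identity observation model assumed)
        H = np.eye(len(x_est))
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
        return x_new, P_new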
Twinn, Sheila; Thompson, David R; Lopez, Violeta; Lee, Diana T F; Shiu, Ann T Y
2005-01-01
Different factors have been shown to influence the development of models of advanced nursing practice (ANP) in primary-care settings. Although ANP is being developed in hospitals in Hong Kong, China, it remains undeveloped in primary care, and little is known about the factors determining the development of such a model. The aims of the present study were to investigate the contribution of different models of nursing practice to the care provided in primary-care settings in Hong Kong, and to examine the determinants influencing the development of a model of ANP in such settings. A multiple case study design was selected using both qualitative and quantitative methods of data collection. Sampling methods reflected the population groups and stage of the case study. Sampling included a total population of 41 nurses from whom a secondary volunteer sample was drawn for face-to-face interviews. In each case study, a convenience sample of 70 patients was recruited, from whom 10 were selected purposively for a semi-structured telephone interview. An opportunistic sample of healthcare professionals was also selected. The within-case and cross-case analysis demonstrated four major determinants influencing the development of ANP: (1) current models of nursing practice; (2) the use of skills mix; (3) the perceived contribution of ANP to patient care; and (4) patients' expectations of care. The level of autonomy of individual nurses was considered particularly important. These determinants were used to develop a model of ANP for a primary-care setting. In conclusion, although the findings highlight the complexity determining the development and implementation of ANP in primary care, the proposed model suggests that definitions of advanced practice are appropriate to a range of practice models and cultural settings. However, the findings highlight the importance of assessing the effectiveness of such models in terms of cost and long-term patient outcomes.
A unified tensor level set for image segmentation.
Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong
2010-06-01
This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a third-order tensor to comprehensively depict features of pixels, e.g., gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor data. The proposed model has four main advantages over the traditional representative method. First, by involving the Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, by considering local geometrical features, e.g., orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at the boundary location. Third, because the unified tensor representation describes the pixels more comprehensively, the model segments images more accurately and naturally. Fourth, based on the weighted distance definition, the model can cope with data varying from scalar to vector to high-order tensor. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set method.
BUMPER: the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction
NASA Astrophysics Data System (ADS)
Holden, Phil; Birks, John; Brooks, Steve; Bush, Mark; Hwang, Grace; Matthews-Bird, Frazer; Valencia, Bryan; van Woesik, Robert
2017-04-01
We describe the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. The principal motivation for a Bayesian approach is that the palaeoenvironment is treated probabilistically, and can be updated as additional data become available. Bayesian approaches therefore provide a reconstruction-specific quantification of the uncertainty in the data and in the model parameters. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring 2 seconds to build a 100-taxon model from a 100-site training-set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training-sets under ideal assumptions. We then use these to demonstrate both the general applicability of the model and the sensitivity of reconstructions to the characteristics of the training-set, considering assemblage richness, taxon tolerances, and the number of training sites. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. In all of these applications an identically configured model is used, the only change being the input files that provide the training-set environment and taxon-count data.
Integrated performance and reliability specification for digital avionics systems
NASA Technical Reports Server (NTRS)
Brehm, Eric W.; Goettge, Robert T.
1995-01-01
This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.
NASA Astrophysics Data System (ADS)
Dillon, Chris
Building upon remote sensing and GIS littoral zone characterization methodologies of the past decade, a series of loosely coupled models was developed to test, compare and synthesize multi-beam SONAR (MBES), Airborne LiDAR Bathymetry (ALB), and satellite-based optical data sets in the Gulf of St. Lawrence, Canada, eco-region. Bathymetry and relative intensity metrics for the MBES and ALB data sets were run through a quantitative and qualitative comparison, which included outputs from the Benthic Terrain Modeller (BTM) tool. Substrate classification based on the relative intensities of the respective data sets, and textural indices generated using grey level co-occurrence matrices (GLCM), were investigated. A spatial modelling framework built in ArcGIS for the derivation of bathymetric data sets from optical satellite imagery was also tested for proof of concept and validation. Where possible, efficiencies and semi-automation for repeatable testing were achieved using ArcGIS ModelBuilder. The findings from this study could assist future decision makers in the fields of coastal management and hydrographic studies. Keywords: Seafloor terrain characterization, Benthic Terrain Modeller (BTM), Multi-beam SONAR, Airborne LiDAR Bathymetry, Satellite Derived Bathymetry, ArcGIS ModelBuilder, Textural analysis, Substrate classification.
Jones, Andrew S; Taktak, Azzam G F; Helliwell, Timothy R; Fenton, John E; Birchall, Martin A; Husband, David J; Fisher, Anthony C
2006-06-01
The accepted method of modelling and predicting failure/survival, Cox's proportional hazards model, is theoretically inferior to neural-network-derived models for analysing highly complex systems with large datasets. We performed a blinded comparison of a neural network versus Cox's model in predicting survival, utilising data from 873 treated patients with laryngeal cancer. These were divided randomly and equally into a training set and a study set, and Cox's and neural network models were applied in turn. The data were then divided into seven sets of binary covariates and the analysis repeated. Overall survival was not significantly different on the Kaplan-Meier plot, or with either test model. Although the network produced qualitatively similar results to Cox's model, it was significantly more sensitive to differences in survival curves for age and N stage. We propose that neural networks are capable of prediction in systems involving complex interactions between variables and non-linearity.
Chan, Lai Gwen; Carvalhal, Adriana
2015-01-01
To describe a model of HIV psychiatry used in an urban hospital in Toronto and examine it against current literature. Using a narrative method, we elaborate on how this model delivers care across many different settings and the integral roles that the HIV psychiatrist plays in each of these settings. This is articulated against a backdrop of existing literature regarding models of HIV care. This model is an example of an integrated model as opposed to a traditional consultation-liaison model and is able to deliver seamless care while remaining focused on patient-centric care. An HIV psychiatrist delivers seamless and patient-centric care by journeying with patients across the healthcare spectrum and playing different roles in different care settings. Copyright © 2015 Elsevier Inc. All rights reserved.
Polar versus Cartesian velocity models for maneuvering target tracking with IMM
NASA Astrophysics Data System (ADS)
Laneuville, Dann
This paper compares various model sets in different IMM filters for the maneuvering target tracking problem. The aim is to see whether we can improve the tracking performance of what is certainly the most widely used model set in the literature for the maneuvering target tracking problem: a Nearly Constant Velocity model and a Nearly Coordinated Turn model. Our new challenger set consists of a mixed Cartesian position and polar velocity state vector to describe the uniform motion segments, and is augmented with the turn rate to obtain the second model for the maneuvering segments. This paper also gives a general procedure to discretize, up to second order, any non-linear continuous-time model with linear diffusion. Comparative simulations on an air defence scenario with a 2D radar show that the new approach significantly improves the tracking performance in this case.
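For illustration, a minimal sketch of the two competing state parameterisations is given below: a Cartesian Nearly Constant Velocity transition matrix, and a mixed Cartesian position / polar velocity state augmented with a turn rate. The first-order Euler propagation and the state ordering are simplifying assumptions made here; the paper itself derives a second-order discretization.

    import numpy as np

    def ncv_transition(dt):
        # Cartesian Nearly Constant Velocity model, state = [x, y, vx, vy]
        return np.array([[1.0, 0.0, dt, 0.0],
                         [0.0, 1.0, 0.0, dt],
                         [0.0, 0.0, 1.0, 0.0],
                         [0.0, 0.0, 0.0, 1.0]])

    def polar_velocity_step(state, dt):
        # Mixed state = [x, y, speed, heading, turn_rate]; the turn rate augments
        # the uniform-motion model to describe maneuvering segments.
        x, y, s, h, w = state
        return np.array([x + dt * s * np.cos(h),
                         y + dt * s * np.sin(h),
                         s,
                         h + dt * w,
                         w])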
Synthesis of geophysical data with space-acquired imagery: a review
Hastings, David A.
1983-01-01
Statistical correlation has been used to determine the applicability of specific data sets to the development of geologic or exploration models. Various arithmetic functions have proven useful in developing models from such data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Y; Yu, J; Yeung, V
Purpose: Artificial neural networks (ANN) can be used to discover complex relations within datasets to help with medical decision making. This study aimed to develop an ANN method to predict two-year overall survival of patients with peri-ampullary cancer (PAC) following resection. Methods: Data were collected from 334 patients with PAC following resection treated in our institutional pancreatic tumor registry between 2006 and 2012. The dataset contains 14 variables including age, gender, T-stage, tumor differentiation, positive-lymph-node ratio, positive resection margins, chemotherapy, radiation therapy, and tumor histology. After censoring for two-year survival analysis, 309 patients were left, of which 44 patients (∼15%) were randomly selected to form the testing set. The remaining 265 cases were randomly divided 20 times into a training set (211 cases, ∼80% of 265) and a validation set (54 cases, ∼20% of 265) to build 20 ANN models. Each ANN has one hidden layer with 5 units. The 20 ANN models were ranked according to their concordance index (c-index) of prediction on the validation sets. To further improve prediction, the top 10% of ANN models were selected and their outputs averaged for prediction on the testing set. Results: By random division, the 44 cases in the testing set and the remaining 265 cases have approximately equal two-year survival rates, 36.4% and 35.5% respectively. The 20 ANN models, which were trained and validated on the 265 cases, yielded mean c-indices of 0.59 and 0.63 on the validation sets and the testing set, respectively. The c-index was 0.72 when the two best ANN models (top 10%) were used for prediction on the testing set. The c-index of Cox regression analysis was 0.63. Conclusion: ANN improved survival prediction for patients with PAC. More patient data and further analysis of additional factors may be needed for a more robust model, which will help guide physicians in providing optimal post-operative care. This project was supported by a PA CURE Grant.
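A small sketch of the ensemble scheme described above (repeated random splits, ranking by validation c-index, averaging the top models) might look as follows; for a binary two-year endpoint the c-index coincides with the ROC AUC. The network and split settings other than the single 5-unit hidden layer are assumptions.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    def train_ann_ensemble(X_dev, y_dev, n_models=20, top_frac=0.1):
        scored = []
        for i in range(n_models):
            # Random ~80/20 split of the development cases into training and validation
            X_tr, X_va, y_tr, y_va = train_test_split(
                X_dev, y_dev, test_size=0.2, random_state=i)
            net = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                                random_state=i).fit(X_tr, y_tr)
            # Rank each network by its validation c-index (= ROC AUC for a binary endpoint)
            scored.append((roc_auc_score(y_va, net.predict_proba(X_va)[:, 1]), net))
        scored.sort(key=lambda t: t[0], reverse=True)
        return [net for _, net in scored[:max(1, int(top_frac * n_models))]]

    def predict_ensemble(top_nets, X_test):
        # Average the predicted survival probabilities of the best-ranked networks
        return np.mean([net.predict_proba(X_test)[:, 1] for net in top_nets], axis=0)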
NASA Astrophysics Data System (ADS)
Kappas, M.; Propastin, P.; Degener, J.; Renchin, T.
2014-12-01
Long-term global data sets of Leaf Area Index (LAI) are important for monitoring global vegetation dynamics. LAI, which indicates the phenological development of vegetation, is an important state variable for modeling land surface processes. The comparison of long-term data sets is based on two recently available data sets, both derived from AVHRR time series. The LAI 3g data set introduced by Zhu et al. (2013) is developed from the new improved third-generation Global Inventory Modeling and Mapping Studies (GIMMS) Normalized Difference Vegetation Index (NDVI3g) and best-quality MODIS LAI data. The second long-term data set is based on the 8 km spatial resolution GIMMS-AVHRR data (GGRS data set by Propastin et al. 2012). The GGRS-LAI product uses a three-dimensional physical radiative transfer model which establishes a relationship between LAI, vegetation fractional cover and given patterns of surface reflectance, view-illumination conditions and optical properties of vegetation. The model incorporates a number of site/region-specific parameters, including vegetation architecture variables such as leaf angle distribution, clumping index, and light extinction coefficient. For the application of the model to Kazakhstan, the vegetation architecture variables were computed at the local (pixel) level based on extensive field surveys of the biophysical properties of vegetation in representative grassland areas of Kazakhstan. The comparison of both long-term data sets will be used to interpret their quality for scientific research in other disciplines. References: Propastin, P., Kappas, M. (2012). Retrieval of coarse-resolution leaf area index over the Republic of Kazakhstan using NOAA AVHRR satellite data and ground measurements. Remote Sensing, vol. 4, no. 1, pp. 220-246. Zhu, Z., Bi, J., Pan, Y., Ganguly, S., Anav, A., Xu, L., Samanta, A., Piao, S., Nemani, R. R., Myneni, R. B. (2013). Global Data Sets of Vegetation Leaf Area Index (LAI)3g and Fraction of Photosynthetically Active Radiation (FPAR)3g Derived from Global Inventory Modeling and Mapping Studies (GIMMS) Normalized Difference Vegetation Index (NDVI3g) for the Period 1981 to 2011. Remote Sensing, vol. 5, pp. 927-948; doi:10.3390/rs5020927.
Analysis of Co-Tunneling Current in Fullerene Single-Electron Transistor
NASA Astrophysics Data System (ADS)
KhademHosseini, Vahideh; Dideban, Daryoosh; Ahmadi, MohammadTaghi; Ismail, Razali
2018-05-01
Single-electron transistors (SETs) are nano devices which can be used in low-power electronic systems. They operate based on the Coulomb blockade effect. This phenomenon controls single-electron tunneling and switches the current in the SET. On the other hand, the co-tunneling process increases the leakage current, thereby reducing the main current and the reliability of the SET. Accounting for the co-tunneling phenomenon, the main characteristics of a fullerene SET with multiple islands are modelled in this research. Its performance is compared with a silicon SET, and the results show that the fullerene SET has lower leakage current and higher reliability than its silicon counterpart. Based on the presented model, a lower co-tunneling current is achieved by selecting fullerene as the SET island material, which leads to a smaller leakage current. Moreover, the island length and the number of islands affect co-tunneling and thereby tune the current flow in the SET.
Use of fuzzy sets in modeling of GIS objects
NASA Astrophysics Data System (ADS)
Mironova, Yu N.
2018-05-01
The paper discusses modeling and methods of data visualization in geographic information systems. Information processing in geoinformatics is based on the use of models; therefore, geoinformation modeling is a key link in the chain of geodata processing. Solving problems with geographic information systems often requires the representation of approximate or insufficiently reliable information about map features in the GIS database. Heterogeneous data of different origin and accuracy have some degree of uncertainty. In addition, not all information is accurate: already during the initial measurements, poorly defined terms and attributes (e.g., "soil, well-drained") are used. Therefore, methods are needed for working with uncertain requirements, classes and boundaries. The author proposes using fuzzy sets for spatial information. In terms of its characteristic function, a fuzzy set is a natural generalization of an ordinary set, obtained when one rejects the binary nature of this function and assumes that it can take any value in the interval [0, 1].
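As a minimal illustration of the idea, the sketch below assigns a graded membership to a loosely defined attribute such as "well-drained soil" using a trapezoidal membership function; the attribute, the 0-10 drainage index and the breakpoints are hypothetical.

    import numpy as np

    def trapezoidal_membership(x, a, b, c, d):
        # Membership rises linearly from 0 at a to 1 at b, stays 1 until c,
        # and falls back to 0 at d (a common fuzzy-set shape for vague classes).
        x = np.asarray(x, dtype=float)
        rising = np.clip((x - a) / (b - a), 0.0, 1.0)
        falling = np.clip((d - x) / (d - c), 0.0, 1.0)
        return np.minimum(rising, falling)

    # Hypothetical drainage index on a 0-10 scale: fully "well-drained" between 6 and 8
    print(trapezoidal_membership([2.0, 5.0, 7.0, 9.5], a=4, b=6, c=8, d=10))
    # membership degrees: 0, 0.5, 1, 0.25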
Modeling individualized coefficient alpha to measure quality of test score data.
Liu, Molei; Hu, Ming; Zhou, Xiao-Hua
2018-05-23
Individualized coefficient alpha is defined. It is item- and subject-specific and is used to measure the quality of test score data with heterogeneity among subjects and items. A regression model is developed based on three sets of generalized estimating equations. The first set of generalized estimating equations models the expectation of the responses, the second set models the variance of the responses, and the third set is proposed to estimate the individualized coefficient alpha, which is defined and used to measure the individualized internal consistency of the responses. We also use different techniques to extend our method to handle missing data. The asymptotic properties of the estimators are discussed, based on which inference on the coefficient alpha is derived. The performance of our method is evaluated through a simulation study and real data analysis. The real data application is from a health literacy study in Hunan province of China. Copyright © 2018 John Wiley & Sons, Ltd.
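For orientation, the conventional (non-individualized) coefficient alpha that the paper generalises can be computed as in the sketch below; the subjects-by-items layout is an assumption.

    import numpy as np

    def cronbach_alpha(scores):
        # scores: array of shape (n_subjects, n_items)
        # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_var_sum = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var_sum / total_var)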
Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.
Harrington, Peter de Boves
2018-01-02
Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is often neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used once for validation. It was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
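A rough sketch of the procedure is given below, assuming the Latin partitions can be approximated by stratified splits that preserve class proportions; the number of partitions and bootstrap replications, the normal-approximation confidence interval, and the sklearn-style model interface are all assumptions rather than the author's exact protocol.

    import numpy as np
    from sklearn.base import clone
    from sklearn.model_selection import StratifiedKFold

    def bootstrapped_latin_partitions(model, X, y, score_fn, n_boot=10, n_parts=4):
        foms = []
        for b in range(n_boot):
            # Each replication re-draws the partitions; every object is used
            # exactly once for validation within a replication.
            skf = StratifiedKFold(n_splits=n_parts, shuffle=True, random_state=b)
            fold_scores = [score_fn(y[va], clone(model).fit(X[tr], y[tr]).predict(X[va]))
                           for tr, va in skf.split(X, y)]
            foms.append(np.mean(fold_scores))
        foms = np.asarray(foms)
        # Average figure of merit with an approximate 95% confidence interval
        half = 1.96 * foms.std(ddof=1) / np.sqrt(n_boot)
        return foms.mean(), (foms.mean() - half, foms.mean() + half)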
A modeling analysis program for the JPL table mountain Io sodium cloud
NASA Technical Reports Server (NTRS)
Smyth, W. H.; Goldberg, B. A.
1985-01-01
Progress and achievements in the first year are discussed in three main areas: (1) review and assessment of the massive JPL Table Mountain Io sodium cloud data set, (2) formulation and execution of a plan to perform further processing of this data set, and (3) initiation of modeling activities. The complete 1976-79 and 1981 data sets are reviewed. Particular emphasis is placed on the superior 1981 Region B/C images which provide a rich base of information for studying the structure and escape of gases from Io as well as possible east-west and magnetic longitudinal asymmetries in the plasma torus. A data processing plan is developed and is undertaken by the Multimission Image Processing Laboratory of JPL for the purpose of providing a more refined and complete data set for our modeling studies in the second year. Modeling priorities are formulated and initial progress in achieving these goals is reported.
Development of a new model to engage patients and clinicians in setting research priorities.
Pollock, Alex; St George, Bridget; Fenton, Mark; Crowe, Sally; Firkins, Lester
2014-01-01
Equitable involvement of patients and clinicians in setting research and funding priorities is ethically desirable and can improve the quality, relevance and implementation of research. Survey methods used in previous priority setting projects to gather treatment uncertainties may not be sufficient to facilitate responses from patients and their lay carers for some health care topics. We aimed to develop a new model to engage patients and clinicians in setting research priorities relating to life after stroke, and to explore the use of this model within a James Lind Alliance (JLA) priority setting project. We developed a model to facilitate involvement through targeted engagement and assisted involvement (FREE TEA model). We implemented both standard surveys and the FREE TEA model to gather research priorities (treatment uncertainties) from people affected by stroke living in Scotland. We explored and compared the number of treatment uncertainties elicited from the different groups by the two approaches. We gathered 516 treatment uncertainties from stroke survivors, carers and health professionals. We achieved approximately equal numbers of contributions: 281 (54%) from stroke survivors/carers and 235 (46%) from health professionals. For stroke survivors and carers, 98 (35%) treatment uncertainties were elicited from the standard survey and 183 (65%) at FREE TEA face-to-face visits. This contrasted with the health professionals, for whom 198 (84%) were elicited from the standard survey and only 37 (16%) from FREE TEA visits. The FREE TEA model has implications for future priority setting projects and user involvement relating to populations of people with complex health needs. Our results imply that reliance on standard surveys may result in poor and unrepresentative involvement of patients, thereby favouring the views of health professionals.
Forecast and analysis of the cosmological redshift drift.
Lazkoz, Ruth; Leanizbarrutia, Iker; Salzano, Vincenzo
2018-01-01
The cosmological redshift drift could lead to the next step in high-precision cosmic geometric observations, becoming a direct and irrefutable test of cosmic acceleration. In order to test the viability and possible properties of this effect, also called the Sandage-Loeb (SL) test, we generate a model-independent mock data set and compare its constraining power with that of future mock data sets of Type Ia Supernovae (SNe) and Baryon Acoustic Oscillations (BAO). The performance of these data sets is analyzed by testing several cosmological models with the Markov chain Monte Carlo (MCMC) method, both independently and combining all data sets. The final results show that, in general, SL data sets allow for remarkable constraints on the present-day matter density parameter Ωm in every tested model, showing also a great complementarity with SNe and BAO data regarding dark energy parameters.
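As a reference for the size of the effect, the sketch below evaluates the SL signal for a flat ΛCDM background, where the drift is ż = (1+z)H0 − H(z) and is usually quoted as a spectroscopic velocity shift accumulated over an observing baseline; the H0 and Ωm values are illustrative assumptions, not the paper's fiducial cosmology.

    import numpy as np

    C_KM_S = 299792.458                      # speed of light [km/s]
    KM_PER_MPC = 3.0856775814913673e19       # kilometres per megaparsec

    def hubble_lcdm(z, H0=70.0, Om=0.3):
        # Flat LCDM expansion rate H(z) in km/s/Mpc
        return H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

    def sl_velocity_shift(z, baseline_years, H0=70.0, Om=0.3):
        # Redshift drift zdot = (1+z) H0 - H(z), converted to a velocity shift
        # Delta v = c * Delta z / (1+z) accumulated over the observing baseline.
        dt_s = baseline_years * 365.25 * 24 * 3600
        zdot_per_s = ((1.0 + z) * H0 - hubble_lcdm(z, H0, Om)) / KM_PER_MPC
        return C_KM_S * zdot_per_s * dt_s / (1.0 + z)   # [km/s]

    # Velocity shift in cm/s over a 20-year baseline at z = 2
    print(sl_velocity_shift(2.0, 20.0) * 1e5, "cm/s")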
Boreland, B; Clement, G; Kunze, H
2015-08-01
After reviewing set selection and memory model dynamical system neural networks, we introduce a neural network model that combines set selection with partial memories (stored memories on subsets of states in the network). We establish that feasible equilibria with all states equal to ± 1 correspond to answers to a particular set theoretic problem. We show that KenKen puzzles can be formulated as a particular case of this set theoretic problem and use the neural network model to solve them; in addition, we use a similar approach to solve Sudoku. We illustrate the approach in examples. As a heuristic experiment, we use online or print resources to identify the difficulty of the puzzles and compare these difficulties to the number of iterations used by the appropriate neural network solver, finding a strong relationship. Copyright © 2015 Elsevier Ltd. All rights reserved.
Good, Andrew C; Hermsmeier, Mark A
2007-01-01
Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts are often wrought to the detriment of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation with test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses
Zhang, Chao; Li, Deyu; Yan, Yan
2015-01-01
In medical science, disease diagnosis is one of the difficult tasks for medical experts, who are confronted with challenges in dealing with a lot of uncertain medical information. Moreover, different medical experts might express their own views about the medical knowledge base, which may differ slightly from those of other experts. Thus, to solve the problems of uncertain data analysis and group decision making in disease diagnoses, we propose a new rough set model, called the dual hesitant fuzzy multigranulation rough set over two universes, by combining the dual hesitant fuzzy set and multigranulation rough set theories. In the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach which is applied to a decision-making problem in disease diagnoses, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772
Bidirectional Active Learning: A Two-Way Exploration Into Unlabeled and Labeled Data Set.
Zhang, Xiao-Yu; Wang, Shupeng; Yun, Xiaochun
2015-12-01
In practical machine learning applications, human instruction is indispensable for model construction. To utilize the precious labeling effort effectively, active learning queries the user with selective sampling in an interactive way. Traditional active learning techniques merely focus on the unlabeled data set under a unidirectional exploration framework and suffer from model deterioration in the presence of noise. To address this problem, this paper proposes a novel bidirectional active learning algorithm that explores into both unlabeled and labeled data sets simultaneously in a two-way process. For the acquisition of new knowledge, forward learning queries the most informative instances from unlabeled data set. For the introspection of learned knowledge, backward learning detects the most suspiciously unreliable instances within the labeled data set. Under the two-way exploration framework, the generalization ability of the learning model can be greatly improved, which is demonstrated by the encouraging experimental results.
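The two query directions can be summarised with a simple margin-based sketch, given any probabilistic classifier; the margin criterion, the batch sizes, and the sklearn-style interface are assumptions, not the authors' exact selection functions.

    import numpy as np

    def forward_query(model, X_unlabeled, n_query=10):
        # Forward learning: pick the most informative unlabeled instances,
        # here those with the smallest margin between the top two class probabilities.
        proba = np.sort(model.predict_proba(X_unlabeled), axis=1)
        margin = proba[:, -1] - proba[:, -2]
        return np.argsort(margin)[:n_query]

    def backward_query(model, X_labeled, y_labeled, n_query=10):
        # Backward learning: flag labeled instances whose given label the current
        # model finds least plausible (suspiciously unreliable, possibly noisy).
        # Assumes integer labels 0..K-1 aligned with the predict_proba columns.
        proba = model.predict_proba(X_labeled)
        confidence_in_label = proba[np.arange(len(y_labeled)), y_labeled]
        return np.argsort(confidence_in_label)[:n_query]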
NASA Astrophysics Data System (ADS)
Ziemba, Alexander; El Serafy, Ghada
2016-04-01
Ecological modeling and water quality investigations are complex processes which can require a high level of parameterization and a multitude of varying data sets in order to properly execute the model in question. Since models are generally complex, their calibration and validation can benefit from the application of data and information fusion techniques. The data applied to ecological models come from a wide range of sources such as remote sensing, earth observation, and in-situ measurements, resulting in a high variability in the temporal and spatial resolution of the various data sets available to water quality investigators. It is proposed that effective fusion into a comprehensive single set will provide a more complete and robust data resource with which models can be calibrated, validated, and driven. Each individual product contains a unique valuation of error resulting from the method of measurement and the application of pre-processing techniques. The uncertainty and error are further compounded when the data being fused are of varying temporal and spatial resolution. In order to have a reliable fusion-based model and data set, the uncertainty of the results and the confidence interval of the data being reported must be effectively communicated to those who would utilize the data product or model outputs in a decision making process [2]. Here we review an array of data fusion techniques applied to various remote sensing, earth observation, and in-situ data sets whose domains are varied in spatial and temporal resolution. The data sets examined are combined in a manner such that the various classifications of data (complementary, redundant, and cooperative) are all assessed to determine the classification's impact on the propagation and compounding of error. In order to assess the error of the fused data products, a comparison is conducted with data sets containing a known confidence interval and quality rating. We conclude with a quantification of the performance of the data fusion techniques and a recommendation on the feasibility of applying the fused products in operational forecast systems and modeling scenarios. The error bands and confidence intervals derived can be used to clarify the error and confidence of water quality variables produced by prediction and forecasting models. References: [1] F. Castanedo, "A Review of Data Fusion Techniques", The Scientific World Journal, vol. 2013, pp. 1-19, 2013. [2] T. Keenan, M. Carbone, M. Reichstein and A. Richardson, "The model-data fusion pitfall: assuming certainty in an uncertain world", Oecologia, vol. 167, no. 3, pp. 587-597, 2011.
Fuzzy Energy and Reserve Co-optimization With High Penetration of Renewable Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Cong; Botterud, Audun; Zhou, Zhi
In this study, we propose a fuzzy-based energy and reserve co-optimization model with consideration of high penetration of renewable energy. Under the assumption of a fixed uncertainty set of renewables, a two-stage robust model is proposed for clearing energy and reserves in the first stage and checking the feasibility and robustness of re-dispatches in the second stage. Fuzzy sets and their membership functions are introduced into the optimization model to represent the satisfaction degree of the variable uncertainty sets. The lower bound of the uncertainty set is expressed as fuzzy membership functions. The solutions are obtained by transforming the fuzzy mathematical programming formulation into traditional mixed integer linear programming problems.
Design and Testing of Flight Control Laws on the RASCAL Research Helicopter
NASA Technical Reports Server (NTRS)
Frost, Chad R.; Hindson, William S.; Moralez, Ernesto, III; Tucker, George E.; Dryfoos, James B.
2001-01-01
Two unique sets of flight control laws were designed, tested and flown on the Army/NASA Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A Black Hawk helicopter. The first set of control laws used a simple rate feedback scheme, intended to facilitate the first flight and subsequent flight qualification of the RASCAL research flight control system. The second set of control laws comprised a more sophisticated model-following architecture. Both sets of flight control laws were developed and tested extensively using desktop-to-flight modeling, analysis, and simulation tools. Flight test data matched the model predicted responses well, providing both evidence and confidence that future flight control development for RASCAL will be efficient and accurate.
Analysis of brain patterns using temporal measures
Georgopoulos, Apostolos
2015-08-11
A set of brain data representing a time series of neurophysiologic activity acquired by spatially distributed sensors arranged to detect neural signaling of a brain (such as by the use of magnetoencephalography) is obtained. The set of brain data is processed to obtain a dynamic brain model based on a set of statistically independent temporal measures, such as partial cross-correlations, among groupings of different time series within the set of brain data. The dynamic brain model represents interactions between neural populations of the brain occurring close in time, for example with zero lag. The dynamic brain model can be analyzed to obtain the neurophysiologic assessment of the brain. Data processing techniques may be used to assess structural or neurochemical brain pathologies.
NASA Astrophysics Data System (ADS)
Sadi, Maryam
2018-01-01
In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic-liquid-based nanofluids by considering the reduced temperature, acentric factor and molecular weight of the ionic liquids, and the nanoparticle concentration as input parameters. To carry out the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to determine the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model have been evaluated by comparing the model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicates excellent agreement between the model predictions and the experimental data. Also, the results estimated by the developed GMDH model exhibit a higher accuracy when compared to the available theoretical correlations.
Assessing the accuracy and stability of variable selection ...
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substanti
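A compact sketch of a backwards-elimination loop for a random forest, tracked with the internal out-of-bag (OOB) score, might look as follows; the drop fraction, forest size, and stopping rule are assumptions, and, as the abstract argues, a full assessment would also wrap cross-validation around the whole selection process.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def rf_backward_elimination(X, y, names, drop_frac=0.2, min_vars=5, seed=0):
        keep = list(range(X.shape[1]))
        history = []
        while len(keep) >= min_vars:
            rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                        random_state=seed, n_jobs=-1)
            rf.fit(X[:, keep], y)
            history.append((list(keep), rf.oob_score_))
            # Drop the least important fraction of the remaining predictors
            order = np.argsort(rf.feature_importances_)      # ascending importance
            n_drop = max(1, int(drop_frac * len(keep)))
            keep = [keep[i] for i in order[n_drop:]]
        # Return the predictor subset with the best OOB accuracy seen so far
        best_vars, best_oob = max(history, key=lambda t: t[1])
        return [names[i] for i in best_vars], best_oob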
Tang, T Y; Prytherch, D R; Walsh, S R; Athanassoglou, V; Seppi, V; Sadat, U; Lees, T A; Varty, K; Boyle, J R
2009-01-01
VBHOM (Vascular Biochemistry and Haematology Outcome Models) adopts the approach of using a minimum data set to model outcome and has previously been shown to be feasible after index arterial operations. This study attempts to model mortality following lower limb amputation for critical limb ischaemia using the VBHOM concept. A binary logistic regression model of the risk of mortality was built using National Vascular Database (NVD) items that contained the complete data required by the model, from 269 admissions for lower limb amputation. The subset of NVD data items used comprised urea, creatinine, sodium, potassium, haemoglobin, white cell count, age on admission, and mode of admission. This model was applied prospectively to a test set of data (n=269) that was not part of the original training set used to develop the predictor equation. Outcome following lower limb amputation could be described accurately using the same model. The overall mean predicted risk of mortality was 32%, predicting 86 deaths. The actual number of deaths was 86 (chi(2)=8.05, 8 d.f., p=0.429; no evidence of lack of fit). The model demonstrated adequate discrimination (c-index=0.704). VBHOM provides a single unified model that allows good prediction of surgical mortality in this high-risk group of individuals. It uses a small, simple and objective clinical data set that may also simplify comparative audit within vascular surgery.
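The modelling step itself is a standard binary logistic regression on a small laboratory and admission data set; a hedged sketch is shown below with hypothetical column names, using the ROC AUC as the c-index for the binary outcome. This illustrates the approach only and does not reproduce the published VBHOM coefficients.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # Hypothetical ordering of the minimum data set columns
    FEATURES = ["urea", "creatinine", "sodium", "potassium",
                "haemoglobin", "white_cell_count", "age_on_admission", "admission_mode"]

    def fit_mortality_model(X_train, y_train):
        # Binary logistic regression of mortality on the minimum data set
        return LogisticRegression(max_iter=1000).fit(X_train, y_train)

    def evaluate(model, X_test, y_test):
        p = model.predict_proba(X_test)[:, 1]
        expected_deaths = p.sum()                  # compare with the observed deaths
        c_index = roc_auc_score(y_test, p)         # discrimination
        return expected_deaths, c_index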
A ricin forensic profiling approach based on a complex set of biomarkers.
Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister
2018-08-15
A forensic method for the retrospective determination of the preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. Using a decision tree and two OPLS-DA models, the sample preparation methods of the test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions on the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
Emergency residential care settings: A model for service assessment and design.
Graça, João; Calheiros, Maria Manuela; Patrício, Joana Nunes; Magalhães, Eunice Vieira
2018-02-01
There have been calls for uncovering the "black box" of residential care services, with a particular need for research focusing on emergency care settings for children and youth in danger. In fact, the strikingly scant empirical attention that these settings have received so far contrasts with the role that they often play as gateway into the child welfare system. To answer these calls, this work presents and tests a framework for assessing a service model in residential emergency care. It comprises seven studies which address a set of different focal areas (e.g., service logic model; care experiences), informants (e.g., case records; staff; children/youth), and service components (e.g., case assessment/evaluation; intervention; placement/referral). Drawing on this process-consultation approach, the work proposes a set of key challenges for emergency residential care in terms of service improvement and development, and calls for further research targeting more care units and different types of residential care services. These findings offer a contribution to inform evidence-based practice and policy in service models of residential care. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors decide on treatment from the perspective of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture oriented to clinical application. The whole system encompassed the following three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of the bone, parametric modeling of the fracture face, parametric modeling of the fixation screw and fixation position, and input and transmission of model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting and batch processing operation. The post-processing module included extraction and display of batch processing results, image generation for batch processing, running of the optimization program and display of the optimization results. The system implemented the whole workflow, from input of the fracture parameters to output of the optimal fixation plan, according to the specific patient's real fracture parameters and the optimization rules, which demonstrates the effectiveness of the system. Meanwhile, the system has a friendly interface and simple operation, and its functionality can be improved quickly by modifying individual modules.
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, saving energy, and reducing emissions in its operation processes. In this correspondence, an optimal operation scheme for the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at the set-point tracking objective for pulp quality, the economic objective, and the specific energy (SE) consumption objective, respectively. First, sets of input and output data at different times are employed to construct the subprocess models of the state process model for the HC refining system, and a Wiener-type model is then obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structures are determined based on the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes the set-point tracking objective for pulp quality and the SE consumption is proposed, which uses the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective for pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking of pulp quality when these predictive controllers are employed. In addition, when the optimal predictive controllers are oriented towards the comprehensive economic objective and the SE consumption objective, they significantly reduce the energy consumption.
Mixed models approaches for joint modeling of different types of responses.
Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert
2016-01-01
In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received a lot of attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association, with further correction for overdispersion, can improve the model's fit considerably, and that the resulting models allow research questions to be answered that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.
Lee, Seung Yup; Skolnick, Jeffrey
2007-07-01
To improve the accuracy of TASSER models, especially in the limit where threading-provided template alignments are of poor quality, we have developed the TASSER(iter) algorithm, which uses the templates and contact restraints from TASSER-generated models for iterative structure refinement. We apply TASSER(iter) to a large benchmark set of 2,773 nonhomologous single domain proteins that are ≤ 200 residues in length and that cover the PDB at the level of 35% pairwise sequence identity. Overall, TASSER(iter) models have a smaller global average RMSD of 5.48 Å compared to the 5.81 Å RMSD of the original TASSER models. Classifying the targets by the level of prediction difficulty (where Easy targets have a good template with a corresponding good threading alignment, Medium targets have a good template but a poor alignment, and Hard targets have an incorrectly identified template), TASSER(iter) (TASSER) models have an average RMSD of 4.15 Å (4.35 Å) for the Easy set and 9.05 Å (9.52 Å) for the Hard set. The largest reduction of average RMSD is for the Medium set, where the TASSER(iter) models have an average global RMSD of 5.67 Å compared to 6.72 Å for the TASSER models. Seventy percent of the Medium set TASSER(iter) models have a smaller RMSD than the TASSER models, while 63% of the Easy and 60% of the Hard TASSER models are improved by TASSER(iter). For the foldable cases, where the targets have an RMSD to the native structure <6.5 Å, TASSER(iter) shows obvious improvement over the TASSER models: for the Medium set, it improves the success rate from 57.0 to 67.2%, followed by the Hard targets, where the success rate improves from 32.0 to 34.8%, with the smallest improvement for the Easy targets, from 82.6 to 84.0%. These results suggest that TASSER(iter) can provide more reliable predictions for targets of Medium difficulty, a range that had resisted improvement in the quality of protein structure predictions. 2007 Wiley-Liss, Inc.
Data driven model generation based on computational intelligence
NASA Astrophysics Data System (ADS)
Gemmar, Peter; Gronz, Oliver; Faust, Christophe; Casper, Markus
2010-05-01
The simulation of discharges at a local gauge and the modeling of large-scale river catchments are central to estimation and decision tasks in hydrological research and in practical applications such as flood prediction or water resource management. However, modeling such processes with analytical or conceptual approaches is made difficult by both the complexity of the process relations and the heterogeneity of the processes. It has been shown many times that unknown or assumed process relations can in principle be described by computational methods, and that system models can be derived automatically from observed behavior or measured process data. This study describes the development of hydrological process models using computational methods, including fuzzy logic and artificial neural networks (ANN), in a comprehensive and automated manner. Methods: We consider a closed concept for the data-driven development of hydrological models based on measured (experimental) data. The concept is centered on a fuzzy system using rules of Takagi-Sugeno-Kang type, which formulate the input-output relation in a generic structure such as R_i: IF q(t) = low AND ... THEN q(t+Δt) = a_i0 + a_i1 q(t) + a_i2 p(t-Δt_i1) + a_i3 p(t+Δt_i2) + ... . The rule's premise part (IF) describes process states using the available process information, e.g. the actual outlet q(t) is low, where low is one of several fuzzy sets defined over the variable q(t). The rule's conclusion part (THEN) estimates the expected outlet q(t+Δt) by a linear function over selected system variables, e.g. the actual outlet q(t) and previous and/or forecasted precipitation p(t ± Δt_ik). In the case of river catchment modeling we use head gauges, tributary and upriver gauges in the conclusion part as well. In addition, we consider temperature and temporal (season) information in the premise part. By creating a set of rules R = {R_i | i = 1,...,N}, the space of process states can be covered as concisely as necessary. Model adaptation is achieved by finding an optimal set A = (a_ij) of conclusion parameters with respect to a defined rating function and the experimental data. To find A, we use, for example, a linear equation solver and an RMSE function. In practical process models, the number of fuzzy sets and the corresponding number of rules is fairly low. Nevertheless, creating the optimal model requires some experience. Therefore, we improved this development step with methods for the automatic generation of fuzzy sets, rules, and conclusions. Basically, the model's achievement depends to a great extent on the selection of the conclusion variables. The aim is that variables having the most influence on the system reaction are considered and superfluous ones are neglected. First, we use Kohonen maps, a specialized ANN, to identify relevant input variables from the large set of available system variables. A greedy algorithm selects a comprehensive set of dominant and uncorrelated variables. Next, the premise variables are analyzed with clustering methods (e.g. fuzzy C-means) and fuzzy sets are then derived from the cluster centers and outlines. The rule base is automatically constructed by permutation of the fuzzy sets of the premise variables. Finally, the conclusion parameters are calculated and the total coverage of the input space is iteratively tested with experimental data; rarely firing rules are combined, and coarse coverage of sensitive process states results in refined fuzzy sets and rules. Results: The described methods were implemented and integrated in a development system for process models.
A series of models has already been built, e.g. for rainfall-runoff modeling or for flood prediction (up to 72 hours) in river catchments. These models required significantly less development effort and produced better simulation results than conventional models. They can be used operationally, and a simulation takes only a few minutes on a standard PC, e.g. for a gauge forecast (up to 72 hours) for the whole Mosel (Germany) river catchment.
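For illustration only, the following minimal sketch shows how a Takagi-Sugeno-Kang rule base of the kind described above can be evaluated: each rule's premise yields a degree of fulfilment, its conclusion is a linear function of selected variables, and the forecast is the weighted average of the rule conclusions. The membership functions, rule parameters, and variable values are invented for the example and are not taken from the study.

```python
# Minimal sketch of Takagi-Sugeno-Kang (TSK) rule evaluation, assuming
# triangular membership functions and two illustrative rules; variable
# names (q, p) follow the abstract, all parameter values are invented.
import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def tsk_forecast(q_t, p_lag, rules):
    """Weighted average of rule conclusions: q(t+dt) = sum(w_i * y_i) / sum(w_i)."""
    weights, outputs = [], []
    for rule in rules:
        w = rule["premise"](q_t)                                  # degree of fulfilment of the IF part
        y = rule["a0"] + rule["a1"] * q_t + rule["a2"] * p_lag    # linear THEN part
        weights.append(w)
        outputs.append(y)
    weights = np.asarray(weights)
    return float(np.dot(weights, outputs) / weights.sum())

# Two hypothetical rules: "q(t) is low" and "q(t) is high"
rules = [
    {"premise": lambda q: tri_mf(q, -1.0, 0.0, 5.0), "a0": 0.1, "a1": 0.9, "a2": 0.3},
    {"premise": lambda q: tri_mf(q, 2.0, 10.0, 20.0), "a0": 1.0, "a1": 0.7, "a2": 0.8},
]

print(tsk_forecast(q_t=3.0, p_lag=1.5, rules=rules))
```

In the automated workflow described above, the rule parameters would come from the linear-equation/RMSE fit and the membership functions from the clustering step rather than being set by hand as here.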
An algorithm for deriving core magnetic field models from the Swarm data set
NASA Astrophysics Data System (ADS)
Rother, Martin; Lesur, Vincent; Schachtschneider, Reyko
2013-11-01
In view of an optimal exploitation of the Swarm data set, we have prepared and tested software dedicated to the determination of accurate core magnetic field models and of the Euler angles between the magnetic sensors and the satellite reference frame. The dedicated core field model estimation is derived directly from the GFZ Reference Internal Magnetic Model (GRIMM) inversion and modeling family. The data selection techniques and the model parameterizations are similar to those used for the derivation of the second (Lesur et al., 2010) and third versions of GRIMM, although the use of observatory data is not planned in the framework of the application to Swarm. The regularization technique applied during the inversion process smooths the magnetic field model in time. The algorithm for estimating the Euler angles is also derived from the CHAMP studies. The inversion scheme includes Euler angle determination with a quaternion representation of the rotations and has been built to handle possible weak time variations of these angles. The modeling approach and software were initially validated on a simple, noise-free synthetic data set and on CHAMP vector magnetic field measurements. We present results of test runs applied to the synthetic Swarm test data set.
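As a hedged illustration of the frame-alignment step mentioned above, the sketch below rotates a magnetometer reading from the sensor frame into the spacecraft frame using a quaternion built from hypothetical Euler angles; it is not the GRIMM/Swarm processing code, and all numbers are invented.

```python
# Minimal sketch of the frame-alignment step described above, assuming a
# small constant misalignment between magnetometer and spacecraft frames.
# The angles and field vector are invented; this is not the GRIMM/Swarm code.
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical Euler angles (sensor -> spacecraft), in degrees
euler_deg = [0.05, -0.12, 0.30]
rot = Rotation.from_euler("xyz", euler_deg, degrees=True)

# Store as a quaternion (x, y, z, w), the representation mentioned in the abstract
quat = rot.as_quat()

# Rotate a magnetic field vector measured in the sensor frame (nT)
b_sensor = np.array([21000.0, -3500.0, 44000.0])
b_spacecraft = Rotation.from_quat(quat).apply(b_sensor)
print(b_spacecraft)
```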
Data Sets from Major NCI Initiatives
The NCI Data Catalog includes links to data collections produced by major NCI initiatives and other widely used data sets, including animal models, human tumor cell lines, epidemiology data sets, and genomics data sets from TCGA, TARGET, COSMIC, GSK, and NCI60.
Be a Healthy Role Model for Children: 10 Tips for Setting Good Examples
... model for children: 10 tips for setting good examples. You are the most important influence on your ... make mealtime a family time! 1. Show by example: eat vegetables, fruits, and whole grains with meals ...
NASA Astrophysics Data System (ADS)
Schachtschneider, R.; Rother, M.; Lesur, V.
2013-12-01
We introduce a method that enables us to account for existing correlations between Gauss coefficients in core field modelling. The information about the correlations is obtained from a highly accurate field model based on CHAMP data, e.g. the GRIMM-3 model. We compute the covariance matrices of the geomagnetic field, the secular variation, and the acceleration up to degree 18 and use them in the regularization scheme of the core field inversion. To test our method we followed two different approaches, applying it to two different synthetic satellite data sets. The first is a short data set with a time span of only three months; here we test how the information about correlations helps to obtain an accurate model when only very little information is available. The second data set is a large one covering several years; in this case, besides reducing the residuals in general, we focus on improving the model near the boundaries of the data set, where the acceleration is generally more difficult to handle. In both cases the obtained covariance matrices are included in the damping scheme of the regularization. That way, information from scales that could otherwise not be resolved by the data can be extracted. We show that this technique improves the models of the field and the secular variation for both the short and the long-term data sets compared with approaches using more conventional regularization techniques.
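The following sketch illustrates the generic idea of including a prior covariance matrix in the damping term of a regularized least-squares inversion, which is the mechanism the abstract describes; the design matrix, data, and covariance are random stand-ins, and the actual GRIMM-based scheme is considerably more elaborate.

```python
# Minimal sketch of damped least squares with a prior covariance matrix C
# in the regularization term, i.e. m = (G^T G + lam * C^{-1})^{-1} G^T d.
# G, d, and C are random stand-ins for the design matrix, the data, and the
# Gauss-coefficient covariance taken from a previous field model.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_coef = 200, 30

G = rng.normal(size=(n_data, n_coef))           # design matrix (data kernel)
m_true = rng.normal(size=n_coef)                # "true" Gauss coefficients
d = G @ m_true + 0.1 * rng.normal(size=n_data)  # noisy synthetic data

# Prior covariance of the coefficients (here: a random symmetric positive definite matrix)
A = rng.normal(size=(n_coef, n_coef))
C = A @ A.T + n_coef * np.eye(n_coef)

lam = 1.0  # damping parameter
lhs = G.T @ G + lam * np.linalg.inv(C)
rhs = G.T @ d
m_est = np.linalg.solve(lhs, rhs)

print(np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```

The covariance enters only through the damping term, so coefficient combinations that are poorly constrained by the data are pulled toward the correlation structure of the prior model.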
A modeling approach to compare ΣPCB concentrations between congener-specific analyses
Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.
2017-01-01
Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time.
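A minimal sketch of the conversion-model idea follows, assuming paired measurements of the same samples by the reduced and full congener methods: fit an ordinary least-squares line relating the two sums and use it to convert new reduced-method results. The numbers are synthetic, not the study's data.

```python
# Minimal sketch of the conversion-model idea: regress full-method sums
# (Sigma over 209 congeners) on reduced-method sums (e.g. 119 congeners)
# for paired samples, then convert new reduced-method results.
# All values are synthetic; they are not the data from the study.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 40

sum_119 = rng.lognormal(mean=3.0, sigma=0.8, size=n_samples)    # reduced congener set
sum_209 = 1.07 * sum_119 + rng.normal(scale=0.05 * sum_119)     # full congener set

# Ordinary least squares fit: sum_209 ~ b0 + b1 * sum_119
b1, b0 = np.polyfit(sum_119, sum_209, deg=1)

new_sum_119 = 25.0
predicted_sum_209 = b0 + b1 * new_sum_119
print(b0, b1, predicted_sum_209)
```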
Modeling Preferential Admissions at Elite Liberal Arts Colleges
ERIC Educational Resources Information Center
Cockburn, Sally; Hewitt, Gordon; Kelly, Timothy
2013-01-01
This paper presents the results of a model that simulates the effects of varying preferential admissions policies on the academic profile of a set of 35 small liberal arts colleges. An underlying assumption is that all schools in the set use the same ratio of preferential to non-preferential admissions. The model predicts that even drastic changes…
Gary Bentrup
2001-01-01
Collaborative planning processes have become increasingly popular for addressing environmental planning issues, resulting in a number of conceptual models for collaboration. A model proposed by Selin and Chavez suggests that collaboration emerges from a series of antecedents and then proceeds sequentially through problem-setting, direction-setting, implementation, and...
Addressing HIV in the School Setting: Application of a School Change Model
ERIC Educational Resources Information Center
Walsh, Audra St. John; Chenneville, Tiffany
2013-01-01
This paper describes best practices for responding to youth with human immunodeficiency virus (HIV) in the school setting through the application of a school change model designed by the World Health Organization. This model applies a whole school approach and includes four levels that span the continuum from universal prevention to direct…
Smoking and Cancers: Case-Robust Analysis of a Classic Data Set
ERIC Educational Resources Information Center
Bentler, Peter M.; Satorra, Albert; Yuan, Ke-Hai
2009-01-01
A typical structural equation model is intended to reproduce the means, variances, and correlations or covariances among a set of variables based on parameter estimates of a highly restricted model. It is not widely appreciated that the sample statistics being modeled can be quite sensitive to outliers and influential observations, leading to bias…
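To make the sensitivity concrete, the sketch below shows how a single influential observation shifts the sample covariance that a structural equation model would try to reproduce; the robust minimum covariance determinant estimate is included only for contrast and is not the case-robust method of the paper.

```python
# Minimal sketch illustrating the point above: one influential observation
# noticeably shifts the sample covariance. The robust MCD estimate is shown
# only for contrast; it is not the case-robust method of the paper.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(2)
X = rng.multivariate_normal(mean=[0, 0], cov=[[1.0, 0.5], [0.5, 1.0]], size=200)

X_outlier = np.vstack([X, [12.0, -10.0]])          # one influential observation

print(np.cov(X.T))                                 # covariance without the outlier
print(np.cov(X_outlier.T))                         # clearly shifted covariance
print(MinCovDet(random_state=0).fit(X_outlier).covariance_)  # robust (MCD) estimate
```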
Afshin Pourmokhtarian; Charles T. Driscoll; John L. Campbell; Katharine Hayhoe; Anne M. K. Stoner
2016-01-01
Assessments of future climate change impacts on ecosystems typically rely on multiple climate model projections, but often utilize only one downscaling approach trained on one set of observations. Here, we explore the extent to which modeled biogeochemical responses to changing climate are affected by the selection of the climate downscaling method and training...
Evaluation of the TBET model for potential improvement of southern P indices
USDA-ARS?s Scientific Manuscript database
Due to a shortage of available phosphorus (P) loss data sets, simulated data from a quantitative P transport model could be used to evaluate a P-index. However, the model would need to accurately predict the P loss data sets that are available. The objective of this study was to compare predictions ...
NASA Technical Reports Server (NTRS)
Merenyi, E.; Miller, J. S.; Singer, R. B.
1992-01-01
The linear mixing model approach has been applied successfully to data sets of various kinds in which the measured radiance can be assumed to be a linear combination of radiance contributions. The present work is an attempt to analyze a spectral image of Mars with linear mixing modeling.
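A minimal sketch of linear spectral mixing follows, assuming synthetic endmember spectra: the measured radiance is modeled as a non-negative combination of endmember spectra and the abundances are recovered with non-negative least squares. This illustrates the general technique, not the authors' Mars analysis.

```python
# Minimal sketch of linear spectral mixing: a measured radiance spectrum is
# modelled as a non-negative combination of endmember spectra and the
# abundances are recovered with non-negative least squares. The endmember
# spectra below are synthetic, not Mars data.
import numpy as np
from scipy.optimize import nnls

n_bands = 50
wavelengths = np.linspace(0.4, 2.5, n_bands)

# Two invented endmember spectra (e.g. "bright dust" and "dark rock")
endmember_1 = 0.3 + 0.2 * wavelengths
endmember_2 = 0.6 * np.exp(-wavelengths)
E = np.column_stack([endmember_1, endmember_2])    # n_bands x n_endmembers

true_abundances = np.array([0.7, 0.3])
measured = E @ true_abundances + 0.01 * np.random.default_rng(3).normal(size=n_bands)

abundances, residual = nnls(E, measured)
print(abundances, residual)
```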
Use of Total Possibilistic Uncertainty as a Measure of Students' Modelling Capacities
ERIC Educational Resources Information Center
Voskoglou, Michael Gr.
2010-01-01
We represent the main stages of the process of mathematical modelling as fuzzy sets in the set of the linguistic labels of negligible, low, intermediate, high, and complete success by students in each of these stages, and we use the total possibilistic uncertainty as a measure of students' modelling capacities. A classroom experiment is also…
Wang, Wenyi; Kim, Marlene T.; Sedykh, Alexander
2015-01-01
Purpose: Experimental Blood–Brain Barrier (BBB) permeability models for drug molecules are expensive and time-consuming. As alternative methods, several traditional Quantitative Structure-Activity Relationship (QSAR) models have been developed previously. In this study, we aimed to improve the predictivity of traditional QSAR BBB permeability models by employing relevant public bio-assay data in the modeling process.

Methods: We compiled a BBB permeability database consisting of 439 unique compounds from various resources. The database was split into a modeling set of 341 compounds and a validation set of 98 compounds. A consensus QSAR modeling workflow was employed on the modeling set to develop various QSAR models. A five-fold cross-validation approach was used to validate the developed models, and the resulting models were used to predict the external validation set compounds. Furthermore, we used previously published membrane transporter models to generate relevant transporter profiles for the target compounds. The transporter profiles were used as additional biological descriptors to develop hybrid QSAR BBB models.

Results: The consensus QSAR models have R2 = 0.638 for five-fold cross-validation and R2 = 0.504 for external validation. The consensus model developed by pooling chemical and transporter descriptors showed better predictivity (R2 = 0.646 for five-fold cross-validation and R2 = 0.526 for external validation). Moreover, several external bio-assays that correlate with BBB permeability were identified using our automatic profiling tool.

Conclusions: The BBB permeability models developed in this study can be useful for the early evaluation of new compounds (e.g., new drug candidates). The combination of chemical and biological descriptors shows a promising direction for improving current traditional QSAR models. PMID:25862462
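A hedged sketch of the hybrid-descriptor idea follows: chemical descriptors and transporter-profile descriptors are pooled column-wise, and a consensus prediction is formed by averaging cross-validated learners. The descriptor matrices, endpoint values, and choice of learners are stand-ins, not those of the study.

```python
# Minimal sketch of the hybrid-descriptor idea: chemical descriptors and
# transporter-profile descriptors are pooled column-wise and a consensus
# prediction is formed by averaging two cross-validated learners. All
# matrices are random stand-ins, not the study's descriptors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n_compounds = 341

chem_desc = rng.normal(size=(n_compounds, 50))        # chemical descriptors
transporter_desc = rng.normal(size=(n_compounds, 8))  # predicted transporter profiles
X = np.hstack([chem_desc, transporter_desc])          # pooled descriptor block
y = rng.normal(size=n_compounds)                      # BBB permeability endpoint

models = [RandomForestRegressor(n_estimators=200, random_state=0),
          KNeighborsRegressor(n_neighbors=5)]

# Five-fold cross-validated predictions from each model, then a simple consensus
preds = [cross_val_predict(m, X, y, cv=5) for m in models]
consensus = np.mean(preds, axis=0)
print(consensus[:5])
```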
Research on Turbofan Engine Model above Idle State Based on NARX Modeling Approach
NASA Astrophysics Data System (ADS)
Yu, Bing; Shu, Wenjun
2017-03-01
The nonlinear model of a turbofan engine above idle state based on NARX is studied. First, data sets for the JT9D engine are obtained via simulation of an existing model. Then, a nonlinear modeling scheme based on NARX is proposed and several models with different parameters are built from these data sets. Finally, simulations are performed to verify the accuracy and dynamic performance of the models; the results show that the NARX model reflects the dynamic characteristics of the turbofan engine well, with high accuracy.
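The sketch below illustrates a NARX-style model in the sense used above: the current output is regressed on lagged outputs and lagged inputs with a nonlinear learner. The "engine" data is a synthetic first-order lag response, not JT9D data, and the lag structure and network size are arbitrary choices for the example.

```python
# Minimal sketch of a NARX-style model: the current output is regressed on
# lagged outputs and lagged inputs with a nonlinear learner. The "engine"
# data below is a synthetic first-order lag response, not JT9D data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n = 2000
u = rng.uniform(0.2, 1.0, size=n)          # e.g. fuel flow command
y = np.zeros(n)                            # e.g. spool speed
for k in range(1, n):
    y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1] + 0.002 * rng.normal()

# Build lagged regressors: y(k) = f(y(k-1), y(k-2), u(k-1), u(k-2))
lags = 2
X = np.column_stack([y[lags - 1:-1], y[lags - 2:-2], u[lags - 1:-1], u[lags - 2:-2]])
target = y[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, target)
print(model.score(X, target))
```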